Tech Talk

Give Social Networking the Finger

Fingerprint authentication isn't just for security anymore. Authentec makes fingerprint sensors for enterprise computers, and until recently their main clients have been the military and companies that really need to keep their laptops secure.

In my last post about Authentec, I swooned about how they go the extra mile to protect you from finger-truncating impersonators and eyeball-gouging identity thieves. (The company doesn't simply use a picture of the top layer of the skin; it uses radio frequencies to measure the valleys and ridges of the fingerprint beneath the outer layer of skin, in the live layer. Because the sensor measures these RF fields within that live layer, a finger that has been separated from its owner can't set up the RF field when it contacts the sensor. Without the attached owner, there's no pattern and the finger is no good.)

Today, Authentec announced that they're putting those military-grade fingerprint sensors into netbooks. Nothing says top secret like a fluffy little netbook, right? It's the king of consumer-only applications, a cross between a lightweight laptop and a big-screen iPhone.

Here's where the fingerprint sensor goes to work for consumer netbooks. Instead of protecting your identity à la The Bourne Identity or Angels & Demons, in your netbook the sensors take on a completely different role: they put your fingerprints to work on more mundane tasks.

It's not just the one fingerprint that distinguishes you. The sensor easily differentiates among your ten fingerprints, and the accompanying software (called TrueSuite) lets you assign different fingers to different functions, such as opening your facebook, twitter, or email accounts. The program can even condense processes that would normally take multiple steps into the swipe of a single finger.

For example, say you want to log into your facebook account. Normally, you wake up your sleeping, locked laptop, type in your OS's password, open your browser, navigate to facebook, and type in your username and password.

With the fingerprint sensor, you skip four of those five steps. Instead, you swipe your designated finger; the software reads it and takes care of the rest. You decide how it responds to each of your ten fingers: open gmail, facebook, twitter, flickr, or picasa. All you have to remember is which job you gave which finger.
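Conceptually, the whole thing is just a lookup table from finger to action. Here's a minimal sketch of the idea in Python; the finger names, URLs, and launcher below are my own illustration of the concept, not TrueSuite's actual configuration or API.

```python
import webbrowser

# Purely illustrative finger-to-action table (not TrueSuite's real format).
FINGER_ACTIONS = {
    "right_index":  "https://www.facebook.com",
    "right_middle": "https://twitter.com",
    "left_index":   "https://mail.google.com",
    "left_middle":  "https://www.flickr.com",
}

def on_swipe(finger_id: str) -> None:
    """Pretend the sensor has just matched a swipe to finger_id."""
    url = FINGER_ACTIONS.get(finger_id)
    if url is None:
        print("Unassigned or unrecognized finger; doing nothing.")
        return
    # In the real product this step would also unlock the machine and
    # fill in your saved credentials; here we just open the site.
    webbrowser.open(url)

on_swipe("right_index")  # one swipe stands in for the five manual steps
```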


Authentec is also working on the LED lights that surround the sensor, which glow to give you notifications. Normally these would have limited use: swipe the wrong finger (or be the wrong person) and you get a blinking red light; do it right and you get a green one. But the Authentec people have devised a few new uses for these LEDs. You can set your own colors the same way you set the actions for your fingers.

Say you’re taking some time out of your busy schedule for an important episode of Walker: Texas Ranger. Your laptop has long since gone to sleep and locked itself. To find out if you have mail, you’d normally have to stand up (all the way!), walk across the room (nooo...) and wake up and unlock your computer. That could take up to 10 seconds! But, with this app, you can glance across the room and see that you have a red flashing LED, which means there is a message waiting from your boss, or a blue flashing light indicating a note from your mother. Granted, you still have to move the muscles that control eyeball directionality, but there’s no such thing as a free lunch.

This is probably the best thing that could have happened to fingerprints. I think it's not such a bad idea to take fingerprint authentication out of highly secure environments and repurpose it for more mundane applications, because fingerprints will become almost meaningless as a security measure in less than 20 years.

The two main things that will undermine security at every turn:
1) Poor administration. Read this slashdot post to understand why: biometrics are just databases, and databases need to be securely and competently administered.

It's too difficult to manage a 2000 or even 200 member authentication database. The simplest administration is just not done because it is tedious or takes too much time. ... You have the human being that lets everyone into the building, security guards that think you work there because they've seen you before, meeting rooms filled with all-open network connections and a bunch of people that write down their password on a sticky note, even if it's as simple as their husband's name, brand of monitor or keyboard or something else.

2) Time. The younger you start, the less secure your fingerprints will inherently become: "Many people are trying to regard biometrics as secret but they aren't. Our faces and irises are visible and our voices are being recorded. Fingerprints and DNA are left everywhere we go and it's been proved that these are real threats." Slashdotter Kadin2048 commented that

The fact that you can't change your fingerprints is a real problem if they start to use biometric systems for authentication. Particularly since there are biometric-ID systems used by children: in my area, they're currently testing and preparing to roll out a school-lunch system that uses fingerprints (it's a debit system -- no more stolen lunch money, and no way to tell who's on the subsidized lunch program or not). When you start using biometrics that young, you have a long time for them to possibly get compromised and spoofed.

The fingerprints you have, you own for life: so any system has to be built on the assumption that they will be compromised. In particular, future systems should be built knowing that people are going to come in who've already had all 10 fingerprints compromised already. The solution isn't to just come up with more biometric identifiers to use as secrets, the solution is to not use them as secrets at all.

Biometric identification can be used for convenience or for security, but it's probably best not to try both.

Telcos Spend Much, But Are They Spending Smart?

For the second quarter of 2009, Sprint lost almost a million regular ("postpaid") subscribers. The loss was partially offset by a gain of 777,000 prepaid subscribers, but the mix is getting worse as well: prepaid subscribers generate an average revenue per user (ARPU) of only $34, while "its ARPU for postpaid plans was $56," according to an excellent summary by NetworkWorld. No wonder Sprint lost $384 million in the quarter.

And Sprint is hardly alone. The much smaller Vonage lost 89,000 customers (download its Q2 report here); "Churn rose to 3.2 percent from 3.0 percent in the prior year's quarter." U.S. Cellular lost 88,000 (Q2 report here) from a customer base of 6.2 million.
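To put those numbers side by side, here's a quick back-of-the-envelope calculation using only the figures quoted above (the "almost a million" postpaid losses are rounded, so treat the results as rough):

```python
# Rough arithmetic with the figures quoted above (Q2 2009).
postpaid_lost  = 1_000_000   # "almost a million" -- an approximation
prepaid_gained = 777_000
arpu_postpaid  = 56          # dollars per subscriber, per the NetworkWorld summary
arpu_prepaid   = 34

revenue_lost   = postpaid_lost * arpu_postpaid    # about $56 million
revenue_gained = prepaid_gained * arpu_prepaid    # about $26 million
print(f"Net revenue swing: -${(revenue_lost - revenue_gained)/1e6:.0f}M per ARPU period")

# Churn is simply subscribers lost divided by the subscriber base.
us_cellular_lost = 88_000
us_cellular_base = 6_200_000
print(f"U.S. Cellular quarterly churn: {us_cellular_lost / us_cellular_base:.1%}")
```

That's roughly a $30 million revenue swing from the subscriber mix alone, and a quarterly churn rate of about 1.4 percent for U.S. Cellular.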

Sprint's losses come despite spending well over a billion dollars annually in marketing. Did you ever get the feeling that the huge sums that phone companies—especially cellphone carriers—spend on big advertising campaigns should instead be used to, oh, say, train their customer service employees or build more base stations?

A group called the CMO Council has the same feeling, and they've backed it up with an 80-page study. The report notes that the companies within the $4 trillion telecommunications industry are under great stress: companies that used to concentrate on doing one thing well, such as voice calls on a cellular network, now have to provide data services, broadband connectivity, multimedia messaging, FM radio reception, and so on. "In 2009, mobile phone users are expected to download over 10 billion applications to their mobile phones."

Competition is fierce, and not just among cellular carriers. “Even as traditional phone companies controlled more than 83 percent of the North American market for voice services,” the study says, “competition with cable providers had saved consumers more than $23 billion and could save households and small businesses a total of $111 billion over the next five years.” Of course, competition saves customers money because it gives customers the ability to switch from one provider to another. Looked at from the carrier side, that's a deadly increase in the churn rate.

The study cites the analysts at McKinsey for two key facts:

  • Satisfying and retaining current customers is three to 10 times cheaper than acquiring new customers, and a typical company receives around 65 percent of its business from existing customers.
  • A five percent reduction in the customer defection rate can increase profits by 25 to 80 percent, and seven out of 10 customers who switch to a competitor do so because of poor service.

And yet, are companies truly fighting to retain customers? Two more stats:

  • A Gartner study found that 92 percent of all customer interactions happen via the phone, and 85 percent of consumers are dissatisfied with their phone experience.
  • A typical business only hears from four percent of its dissatisfied customers; the other 96 percent leave quietly. (University of Pennsylvania)

In a phone conversation last week, CMO Council executive director Donovan Neale-May told me “there is a disconnect between marketing, and the back-end IT service delivery groups.”

“So marketers are saying, hey, we're spending, in this case, when you look at how much money is being spent by these large telco wireless operators, I mean, the top advertisers like AT&T, they're spending over US $3 billion. Verizon, over $3 billion. Sprint $1.5 billion. Comcast, $670 million. DirectTV, $450 million. They're spending a lot of money on marketing.

“And they're saying it's costing them more and more to acquire and keep and they're also seeing greater churn rates. So you're spending large sums of marketing money on your brand, and on a promise or a claim, announcing and delivering new services, new plans, new pricing, new devices, new applications, yet there's a dissatisfaction—a high level of dissatisfaction—with unmet needs and expectations with products and services, usability and complexity, with billing problems...

“So on the one hand we see massive outlays of money, for demand generation, and for branding, and for making people feel good and nice about these brands, yet the marketing people aren't doing what they should be doing, which is interacting more with the different stakeholders within the operation, and engineering, and technical side of things to improve the billing and the financial side to improve the customer experience.”

Neale-May says it's not how much a company is spending, it's how. One last stat from the report:

  • Companies that restructure call centers around a customer service strategy often cut their costs by up to 25 percent and boost the revenue they generate by as much as 35 percent. (McKinsey)

 

Highlights from National Instruments Week 2009

The annual NI Week was full of cool demos. There was the scarily named robotic "flying blade of amputation," a robotic keyboard-and-glockenspiel duet, and the always-popular Guitar Hero-playing robot (with a new twist). This video provides a taste of the best demos and highlights from the keynote sessions.

As always, NI debuted some new products and gave previews of what's still to come. The most impressive of the future features was the thin-client web interface that will soon be part of its LabVIEW software. NI already has hardware that runs web services accessible via URL, but the next generation of LabVIEW will let users build a web interface right in their browser, without knowing Flash or Java. The best way to understand the new capability is to watch the demo below. I've edited it down to just the essentials of the technology, without all the pageantry of the keynote.

Nanotech Agreement between IBM and Bulgaria Put on Hold

I suggested at the end of May that it appeared as though IBM was starting an entirely new line of business. Small countries were turning to IBM to jump-start their nanotechnology initiatives.

Well, we have now seen the first of those announced partnerships fall by the wayside because Bulgaria is unable to pay for it.

It seems that Bulgaria's state spending increased by 30 percent in the first half of 2009. My speculation is that the increase is likely due to unemployment compensation for those laid off during the worldwide recession, plus trying a bit too hard to spend their way into the Eurozone. In any case, the finance minister started looking for ways to cut the budget, and nanotechnology rose to the top of the list.

If there is a lesson to be learned here, it's the one IBM has just discovered: one of the drawbacks of doing business with small economies is that sometimes they just can't pay the bill.

Fantasy Sports Prep: Semiconductor Edition

IMAGE CREDIT: Wikimedia user Inductiveload

 

I'll be at Hot Chips in late August, along with the rest of the engineering press (eat your heart out, Perez!). In some ways you might say Hot Chips kicks off chip season, which continues through the International Electron Devices Meeting (IEDM) in December and culminates in the Super Bowl of chip talk, the International Solid-State Circuits Conference (ISSCC), in February.

In case you want to do some research for your fantasy semiconductor team, Real World Technologies' David Kanter has compiled an excellent post-game analysis of the 32nm process technologies rolled out at IEDM 2008 and ISSCC 2009. In particular, he discusses the semiconductor industry's move to 32nm manufacturing processes with high-k dielectrics and metal gates (HKMG). (I'd like to point out here that it's not all about process technology: the cyborg moth at ISSCC was so creepy I thought I had woken up in a dystopian Arnold Schwarzenegger movie. Is that reference dated now? Who's the new Arnold? Anyone care to update my pop-culture database? But I digress.)

David was kind enough to summarize the article for me. However, you should still go read the whole thing yourself because this is the version for people who are easily distracted by robo-moths and Arnold Schwarzenegger's falling star.

New manufacturing technologies are essential to keeping Moore's Law on-track and driving continued advances in microprocessors, graphics processors, FPGAs, ASICs, networking and other industries that rely on semiconductor technologies. At IEDM 2008 and VLSI 2009, leading edge manufacturers announced their initial results for 32nm process technologies, discussing key techniques including transistor strain, immersion lithography, double patterning and for some, custom illumination.

The process technologies analyzed include:

1.  IBM and AMD's research work on a HKMG 45nm process using silicon-on-insulator (SOI), which is not expected to go into production.

2.  IBM and the Common Platform's HKMG 32nm bulk process

3.  Intel's high performance HKMG 32nm process, slated for production at the end of 2009

4.  TSMC's performance optimized HKMG 32nm and 28nm process expected in 2010

5.  Intel's low power 45nm process for SOCs, the first low power process to feature HKMG

6.  Toshiba and NEC's HKMG 32nm density optimized process, which currently uses custom illumination, rather than double patterning

7.  IBM and AMD's high performance HKMG 32nm SOI process, expected to debut in late 2010.

The results for each process include key density metrics, such as contacted gate pitch and SRAM cell size, and transistor performance metrics, such as NMOS and PMOS drive strength. We include a comparison that puts these newer manufacturing technologies into historical perspective, going back as far as 130nm. New to this year's coverage of IEDM and VLSI is a graphical comparison of density and performance for various 45nm and 32nm process technologies.

Of particular interest are several facts: First, the rest of the industry, including IBM and AMD, will finally catch up to Intel's manufacturing technology, using high-k dielectrics and metal gates, at 32nm. Second, there is approximate parity between Intel, AMD, and IBM for manufacturing CPUs. And finally, there is approximate parity between TSMC and the Common Platform for bulk foundry processes.
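For readers keeping score at home, the node names themselves imply a rough density target: each full node step has traditionally aimed for about a 0.7x linear shrink, or roughly half the area per transistor. Here's a quick, idealized back-of-the-envelope illustration of my own (real contacted gate pitch and SRAM cell sizes in Kanter's tables don't shrink this cleanly):

```python
# Idealized node-to-node scaling; real processes deviate from these numbers.
nodes_nm = [130, 90, 65, 45, 32]

for old, new in zip(nodes_nm, nodes_nm[1:]):
    linear = new / old      # linear shrink factor
    area = linear ** 2      # ideal area per transistor
    print(f"{old}nm -> {new}nm: linear x{linear:.2f}, area x{area:.2f}")

# e.g. 45nm -> 32nm: linear x0.71, area x0.51 -- the classic "half the area" per node
```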

I for one would like to read more about Globalfoundries, but I imagine we'll get an eyeful of that from everyone over the next six months.

Nanotechnology Provides the "McGuffin" for Summer Movie Blockbuster

For those of you not familiar with the term "McGuffin": according to Alfred Hitchcock, it comes from the story of two men traveling on a train.

One man asks the other what it is he's carrying in his luggage. The other man responds that it's a McGuffin. When the first man asks what a McGuffin is, the other says it's a gun for hunting lions in the Scottish highlands. The first man, nonplussed, responds that there are no lions in the Scottish highlands, to which the other man quickly replies that in that case it is no McGuffin.

In other words, a McGuffin is an empty and almost entirely meaningless plot device.

It seems that nanotechnology is becoming the new McGuffin for silly Hollywood action movies with the release of this summer’s blockbuster “G.I. Joe: Rise of Cobra.”

In this case, the McGuffin is a swarm of little nanobots that are packed into a warhead and launched at a target, which they begin to devour until they are turned off by remote control. In the trailers you can see the green swarm of nanobugs devouring the Eiffel Tower.

I am afraid nanotechnology is not faring too well in popular culture; it always seems to be a threat, whether in Michael Crichton's "Prey" or the television program "Eleventh Hour."

I guess it’s hard to make cleaner drinking water, cheaper alternative energy or better anti-cancer drug treatments into an exciting and compelling plot element.

Is More Being Spent on Nanotech's Toxicology than on Nanotech Itself?

As nanotech businesses fall deeper into the abyss and the high-flyers sink into nonexistence, you would think that, with all the government talk of nanotech being the future and all the stimulus dollars being spent, one would hear about some actual money being spent to keep nanotech companies going.

Sure, billions are invested in government and university research labs that are supposed to lead to commercial applications. But they never do, because the road from a lab prototype to a commercial product is expensive and there is little or no financial support to carry it off.

Shiny new research centers abound, producing all sorts of research ranging from the "who cares" to the "could be significant," but the commercialization of nanotech continues to flounder.

Meanwhile there is an endless chorus about the possible dangers of nanomaterials and their proliferation in consumer products. That last bit always leaves me scratching my head. I could have bought "nano pants" five years ago, and five years later that's still about all I can buy if I go out looking for a nanotech-enabled product.

Despite these sinking fortunes for nanotech companies and products, some researchers manage to strike it rich. A professor at Duke University has just received $14 million to figure out the possible detrimental effects of silver nanoparticles, particularly those used in antibacterial socks.

Well done! Chapeau! And all of that. But if that kind of money were spent to support companies trying to bridge the financial chasm between a prototype and a product, I might actually be able to go out and buy a pair of nanosilver socks.

Nanotechnology Adds to Police Arsenal Against Impaired Drivers

Aside from agility tests, police have had no technological way of detecting drivers' use of controlled substances other than alcohol. Cheech and Chong could merrily drive down the highway in their van made out of cannabis, stoned out of their minds, and there was little the police could do to prove it.

According to this rather colorful article, which uses terms like “stoners and dopers,” Philips has developed a hand-held device that employs nanotechnology based on the use of electromagnets and nanoparticles to “separate the sober from the impaired”. 

The article points out that the Netherlands-based Philips will roll this out initially in Europe. But oddly, the article raises the specter of the device being a "privacy-invading drug tester." I am not sure how much privacy you are entitled to when driving impaired on a public road, but in any case I am sure this is not the kind of nanotech-enabled invasion of privacy that has some people so concerned.

Robots: The Expensive Way to Prepare Cheap Food

If you've ever watched the giant container loaders in Elizabeth N.J. or Yokohama Harbor, you've probably wondered if the same robotic technologies could be used to make ramen soup.

Okay, maybe you never have, but someone seems to have: Kenji Nagoya, said to be an industrial robot manufacturer and owner of a new fast-food restaurant where bowls of ramen in pork broth are prepared almost entirely by a pair of robots that look, to me at least, a bit like the container loaders I see from the New Jersey Turnpike.

In a widely copied Reuters video report, Nagoya says, “The benefits of using robots as ramen chefs include the accuracy of timing in boiling noodles, precise movements in adding toppings and consistency in the taste."

The robots are reported to be able to make only 80 bowls a day (though the automated process, which includes heating but not making the broth, is said to take less than 2 minutes). The bowls sell for $7 apiece, which gives the shop a total daily revenue of $560. That has to cover the cost of the ingredients, electricity, rent, and the humans who make the broth, serve customers, take their money, and so on. And, of course, the robots themselves.

The shop therefore doesn't make a profit for Nagoya, but it's a great proof of concept and might someday lead to restaurant robots inexpensive enough to replace all those imprecise high school students currently preparing our fast food. (By the way, it's unclear to me whether Nagoya has anything to do with the soon-to-be-closing robot museum in the town of Nagoya.)

There's an additional video of the Nagoya ramen robots here.

The Nagoya robot story has completely overshadowed a robot "somewhere in Yamanashi," Japan, that also helps make ramen soup. Restaurateur Yoshihira Uchida, for whom the robot was created, took exactly the opposite approach: his robot custom-prepares the broth, choosing among "40 million recipes" (combinations of broth ingredients), while a human chef makes the noodles.
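That 40-million figure sounds less outlandish once you treat it as combinatorics rather than cookbook pages. A toy illustration (the ingredient counts below are my own invented example, not Uchida's actual menu): with just 10 broth ingredients, each dispensed at one of 6 levels, you already get about 60 million combinations.

```python
# Toy combinatorics: a modest number of ingredients and dosage levels explodes
# into tens of millions of "recipes." These numbers are invented for illustration;
# the post only tells us the robot offers roughly 40 million combinations.
ingredients = 10    # e.g., soy, miso, garlic, chili oil, ...
levels_each = 6     # none / light / medium / heavy / extra / double, say

recipes = levels_each ** ingredients
print(f"{recipes:,} possible broth combinations")   # 60,466,176
```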

Five Years After the Release of Royal Society's Nanotech Report

I guess I have become inured to the idea that there is little synthesis on the issues of the day, only antithesis. That's why, five years after the release of the Royal Society and Royal Academy of Engineering report "Nanoscience and nanotechnologies: opportunities and uncertainties," I am not surprised that we seem to have progressed very little beyond name-calling when it comes to the safety of nanomaterials.

The Responsible Nano Forum, which has been quite busy of late with the launch of its new Nano&me website, has just released a report putting the last five years into some kind of perspective.

Andrew Maynard, on his 20/20 Science blog, has followed up on this report with his own perspective on the situation, which sounds about right to me.

While I can appreciate the arguments on both sides (to an extent), it all seems so needlessly polemical.

On the one hand you have Allan Shalleck over at Nanotech-Now arguing that the media have been chasing sensational headlines and have missed the other side of the story, which is that there have been zero reported health problems caused by nanomaterials…thus far.

Then on the other hand you have some NGOs claiming that the environmental benefits of nanomaterials, such as pollution remediation and clean drinking water, which have been used to counterbalance the reported potential negative effects, are overhyped.

Meanwhile you have everyone trying somehow to engage the public on the issue of nanotechnology, or at least get them mildly interested.

Aside from the fact that we don’t really have as much hard experimental data today on the safety of nanomaterials as one might have expected five years ago when the RS and RAE report was published, perhaps of more interest is that the general public not only doesn’t care, but they still don’t even understand what nanotechnology is. Who can blame them really?
