Tech Talk

The Videogame Circus Begins

Last night in Los Angeles, Microsoft herded journalists into a big room, dressed them in weird ponchos, and set loose a troupe of Cirque du Soleil dancers overhead.  

Oh yeah, and it all had something to do with videogames.

The occasion was the kick-off of the annual Electronic Entertainment Expo - the bacchanalian convention better known as E3. Microsoft wasted no time hyping The Motion Capture Camera Formerly Known as Project Natal, now christened Kinect. If you've been reading this blog, then you know that Kinect isn't new - I was one of the journalists who got a hands-on preview of it at last year's E3. But now it's going wide, expected to hit the Xbox 360 in November. There was little in the way of games shown last night, but I'll be at Microsoft's press conference today to hear and see more.

Despite the fact that no new videogame consoles are hitting this year, the convention promises to be heavy on hardware hype. The Kinect camera will be getting much of the buzz, along with Nintendo's new 3D handheld system, the 3DS, and, in third place, Sony's after-the-fact motion-sensing Move controllers. Yes, this means a host of new kinds of game experiences on the horizon - but the horizon may be further off than it seems. A true golden age in gaming - like the one of the early '80s - really has nothing to do with technology. It's about iconic and addictive games that don't necessarily look or feel that impressive at all (see Pong, the C64, etc.).

The last "revolution" in gameplay came with the Nintendo Wii, and some of the most unlikely and ubiquitous titles for that console were not even a faint dream at launch (Wii Fit, for example). I suspect it'll be several months before we really see games that deliver on the HUGE hype behind Kinect and company.

Cheap, wireless, automatic backyard sprinkler control

Maybe it’s just that the editors of IEEE Spectrum have lately been paying a lot of attention to the problem of water conservation, and the coming clash between water and energy, but right now any easy, inexpensive way to cut back on water usage seems like a good idea to me.

And Digital Sun, a startup company based in San Jose, Calif., purports to have such an idea (I have yet to try it for myself). The company presented its product at Launch: Silicon Valley, an annual conference sponsored by the Silicon Valley Association of Startup Entrepreneurs, held Tuesday, June 8, in Mountain View, Calif.

Digital Sun has developed a system that includes a sensor, which goes into a hole you cut in the dirt, and a wireless receiver that you attach to your existing sprinkler control box. The sensor uses a proprietary wireless communications protocol over a very low power 2.4-GHz signal, sent through the dirt, to override the sprinkler timer if it’s due to start a watering cycle on already damp ground. Digital Sun CEO Dale Hitt explained the technology to me in the video below.
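The override logic Digital Sun describes is simple enough to sketch. Here's a minimal illustration of the idea; the function name and moisture threshold are my own, hypothetical choices, not the company's:

```python
# Hypothetical sketch of the sensor-override idea: skip a scheduled
# watering cycle when the in-ground sensor reports damp soil.
# DAMP_THRESHOLD and the field names are illustrative, not Digital Sun's.

DAMP_THRESHOLD = 0.30  # volumetric soil-moisture fraction; illustrative value

def should_water(scheduled: bool, soil_moisture: float) -> bool:
    """Water only if the timer is due AND the soil is dry enough."""
    if not scheduled:
        return False
    return soil_moisture < DAMP_THRESHOLD
```

The receiver effectively sits between the timer and the valves: the timer says "water now," but the sensor reading can veto it.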

Right now, the basic package—one sensor, the receiver, and a tool that cuts a hole in the ground for the sensor—retails for $200. The company attended the Launch conference in hopes of attracting enough venture investment to move their manufacturing offshore, which would enable them to cut their price below $100 and get into Home Depot and other low-cost retailers.

I think they’re on the right track. In fact, I might suggest their product to a few neighbors, whose sprinklers seem set to “create swamp” rather than “water lawn.” Just a thought.

Competition for E-Ink?

The e-reader market took the company E-Ink and its low-power, easy-on-the-eyes digital paper technology mainstream. But no one says E-Ink is perfect; the displays, to date, don’t do flexibility or full color well. And they aren’t cheap enough to move into budget-conscious applications, like the long-dreamed-of grocery store shelf tags that could be updated remotely to display new prices.

E-Ink and its brethren continue to advance down their technology development paths. But a startup company based in Saratoga, Calif., says they’re heading in the wrong direction.

The folks at Zikon have figured out a way to make electronic ink out of nanoparticles that don’t need to be packaged in microcapsules to work. Encapsulation, they say, is one of the big reasons today’s electronic ink-based displays are expensive to produce. And Zikon’s unencapsulated particles are so tiny that instead of needing a liquid medium in which to float, they can move around in a porous material, kind of like, well, paper.

That means the ink can create a high-contrast display by using a white background. And that the manufacturing process is similar to printing, a cheap and well-established method.

Zikon says this new form of electronic display may have applications beyond shelf labeling and flexible reading materials, like color-changing fake fingernails. Really. Zikon’s CEO Mateusz Bryning tells me about the technology—and the fingernail application—in the video below, recorded at Launch: Silicon Valley, an annual conference sponsored by the Silicon Valley Association of Startup Entrepreneurs, held Tuesday, June 8, in Mountain View, Calif. Zikon was one of about two dozen companies selected from 400 applications to present its ideas to the venture capital community at the conference.

Bursts of Low-Intensity Ultrasound Make Neurons Fire

Neuroscientists are well accustomed to making neurons fire artificially by shocking them and doping them. Indeed it's the backbone of most neurological therapies. Now, it seems, we can do it with just sound.

Bioengineers at Arizona State University published an article in Neuron today (it's free online) in which they demonstrate the ability to stimulate neuronal action potentials (electrical impulses) by applying bursts of low-intensity ultrasound to the mouse brain. Other people have shown that this is possible to do in brain tissue, but the Arizona lab claims to be the first to make it work through the skull in a live animal.

If such a technique is to become therapeutically viable, it will have fierce competition from another stimulation strategy that excites neurons through the skull with either direct current or electromagnetic induction (called tDCS and rTMS, respectively). These two approaches have spawned a veritable deluge of research, raising hopes of alleviating migraine pain, depression, and attention deficit disorder, to name just a few. Despite a lot of encouraging results, rTMS and tDCS have pretty terrible spatial resolution, and this is precisely where ultrasonic stimulation may be able to compete and contribute.

In the experiments published today, the researchers looked at spatial resolution in two different ways. First they stimulated areas of the brain that control movement and found that they could isolate specific muscles. Point pulses of ultrasound at one part of the motor cortex and the paw twitches; move it slightly and the tail jerks. This alone is more precision than has been shown with electrical stimulation. (There's a link to a movie for those who can stomach research on restrained mice.)

But the group went further and analyzed the biochemistry of the brains to see exactly what parts of the tissue had been stimulated. Their results suggest that ultrasound can be used at a resolution about 5 times better than that of rTMS. They also estimate that they could successfully use 0.5-megahertz ultrasound to stimulate brain regions 1 millimeter wide or less.
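For a sense of scale, that millimeter figure can be checked against the acoustic wavelength. A quick back-of-the-envelope calculation, assuming the commonly cited ~1540 m/s speed of sound in soft tissue (my assumption, not a number from the paper):

```python
# Back-of-the-envelope: acoustic wavelength of 0.5-MHz ultrasound in soft
# tissue. The 1540 m/s speed of sound is a standard textbook value for
# soft tissue, assumed here; it is not a figure from the Neuron paper.

SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, typical for soft tissue
frequency_hz = 0.5e6            # 0.5 MHz

wavelength_mm = SPEED_OF_SOUND_TISSUE / frequency_hz * 1000.0
print(f"wavelength ≈ {wavelength_mm:.1f} mm")  # about 3 mm
```

So a ~1-millimeter stimulation region is on the same order as (in fact somewhat below) the roughly 3-millimeter wavelength, which helps explain why this is a dramatic improvement over the centimeter-scale resolution of rTMS.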

However, it's not as clean as all that. The sound waves seem to reflect in some instances and can stimulate the tissue unpredictably.

As of yet, there is no solid hypothesis to explain how the ultrasonic waves cause neurons to fire. The most convincing theory is that it produces enough mechanical stress on ion channels to open them. Normally these channels remain gated until the electrical potential across the neuron's membrane changes enough to fling them suddenly open and initiate the cascade effect we call an action potential.

Whatever the mechanism, the side effects on the cell seem to be minimal. Basic tests for cellular death showed no increases after applying the ultrasound.

Organic LEDs Head to the Race Track

One car in this weekend’s 24 Hours of Le Mans endurance race, in France, will be sporting a new kind of sponsor logo: one lit up by flexible organic light-emitting diodes (OLEDs) that are integrated directly into the carbon fiber body of the car’s rear view mirrors.

It will be the first real-life application of a flexible OLED device, according to engineers at the Holst Center, an R&D organization in Eindhoven, The Netherlands, which created the light for the French racing team Oreca.

Today’s commercial OLED products, which are still expensive and few and far between, use inflexible glass to protect the organic elements.

But this weekend's race should provide an ideal testbed for flexible OLEDs, because of the extreme conditions of the race, says Ton van Mol, who heads up OLED research at Holst. Not only does the car have to go fast, he says, but it also has to last for 24 hours. OLEDs have gained popularity in research circles as lower power alternatives to other lighting and display sources like LCDs—the race car’s OLEDs run on just 6 to 8 volts—and because they can be made into thin, flexible sheets, which makes them ideal for area lighting, or even a “flat lamp” to carry around in a purse. Ideally, they could eventually be printed using a roll-to-roll process, like newspapers, which would make production relatively cheap.

But because they’re organic, OLEDs are very sensitive to water and oxygen (i.e. air), and they degrade fast. So they need excellent barrier layers built in to protect the organic layers from the outside world—layers that need to work a million times better than the aluminum barrier in potato chip bags, van Mol says. That makes them very, very expensive.

While glass is easy to use and works well as the barrier—hence its use in products like OLED TVs and a few new Samsung smartphone displays—it misses the point, because it’s not flexible. So Holst researchers are working to perfect flexible, multilayer barriers in their OLED stacks to keep them well protected.

Integrating OLEDs straight into the French team’s car (rather than sticking them on to the surface) is made possible by Holst’s collaborator in the project, Switzerland-based Huntsman Advanced Materials, which figured out the encapsulation technology using composite materials from its Araldite brand. The OLEDs on each mirror will actually read “Araldite.”

And, says van Mol, because the race is 24 hours, meaning half of it is in the dark, the Oreca car will be the only one that can show off its sponsor logos throughout the whole race.

That is, of course, “if it lights,” van Mol says. (He's not actually worried on that account.)

There’s still much work to be done on perfecting the barrier layers, in addition to other challenges like increasing OLED efficiency and lifetime, and reducing costs. But in the meantime, it’s off to the races for Holst’s OLED lights. Hope they shine bright.

Note: Costs of press visit to Imec research centers in Leuven, Belgium, and Eindhoven, The Netherlands, were covered by Imec.

Imec gets a bigger fab, kicks off lab expansion project

European nanoelectronics research center Imec is growing up.

Yesterday, the Leuven, Belgium-based center officially opened an extension of its cleanroom, which gives it an extra 1200 square meters of "ultraclean" space for state-of-the-art chip fabrication.

The added area is also vibration controlled, in preparation for receiving the latest extreme ultraviolet lithography (EUVL) tool—ASML's pre-production NXE:3100 scanner—by the end of 2010.

Yesterday also kicked off the construction of additional lab spaces for Imec’s research on silicon and organic solar cells, and for projects on cutting edge biomedical electronics.

And later this year, the company will start building a new, 16-story office building designed by Austrian architecture firm Baumschlager-Eberle. It will house 450 people, an auditorium, and smaller labs.

Rising above the trees at the entrance to the city of Leuven from the Brussels road, the new Imec tower (artist rendering above) will serve as an "icon" of the tech center, Leuven’s mayor, Louis Tobback, told local dignitaries and Imec partners at yesterday’s ceremony. It will be a symbol of the "self-confidence" of the region, and of course, he added, it will also demonstrate who is mayor.

From the press release:

With these extra 18,000m2 cleanroom, lab and office space, imec will have a research campus of 80,000m2 that can stand the comparison with any other high-tech research center worldwide. As such, imec aims at playing an important role in the growth of the Flemish high-tech economy.

Vice minister president (second in charge) of the Flemish government, Ingrid Lieten, was on hand to cut the ribbons at the cleanroom’s opening, along with Imec president and CEO Luc Van den hove, Mayor Tobback, and Intel Labs Europe director Martin Curley, among others (photo at top).

The expansion announcements opened Imec’s two-day Technology Forum, held in Antwerp, Belgium.

Below, Van den hove, Lieten, and Tobback at the fab. 

 

The already-hard-at-work part of the fab is shown above.

Note: Costs of press visit to Imec research centers in Leuven, Belgium, and Eindhoven, The Netherlands, were covered by Imec.

Germany Embraces 4G

Germany has taken a first big step toward next-generation wireless Internet by becoming the first country in Europe to auction off a sizeable chunk of spectrum to deliver the new high-speed services.

Now users in Europe’s largest mobile communications market will have to wait and see if the operators deliver the goods with 4G (fourth-generation) services, such as LTE (Long-Term Evolution) and WiMAX. Operators’ track record with the much-hyped third-generation UMTS (Universal Mobile Telecommunications Service) isn’t anything to brag about. The rollout has been slow and coverage is still patchy. As for the “broadband” speeds and “killer applications,” well, we’re still waiting for them.

The mobile communications spectrum auction in Germany, which ended with little fanfare last week, raised nearly €4.4 billion ($5.4 billion). While the German government can be happy over every additional euro it receives, the total falls far short of the €50 billion generated in the UMTS auction held at the height of the Internet bubble in 2000. Even experts expected more. The accounting firm KPMG, for instance, estimated that the auction would pull in €6 billion to €8 billion.

A big reason for the lower spectrum prices was competition – or rather, the lack thereof. Only the four existing operators participated in the auction. Unlike the 3G auction, this one had no newcomers craving spectrum – the lifeline of mobile operators – to drive up bidding. Operators in the 4G auction behaved like gentlemen: They bid, but they didn’t go on a binge.

The German government auctioned a total of 358.8 MHz of paired and unpaired spectrum in the 800 MHz, 1.8 GHz, 2 GHz, and 2.6 GHz bands. In all, it sold 41 blocks to Germany’s four existing operators: Deutsche Telekom, Vodafone, Telefonica’s O2, and Royal KPN’s E-Plus. The first three of these operators gobbled up the 800-MHz frequencies, the so-called “Digital Dividend” bands, which had been used for analog TV. The lower frequencies are coveted by operators for two big reasons: wider geographic coverage and better in-building penetration. Both of those benefits convert directly into cost savings – fewer base stations to cover larger cells and no need for picocells and other systems to amplify signals indoors.
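The coverage advantage of the lower band falls out of basic link-budget arithmetic. Here's a rough comparison using the standard free-space path loss formula (free space is of course a simplification; real deployments gain additionally from better wall penetration at 800 MHz):

```python
# Rough illustration of why operators covet 800 MHz over 2.6 GHz:
# free-space path loss grows with frequency, so at equal transmit power
# the lower band reaches farther. The 5-km cell radius is an arbitrary
# example, not a figure from the auction rules.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula, d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_800 = fspl_db(5.0, 800.0)
loss_2600 = fspl_db(5.0, 2600.0)
print(f"800 MHz: {loss_800:.1f} dB, 2600 MHz: {loss_2600:.1f} dB")
print(f"advantage of 800 MHz: {loss_2600 - loss_800:.1f} dB")
```

Roughly 10 dB less path loss at the same distance translates into substantially larger cells for the same transmit power, which is exactly the base-station cost saving the operators are chasing.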

E-Plus acquired additional spectrum to increase capacity in urban areas. It’s not clear what 4G strategy the operator is pursuing, if any. The 2-GHz spectrum it acquired points to a possible WiMAX deployment. The spectrum licenses are “technology neutral,” meaning that operators can pick their technology. But selecting a 4G technology may not be the issue: Rumors are afloat that E-Plus may be put on the block and that Telefonica could be interested.

By comparison, the plans of the other three are pretty clear: they’re headed down the LTE path. All three see broadband mobile Internet as crucial to sustainable growth.

But the German government, after learning a lesson from the slow and patchy rollout of 3G services, has thrown owners of the new spectrum a bit of a curve ball. They have to deploy wireless networks in step-by-step phases. In the first phase, they must build networks covering 90 percent of the population in villages under 5,000. The second phase requires similar coverage in villages of 5,000 to 20,000, and the third phase in villages of 20,000 to 50,000. Only after they’ve gotten their feet wet out in the sticks can they move into the more lucrative large urban areas.

The German government is serious about increasing broadband connectivity in rural areas over the next couple of years. And this policy is clearly one way to achieve this.

It will be interesting to observe over the coming weeks and months how operators plan to tackle this rollout – how they plan to blend LTE into their existing UMTS and HSPA (High Speed Packet Access) networks and how much infrastructure sharing will occur among them.

One thing is for sure: Demand for high-speed mobile Internet services is growing, thanks in no small part to the iPhone; the devices have made using these services easy and fun – both sorely missing in UMTS. But as the iPhone has also clearly shown, particularly in the United States, it’s one thing to create demand; it’s another to satisfy it.

That’s where operators see their core need for LTE. They don’t need any one or any more killer applications. Rather, they require greater capacity, which they have now received, and technology, LTE, that makes optimum use of precious limited spectrum.

I was among the first users of GSM, UMTS and HSPA in Germany. I loved the first, could have passed on the second and am definitely happier with the third. But I must confess, I’m really looking forward to the fourth.

Hacking a Ferrari

A souped-up version of Doom II, the classic 1994 first-person shooter from id Software, is out today for download on Xbox Live Arcade. In addition to 5.1 surround sound and high-definition graphics, it's sporting online deathmatching and cooperative play (and how cool is that?).

When I was interviewing id co-founder and tech whiz John Carmack for my book Masters of Doom, he told me that he had a passionate side project during the Doom II development: souping up his fleet of Ferraris. As any car geek knows, Ferrari doesn't look too kindly on gearheads hacking their rides - but this is Carmack we're talking about. And he put as much of his engineering chops to work on his cars as he did his games. In honor of the Doom II re-release, I thought I'd share a Q&A I did with Carmack about his early Ferrari hacking days - how it worked, and how much it cost.

David Kushner: When did you first start modifying your Ferraris?

John Carmack: After Doom shipped, I had just been looking at getting the bigger Testarossa. My Ferrari 328 at the time probably had 400 something horsepower. It was a fast car, faster than any normal thing that you’d be able to buy on the showroom. But there probably was a bit of that “well, what’s the next step?” We could continue doing some things to my 328. We could have put on intercoolers and changed the pistons and done all that, but it was at a pretty nice drivable state for everything after we worked out all the tuning issues. Eventually I pretty much looked it over and said okay I want to get a Testarossa, that was always my intention from the beginning there. I’m going to get a Testarossa and have Bob [my mechanic] do the twin turbo inter-cooled engine work on it and get that all the way put up. Eventually we found a car. I had shopped some at the local Ferrari dealer, where I had bought my 328, and I was looking at a Testarossa from them but Bob found me one that he got shipped in for a cheaper price.

I drove it for a little while but it was the intention that as soon as I’m ready it’s just going to get parked and get all the engine work done on it. I did twin turbo intercoolers. At that time we left the engine block alone. It was a completely new intake and exhaust system which put it up to 800 horsepower, and we kept it like that for a while. I think we put the nitrous on there. Eventually we had an engine meltdown at high speed, so at that point we were like okay, now we’ll take it apart and do all the really good stuff with the engine. We changed the piston, the valves, everything. When that got all straightened out and cleaned up and everything, we had gone through tons of experimental work. It was a science project stretched over years, with often 6 months at a time of it being laid up in the shop.

DK: How much did you put into the cars?

JC: The 328 I bought for, like, $70,000. The original work for getting the turbo on it was like $15,000, but I had to have that engine rebuilt once which was like $30,000. All the trips into the shop add up to at least probably another $20,000. The Testarossa, when it originally went in and had engine work done, was like $30,000, but each engine rebuild was at least $30,000, and it’s been rebuilt three times. The Testarossa, which I originally bought for $90,000, I probably put $150,000 into it. We could have stopped at lesser points, but a lot of it was just like okay, we’ve got this new brand of turbos coming out, so we want to change this…and those weren’t like forced moves, it was more like okay, let’s see what the next step is. Go ahead and try it.

The other cars, the F40s that I had, I bought for $250,000, and I didn’t make any modifications to them. Then eventually when I got the F50, that was $670,000. The original turbo charging work wasn’t that much, maybe $30,000 or $40,000, but it’s still getting some teething problems worked out. We’ve been taking that one conservatively because I really don’t want to have that engine rebuilt. I suspect that rebuilding that engine is going to be more like $60,000 just because there are not that many of them and ordering the different things would just be troublesome.

Creating Fact-Based Energy Policy

Confusion is a major obstruction to developing clean energy policy. “Fact” and technical opinion come from a wide range of biased sources with no independent validation. As a result, our political leaders are forced to make decisions in an environment of enthusiasm, bias and hype.

As noted in my April 22 blog post, one way to minimize confusion is through strategic scenarios: create the future target, then a plan to get there. Today’s blog shows how disciplined systems engineering provides trustworthy facts. We cannot eliminate ideological differences. To create sound policy, value differences still need to be resolved in the political arena. But classical systems engineering provides the factual basis and management processes to push back against bias, enthusiasm, and hype, enabling policy-makers to make informed value choices.

The purpose of an engineering development plan is to achieve a goal based on what we know today. Generally there are four sequential phases: technology development, engineering development, full scale development, and production.

Design reviews and decision milestones separate development phases. Design reviews serve as critical evaluations of fact that provide the client with the information necessary to make sound choices. In a classic design review, program engineers present their technical results to the client and their expert consultants. The consultants critique the presentation in front of the client. Program engineers defend the numbers. By observing the give and take between protagonists and antagonists, the client is able to grasp the nuances and make sound judgments about esoteric problems he does not really understand.

For clean energy systems the client role is multifaceted (American people, Congress, president, program administrators) and needs to be represented by surrogate teams. We need to experiment with open design reviews that can be accessed by the general public. While major design reviews occur between development phases, minor design reviews should occur after demonstration projects to understand lessons learned. Clarifying a factual basis simplifies decision-making by minimizing bias and hype.

Today, there is no plan, no set of phases, and no discipline. We are transitioning wind from demonstration to production with no design reviews. No one has a clue about system level costs or by how much carbon dioxide emissions will actually be reduced. There are no published after-action reports from failed demonstration projects. No one has shown how wind would make any contribution at all to a low carbon grid (see Wind Energy Contribution to a Low Carbon Grid). Grid operators seem to be afraid to say something negative and thereby spook their political masters.

As an engineer, I find this to be embarrassing and irresponsible. We can do better. Managers of society’s great engineering success stories had a firm grasp on both planning and budgets. Today, people who have never built a system before are advising the president that “we don’t need a systems integrator; the markets will do it.” At the very least, President Obama needs to designate a systems integrator with the authority to enforce good engineering processes. It is imperative that we find a way to conduct effective open design reviews.

Alex Pavlak is a PhD Professional Engineer with experience in systems architecture and the economics of wind power systems. He has had various management responsibilities in the development of major military systems. He has spent 15 years in alternate energy and holds several patents pending on wind turbines and static solar concentrators. apavlak@comcast.net

Collection of Private Wi-Fi Data Just Google's Latest German Gaffe

Germany hasn’t been an easy market for Google. In fact, maybe only China has proven to be a bigger challenge for the owner of the world’s largest search engine.

Google’s latest snag in Germany: collecting private data from unsecured Wi-Fi networks.

As in other markets, the company uses a fleet of vehicles to take pictures for its Street View photo archive. The vehicles take snapshots of street views, which are later integrated into Google Maps.

The data protection supervisor for the city-state of Hamburg, Johannes Caspar, accused Google of sniffing some network data, like the Service Set Identifier (SSID) and the Media Access Control (MAC) addresses, to help the company keep a better fix on locations for its Web products.

Google first denied that it was sniffing private data from unsecured Wi-Fi networks. Then, oops, the company realized it was – ouch.

In a blog post, the company said, bluntly, that it had made a mistake: “In 2006 an engineer working on an experimental Wi-Fi project wrote a piece of code that sampled all categories of publicly broadcast Wi-Fi data. A year later, when our mobile team started a project to collect basic Wi-Fi network data like SSID information and MAC addresses using Google’s Street View cars, they included that code in their software—although the project leaders did not want, and had no intention of using, payload data.”
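The distinction Google is drawing - publicly broadcast network metadata versus payload - can be made concrete with a schematic sketch. The field names and filtering function here are my own illustration, not Google's actual code:

```python
# Schematic illustration (field names are mine, not Google's): the line
# between basic Wi-Fi metadata, which location services legitimately use,
# and payload data, which should never have been retained.

def keep_metadata_only(frame: dict) -> dict:
    """Retain only publicly broadcast identifiers; drop any payload."""
    allowed = {"ssid", "mac", "channel", "signal_dbm"}
    return {k: v for k, v in frame.items() if k in allowed}

captured = {
    "ssid": "CoffeeShopWiFi",                 # network name, broadcast openly
    "mac": "00:1a:2b:3c:4d:5e",               # access point hardware address
    "channel": 6,
    "signal_dbm": -61,
    "payload": b"...unencrypted user traffic...",  # the problematic part
}

sanitized = keep_metadata_only(captured)
```

A location database needs only the first four fields to fix a position; the engineer's experimental code sampled everything, and that is what landed Google in trouble.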

OK, big companies make mistakes. But Germany isn’t a market where Google can afford to make many, especially in the area of data privacy. Modern Germany has been tough on data privacy – and on technology that can undermine it – ever since World War II, when, among other things, punch-card data processing systems allowed the Nazis to categorize and track concentration camp victims.

Many Germans were upset about Google’s plans to introduce Street View in the first place. Last year, German officials demanded that the company delete tons of retained snapshots, arguing that people’s privacy was being violated under German law.

The issue went all the way up to Chancellor Angela Merkel, who announced in February that Germany wouldn’t hamper the release of Street View in Germany. In a podcast later that month, she said anyone in the country who considers the service “to be an invasion of their private sphere can make use of the right to object."

The Consumer Affairs Ministry has drawn up a template letter - available for download from its website - for this purpose. The internet giant has agreed to blur license plates and faces of individuals who do not want to have their photos appear online.

Street View, which offers panoramas of thousands of streets on the Internet, already covers large areas of the U.S. and the U.K. It seamlessly stitches together photos taken by camera cars, which capture 360-degree images. Google has been gathering images of streets and public spaces in Germany since 2008.

Sort of related but not entirely, a German court surprised – and excited – numerous data protection advocates in the country with a decision on wireless connections. A few days before the Street View sniffing incident gained public attention, the court ruled that private users are obligated to ensure that their wireless connections are adequately secured against the danger of unauthorized third parties abusing them.

But long before Street View, another hot issue in Germany was Gmail. The burning point wasn’t the service itself – Germans are big fans – but rather its name. In more than 60 countries around the world, Google calls its email service Gmail: Germany isn’t one of them.

A German court ruled in 2007 and, after some tough fights, again in 2008 that entrepreneur Daniel Giersch holds the "Gmail" trademark in the country. The decision ended Google's long legal battle for the name. Google launched its Gmail service around 2004; Giersch has been using the Gmail name in Germany since 2000.

How long Giersch holds out against Google is anyone’s guess. This month, the Independent International Investment Research (IIR Group), owner of the Gmail name in the U.K., agreed to sell the name and settle a long-running trademark dispute. IIR Group initially sought $500,000 per year, in addition to a slice of the advertising pie, in exchange for standing aside. Both companies are mum on the terms of the settlement. Probably a good idea, considering that Giersch and whoever else in the world owns a Gmail trademark have to be interested in Google’s net worth.
