Tech Talk

California Sets Energy Standards for Televisions

Today, the California Energy Commission set energy efficiency standards for televisions, due to go into effect in 2011. According to Pacific Gas & Electric, the standards, which affect TVs with screens of 58 inches or smaller, will cut carbon dioxide emissions by three million metric tons over the next ten years and save $8.1 billion in energy costs. Under the standards, a 42-inch television would be limited to consuming at most 183 watts by 2011 and 115 watts by 2013; the current Energy Star guidelines, which are voluntary, set 208 watts as the standard for 42-inch TVs. The California regulations will be mandatory; televisions that don’t meet them will not be allowed to be sold in the state. (Some 1,000 sets on the market already meet the 2011 standards.)
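
To put those wattage limits in perspective, here is a minimal back-of-the-envelope sketch comparing annual running costs for a 42-inch set under the voluntary Energy Star baseline and the two California tiers. The viewing hours and electricity price are illustrative assumptions of mine, not figures from the commission or PG&E.

import math

# Back-of-the-envelope comparison of the Energy Star baseline (208 W) with the
# California 2011 (183 W) and 2013 (115 W) limits for a 42-inch TV.
# Daily viewing hours and electricity price are assumed values.

HOURS_PER_DAY = 5       # assumed viewing time
PRICE_PER_KWH = 0.15    # assumed electricity price, dollars per kilowatt-hour

def annual_cost(watts):
    """Yearly electricity cost, in dollars, for a set drawing `watts` while on."""
    kwh_per_year = watts * HOURS_PER_DAY * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH

for label, watts in [("Energy Star baseline", 208),
                     ("California 2011 limit", 183),
                     ("California 2013 limit", 115)]:
    print(f"{label}: {watts} W -> ${annual_cost(watts):.2f} per year")

Under those assumptions, the 2013 limit saves roughly $25 a year per set relative to the Energy Star baseline; multiplied across California's tens of millions of televisions over a decade, statewide figures like PG&E's $8.1 billion estimate start to look plausible.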

The Consumer Electronics Association, an organization that represents the manufacturers of consumer electronics products, is vociferously protesting the regulations. The CEA managed to get the commission to delay its decision, originally expected on 4 November, by filing a 91-page brief just moments before the comment period closed. On a conference call with the press today, CEA representatives indicated that they still felt shortchanged by the process, because no changes were made to the proposed regulations as a result of their comments.

While the Energy Commission sees its regulations as saving consumers money on energy bills, as well as benefiting the environment, the CEA sees them as all bad.

Take TV prices, for example. The Energy Commission says that, according to its analysis, the regulations will not affect the prices of televisions. The CEA counters that regulation will drive up purchase prices, either because of the direct costs of energy-saving technology (which the CEA estimates at tens to hundreds of dollars) or because the regulations will force low-tech, power-guzzling models off the market. Because these tend to be cheaper units from no-name manufacturers, competitors will then be able to raise prices.

The CEA also argues that besides pushing low-tech TVs out of the market, the regulations will prevent advanced-tech TVs from getting in.

Seth Greenstein, counsel to the CEA, says that whenever a new technology is introduced, it needs time to be gradually optimized for performance. “If we were still in the world of CRT TVs or DLPs, and you wanted to introduce plasma to the marketplace, you couldn’t,” he says.

Greenstein points out that OLED displays are still in development and 3-D televisions are expected to roll out in large numbers this year. The industry relies on sales to early adopters to fund continued development of these technologies—but it won't be able to if new technologies can't get to market.

“The real risk of the California Energy Commission’s regulation,” says Jason Oxman, the CEA’s Senior Vice President for Industry Affairs, is that there will be innovative technologies “that California consumers won’t be able to enjoy because of CEC regulations, that consumers in 49 other states enjoy.”

Realistically, it’s hard to imagine TV manufacturers ignoring the huge California market; it’s more likely that California regulations will influence the world. That's a thought that makes the CEA shudder.

Says Doug Johnson, the CEA’s Senior Director of Technology Policy, “We don’t want to take a bad idea and make it a national bad idea.”

The CEA hasn’t revealed its next move—and won’t until it has time to review the complete regulatory documents—but has indicated that it will make every effort to make sure that, in Oxman's words, “common sense is restored” in California.


Intel & AMD Both Claim Bragging Rights

Intel and AMD reached an agreement earlier this week aimed at ending a messy (read: costly) legal dispute over what AMD can do with a group of technologies the two chipmakers have cross-licensed for most of this decade, and what Intel can't do to convince computer companies that it should be their exclusive chip supplier. But they remain fiercely competitive. For AMD, the cherry on top of knowing that its bitter rival will soon fork over $1.25 billion as penance for its anticompetitive behavior is finding out that the world’s fastest supercomputer runs on its chips.

The Cray XT5 “Jaguar,” installed at the U.S. Department of Energy’s Oak Ridge Leadership Computing Facility in Tennessee, tops the latest semiannual list of the world’s 500 most powerful supercomputers. The system, which runs on AMD’s Opteron chips, is capable of 1.75 quadrillion floating-point operations per second. Jaguar had been the runner-up in June and last November. It now swaps places with IBM’s “Roadrunner” computer at the Los Alamos National Laboratory in New Mexico. Roadrunner, which had held the title since the June 2008 ranking, was the first computer to break the petaflop barrier.

Still, Intel has no reason to hang its head over the news. The rest of the list paints a picture of its dominance. Of the remaining 498 slots in the rankings, computers with Intel inside occupy 402. AMD has the distinction of having helped create the heavyweight champ. But to extend the boxing analogy further, consider this: if there were 10 weight divisions in the supercomputer game, Intel would hold eight of the title belts.

I guess both companies have reason to crow. And both have. Dueling press releases about the firms’ peta- and gigaflop achievements reached e-mail inboxes seemingly a nanosecond after the German and U.S. computer scientists who compile the list made the Top 500 announcement.


NASA finds water on the Moon, raising hopes for a Moon base

NASA is reporting that one of its spacecraft has found significant traces of water on the Moon.

The rocket from NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) slammed into a crater on the Moon on 9 October, blasting out a smaller, fresh crater and uncovering soil from below the lunar surface.

“We’re unlocking the mysteries of our nearest neighbor and, by extension, the solar system,” said Michael Wargo, chief lunar scientist at NASA Headquarters in Washington.

If the Moon becomes a waystation for future voyages to the planets, space travelers could stock up on water at a lunar base.

Even if the Moon never becomes a waystation, the cost of maintaining a permanent crewed base there seems to have just gone down. Water is heavy and expensive to transport by spacecraft, and supplying a base's inhabitants with it was previously thought to be one of the biggest costs of keeping such a base running. If there are local sources, a big impediment may have been removed. (It remains to be seen how feasible it will be to gather this water.)

Some astronomers say that finding water in the lunar soil may be in keeping with the currently popular model of the Moon's formation, which holds that the Moon formed from material blasted out of the Earth's crust by a giant impact early in the history of the Solar System. Some of the water common in the Earth's crust would have remained beneath the Moon's surface and in deep craters that never see sunlight. Others think the water came from comets that periodically crash into the Moon.

Whatever the source, on the Moon sunlight breaks down water into its constituent hydrogen and oxygen, which escape into space because of the Moon's weak gravity. Any water on the surface that is exposed to sunlight has escaped this way, so the surface appears waterless. But many astronomers have long believed that there is water below the surface and in craters that, because of the Moon's slight axial tilt, never get sunlight.

LCROSS slammed into one such crater near the Moon's south pole. NASA said LCROSS detected about 24 gallons (roughly 90 liters) of water in the lunar soil uncovered by the impact. That's a lot of water.

You can read more about the LCROSS mission here: lcross.arc.nasa.gov/

NASA held a press conference to announce the finding.

A Pro-Antitrust Administration?

In 2001 the new Bush Administration snatched victory from the hands of U.S. Justice Department officials when it settled a three-year antitrust battle with Microsoft after the department had already won the case. The Justice Department’s antitrust division went into near-dormancy after that, but today, eight years later, antitrust is back in the headlines.

A month after the Justice Department began investigating antitrust claims against IBM and Sony disclosed it was under antitrust investigation for possible price-fixing, New York State attorney general Andrew Cuomo filed a major antitrust suit against Intel.

The Intel lawsuit comes hard on the heels of the European Commission’s record-setting fine of €1.06 billion (US $1.45 billion), imposed on Intel for antitrust practices, specifically for giving “loyalty” rebates or discounts to customers who agreed to use fewer (or none) of competitor AMD’s products. And despite a joint announcement yesterday that Intel and AMD are settling their differences and moving forward peacefully, the outstanding government lawsuits against Intel aren’t likely to be going away. The settlement “doesn’t change the evidence at all,” says John Peirce, an antitrust lawyer and partner at Bryan Cave.

Bloggers point out that New York’s Cuomo is using much of the same evidence as the European Commission, and more, with the U.S. Federal Trade Commission’s investigative aid.

But is the European example a good predictor for how the suit will go down in the U.S.?

In 2004, the European Commission fined Microsoft US $613 million for violating EU antitrust laws, a decision the U.S. Justice Department criticized rather than following suit. Of course, that was in the middle of the Bush Administration’s anti-antitrust fervor (or is it antitrust antifervor?).

More to the point, as uslaw.com pointed out at the time of the EU’s Intel case, there are some key differences between the EU and U.S. antitrust systems. While then-U.S. Justice Department antitrust attorney Tom Barnett took some heat for criticizing the EU Microsoft decision in 2004, his pronouncements at least

had the benefit of informing U.S. companies that they would not adopt the European approach, and that U.S. law was importantly different because it required a more rigorous form of economic analysis and more substantial evidence of consumer harm rather than speculative possiblity [sic] theorems coupled with harm to competitors.

Though Barnett was speaking from the Bush administration perspective, which was fairly lax on antitrust, one point is still worth noting: Obama’s Justice Department will have to make a case that consumers were substantially harmed by Intel’s actions during a time when PC and laptop prices plummeted.

So what about now? New York attorney David Mazur suggests that the European case against Intel still might not pave the way:

Individual judges and regulators have widely varying opinions as to the appropriateness of using foreign court decisions as precedent (or even as a definitive statement of the facts), and it’s very possible that Intel was implementing different strategies in dealing with European OEMs.

And Bryan Cave's Peirce points out that while EU regulators expressed “a good deal of skepticism” about Oracle’s takeover of Sun Microsystems, announced in April, the Obama Justice Department decided it wasn’t a problem and cleared the deal. “If they were going to be hawkish,” Peirce says, “I’d expect to see them aligned with Brussels. And they’re not.”

So while the Obama administration has indicated a more aggressive stance toward business practices of companies with huge market share, Peirce thinks “it’s too early to say” which way they’ll swing on antitrust. “They haven’t done anything yet,” says Peirce. And “a government investigation doesn’t mean that anybody has done anything wrong.”

Still, the road won’t be easy for Intel, and the NY AG’s suit is a major hurdle. According to the Huffington Post:

Technology analyst Rob Enderle said Intel may be facing a harder fight this time. He said the company worked through its problems "elegantly" in the 1990s, but has become "much more combative" in the past decade in its dealings with regulators. "This could represent one of the biggest dangers that Intel has ever faced," Enderle said.

If and when the administration decides to take concrete action on antitrust cases, Mazur concludes,

The Obama administration will not only have leeway to determine which companies are investigated, but will also influence how to deal with adjudicated monopolists. For example, though the Clinton administration had been pursuing a structural remedy in the Microsoft case (breaking the company up), the Bush administration took this option off the table, opting instead to pursue behavioral remedies (drawing up an agreement that limits the way the company can act).

Some business executives aren’t waiting to get nervous. In an interview this week with CNET, Google CEO Eric Schmidt acknowledged what CNET called the company’s “new role as the No. 1 target for U.S. antitrust regulators.”

Is Deep Brain Stimulation a Cure-all?

Is there anything that can't be fixed by burrowing an electrode array into the deep tissues of the brain? With varying degrees of success, deep brain stimulators have been used to temporarily defog the clouds of chronic depression, stamp out migraines before they cycle out of control, and steady the movements of people with Parkinson's disease. Well, now you can add Tourette syndrome to the list of maladies.

A study in the current issue of Neurology follows 18 patients with severe Tourette syndrome for two years after they underwent neurosurgery. During the procedure, an array of electrodes is inserted into the patient's thalamus. Wires running from the device connect to a pulse generator implanted just beneath the skin of the chest. All the parts are internal and, when activated, stimulate a highly targeted area of the brain.

Those who continued with the treatment (three either dropped out or were removed by the researchers) showed a significant decrease in the motor and verbal tics associated with the disorder. In all but one patient, obsessive behaviors and symptoms of depression, both of which commonly accompany severe Tourette syndrome, also improved.

The authors refrain from drawing any conclusions until more controlled experiments have been conducted, and they acknowledge the risks inherent in an invasive therapy like DBS.

But even the preliminary results raise the question: how can one treatment yield a good outcome for so many different disorders? And, overwhelmingly, the answer is, "We don't know." What we do know is which parts of the brain are best targeted for different disorders, suggesting that we know much more about the "where" than the "why" of DBS.

Andrea Cavanna, a lead author on the study, explains that "little is known about the patho-physiology of Tourette syndrome. However, dysfunction in the fronto-basal pathways seems to play a relevant role in tic generation and associated behavioral problems. The targeted thalamic nuclei are the final common outputs of the involved pathways."

So, they try it, and it seems to work. But I can't help thinking that researchers will be able to perfect therapies like deep brain stimulators only once we have fully explained the disorders they seek to treat. And that this, rather than the technology, is what holds us back.

Netbooks Are Only Part of The Solution

Netbooks are going to be huge, much bigger than they already are. Trust me on this. I say this not because I see more and more people working on them in cafes instead of on standard laptops—though I do. It’s not because I particularly want one—though for short trips I can see the appeal. It’s not because on a recent multifamily vacation one family showed up with one netbook per child.

It’s because my 70-something aunt, the one with the 30-year-old radio that you can only turn off by pulling the plug, and the TV that gets its signal from a 50-plus-year-old two-wire cable, just told me she’s thinking of getting a netbook.

Oh, it’ll be a couple of years before she actually makes the purchase, but the fact that she’s even considering it is huge. The appeal for her is the cost, for sure—if it turns out to be a mistake, it won’t be a huge mistake. But what’s also drawing her is the fact that netbooks don’t look all that high tech. They don’t take up much room, they don’t have a lot of extra buttons on the keyboard, and they don’t do vast numbers of things she wouldn’t want to do anyway—like edit video or spend hours typing long documents.

But she has been thinking that it would be pretty cool to look up a fact she read somewhere but just can’t remember exactly, or check out a new medication prescribed by her doctor before she orders it. And that’s enough usefulness to make her part with $250 or so. Once she gets one, I’ll show her how she can keep up with all her grandnieces and nephews on Facebook, and she’ll be set.

Unfortunately, much as I would have liked to, I didn’t run out that moment and get her a new netbook. Because there’s one piece of this puzzle missing—some kind of community wi-fi access. It doesn’t have to be free, it doesn’t have to be fast, but it has to be there: easy to get to, at a reasonable price.

Forget dial-up—netbooks don’t even come with built-in modems, and these days the bells and whistles of most websites mean dial-up is just too slow to be viable. Cable modem or DSL would mean new wiring in her home (she’s got one corded wall phone right now, no other jacks), and a box that would have to be installed somewhere, set up, and occasionally rebooted. I can’t see convincing her to go through that hassle and expense.

But community wi-fi would be perfect. She’d need nothing but the netbook, the monthly fee would be reasonable, and, while likely slower than cable or DSL, it’d be moving plenty fast for her needs.

Which got me wondering—what happened to community wi-fi, anyway? I called Sascha Meinrath, research director of the New America Foundation’s wireless future program. He told me that it’s been going great in Europe, but in 2004 or 2005 it got sidetracked in the U.S. “The rationale of community wireless, bringing low-cost or free wireless to the masses, got usurped by the corporate model,” he says, with companies trying to figure out “how do we charge money for it.” And the corporations that cities contracted with to build low-cost systems didn’t have a lot of incentive to make those systems succeed, since they’d be competing with their own, higher-cost Internet access offerings. EarthLink, for example, last year shut down its community wireless systems in Philadelphia and New Orleans.

The good news, Meinrath told me, is that community wireless in the U.S. may be starting a new surge. He sees encouraging signs in the efforts of Meraki, a Google-backed startup that’s building low-cost wireless networks for companies, universities, and communities, and in other low-cost efforts. He’s starting to see municipal and community groups that looked at community wireless in the past, but were put off by the apparently high costs, getting ready to take another look at it. And, he says, the $7.2 billion in stimulus funds targeted at increasing broadband access can only help; he’s hoping communities will spend that money on low-cost open-source systems instead of expensive proprietary ones, to make it go as far as possible.

Now back to my aunt. She still wants that netbook—with Internet access, but without a box in her house. Community wi-fi may be coming, but not soon enough. So I’m thinking, next time I’m visiting I’m going to boot up my laptop and see if I’m picking up any signals; if I am, I’ll go knock on a few doors and see if I can borrow a cup of broadband.

Making Talking On A Cell Phone While Driving Just A Little Bit Safer

Even though we all know that talking on a cell phone while driving is not the safest activity in the world, we’re not giving it up. We are, of course, going hands-free—in many states, that’s the law.

But short of turning off our phones altogether, there’s not much else we can do. Or is there? Eyal Ophir and his colleagues in Stanford University’s CHIMe Lab (that stands for Communications Between Humans and Interactive Media) say there is. They’re currently writing up the results of their latest study on multitasking while driving.

This is the same group of Stanford researchers who, in August, dropped a dime on multitaskers everywhere—and generated a media buzz—by showing that folks who think they're getting more done by juggling multiple tasks instead of focusing on just one are kidding themselves.

This time, they looked at their favorite research subjects—multitaskers of two breeds, low and high. In general, they found, high multitaskers are easily distractible—that could be a good thing when they’re having a heated cell phone conversation yet catch a glimpse of something on the road ahead that could cause trouble. Low multitaskers are less distractible, which could mean that they’re more likely to focus on their cell phone conversation than the road.

The trick was to build some kind of tool that can help both types of multitaskers pay attention to their driving when it’s most critical.

I ran into Ophir at the unveiling of a new autonomous vehicle—perhaps the ultimate solution to the driver distraction problem, but one that won’t be parked in the average driveway anytime soon. He told me about the tool he came up with.

The concept: moving the voice coming from the cell phone around in the car. That's not hard to do with today's virtual surround sound technology. Ophir designed a system that puts the voice up at the driver's level when road conditions are relatively safe, then drops it down to the driver's feet when conditions are more hazardous. He says he could have done it the opposite way, and it appears that would have worked equally well, but research has shown that voices coming from lower than the speaker are less dominant—hence his choice of high and low. He tested the system with drivers in a simulator and found that drivers quickly learned that a change in the position of the voice meant, “Pay attention to the road!” They later rated the cell phone conversation as less distracting when the sound was coming from their feet.

In the real world, Ophir sees this system linked to the driver’s GPS and a database of accidents, to identify potentially treacherous areas of road. Ophir expects to publish this research soon.
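
As a rough illustration of how such a system might decide where to place the voice, here's a minimal Python sketch. The hazard radius, the accident database, and the coordinates are hypothetical stand-ins of my own, not details of Ophir's implementation.

import math

# Illustrative sketch only: the hazard radius, accident list, and coordinates
# below are hypothetical, not parameters of the Stanford system.

HAZARD_RADIUS_KM = 0.5  # assumed distance within which a known accident site counts as hazardous

def distance_km(a, b):
    """Approximate great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def voice_position(car_position, accident_sites):
    """Place the caller's voice at head level normally; drop it to the footwell near known trouble spots."""
    near_trouble = any(distance_km(car_position, site) < HAZARD_RADIUS_KM
                       for site in accident_sites)
    return "footwell" if near_trouble else "head_level"

# Example with a made-up accident-prone intersection:
accidents = [(37.4419, -122.1430)]
print(voice_position((37.4421, -122.1428), accidents))  # near the trouble spot -> footwell
print(voice_position((37.5000, -122.2000), accidents))  # open road -> head_level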

Stanford and Volkswagen Unveil An Autonomous Race Car

Yesterday, on the Stanford University campus, researchers unveiled Stanford’s third generation autonomous vehicle, a modified Audi TTS. Unlike Stanley, the university’s first generation self-driving car, and Junior, the second generation autonomous vehicle, this latest model won’t be competing against entries from other teams for millions of dollars in prize money, for the era of challenges sponsored by the Defense Advanced Research Projects Agency (DARPA) is over. Too bad for the Stanford researchers, because Stanford did well in the past, placing first in 2005 with Stanley and second in 2007 with Junior, both developed in cooperation with Volkswagen of America.

“The challenges were exciting,” said David Stavens, a PhD student in computer science who co-designed Stanley. “They inspired us to come together and advance the field.” And he will miss them. But on the plus side, without specific criteria and deadlines to meet, researchers now, Stavens said, “have a moment to step back and look at the whole problem,” which may lead to new insights. And, for Stavens personally, he’ll finally get to focus on writing his Ph.D. thesis and graduating.

Stavens did hint that some of the former competitors may be coming together to figure out a way to fill the vacuum created by DARPA stepping out of the autonomous vehicle challenge business, but he couldn't be convinced to say more.

Stanford’s latest autonomous vehicle is designed for the racetrack, not city streets. It zips along at the highest speeds it can manage while still staying in control. The sensors and algorithms built to do this will, researchers hope, eventually enable ordinary cars to operate more safely by knowing their limits and forcing drivers to stay within them.
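
For a feel of what "knowing its limits" means physically, here's a back-of-the-envelope calculation of my own (not Stanford's algorithm): the fastest a car can take a curve without sliding is set by tire friction, roughly v_max = sqrt(mu × g × r). The friction coefficients and curve radius below are assumed values.

import math

# Friction-limited cornering speed: v_max = sqrt(mu * g * r).
# mu and r are assumed values for illustration; the real controller is far more sophisticated.

G = 9.81  # gravitational acceleration, m/s^2

def max_corner_speed_kmh(mu, radius_m):
    """Highest steady cornering speed, in km/h, before the tires lose grip."""
    return math.sqrt(mu * G * radius_m) * 3.6

print(max_corner_speed_kmh(mu=0.9, radius_m=50))  # dry asphalt, 50 m curve: ~76 km/h
print(max_corner_speed_kmh(mu=0.4, radius_m=50))  # wet or slick surface:    ~50 km/h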

The car will go through its paces live tomorrow as part of the dedication of the new Volkswagen Automotive Innovation Laboratory at Stanford’s School of Engineering. Volkswagen Group of America donated $5.75 million to fund the lab, including $2 million for the building and $750,000 a year for five years to fund research and teaching.

Clean Tech Open Announces 2009 California Finalists

Today in Palo Alto, Calif., the 2009 Clean Tech Open honored six teams of entrepreneurs as California Regional Finalists. The competition began in 2005 as the California Clean Tech Open. This is the first year the competition extended outside the state; the California winners will go on to compete with finalists from other regions.

Still, being named a California finalist—out of a field of 158 entries—is huge, and the folks picked today knew it. Four competitors sitting in front of me were literally holding their breath waiting for the award in their category to be announced—and simultaneously exhaled when they heard their company’s name. It’s such a big deal because California finalists get $100,000 in cash and services to help them build their businesses, and because being named a finalist is a validation of their ideas that brings them one step closer to getting the venture investment they need to make their dreams come true.

The California Regional Competition sends six teams on to the finals, one in each of the competition categories—Air, Water, and Waste; Energy Efficiency; Green Building; Renewables; Smart Power; and Transportation. The Pacific Northwest and Rocky Mountain competitions will each send three finalists on. A National Grand Champion will receive a $250,000 prize package.

While all six finalists announced seem to have solid business potential, two in particular generated a distinct buzz in the room when their founders stepped up to describe their ideas—not chatter, exactly, but that feeling of the folks in the audience leaning forward just a bit to make sure they were catching every word, the rustle of people rummaging for pens to take notes.

Based on this buzz factor, I’d say the California startup with the biggest chance at the Grand Prize is Micromidas. The team, mostly recent graduates of the University of California at Davis, has developed a technology that turns raw sewage into biodegradable plastic. They intend to work with wastewater treatment companies to build the processing plants to produce the plastic. The plastic would be sold as packaging material for retail goods; consumers would then dispose of the packaging in home or municipal composting facilities. The price of the new plastic would be competitive with petroleum-based plastics. Micromidas is looking to raise $1 million in funding and expects to have its first pilot plant up and running a year after obtaining that funding.

Also a possible Grand Prize contender, based on the audience buzz: Armageddon Energy. This solar-power-kit-in-a-box company actually grew out of the Clean Tech Open itself; its founders met as volunteers behind the scenes, where they helped set up chairs and click through slides, and then started talking about what it would take to enter the competition themselves. The team has designed a 1-kilowatt rooftop solar system meant to be sold in a flat-pack box through big-box retailers like Home Depot and Costco. While consumers could do some of the assembly and installation themselves, the final step—connection to the electric grid—would require an electrician, though not one with any special solar-installer certification. Along with the system, the company would sell dashboards and web services for consumers interested in closely tracking system performance.
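
For scale, here's a quick back-of-the-envelope estimate of what a 1-kilowatt rooftop array delivers over a year; the capacity factor is a typical assumed value of mine, not a figure from Armageddon Energy.

# Rough annual output of a 1 kW rooftop array. The capacity factor is an assumed
# typical value for fixed residential panels, not a number from the company.

RATED_KW = 1.0
CAPACITY_FACTOR = 0.17   # assumed fraction of rated power delivered on average
HOURS_PER_YEAR = 8760

annual_kwh = RATED_KW * CAPACITY_FACTOR * HOURS_PER_YEAR
print(f"Estimated output: {annual_kwh:.0f} kWh per year")  # roughly 1,500 kWh

That works out to roughly 15 percent of a typical U.S. household's annual electricity use of about 11,000 kilowatt-hours.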

The other California finalists were Alphabet Energy, tru2earth, Ecofactor, and Fuelsaver. National winners will be announced at an event in San Francisco on 17 November.

Read Spectrum’s coverage of past winners:
Crowning the Clean Tech Stars
California Clean Tech Open Wraps Up for 2007
Build a Kite Big Enough for a Ship and You’ll Save Fuel
Winner of Clean Tech Open Scores Again

 

Nobel Controversy: Willard Boyle denies Gordon's claims, proud of CCD work

Editor's Note: This is part of our ongoing coverage of the 2009 Nobel Prize in Physics. Read more about the Nobel Prize winners themselves, the Bell Labs engineer who patented the CCD imager, and the illustrious history of Bell Labs.

 

Last Monday, I asked Willard Boyle, who will share the 2009 Nobel Prize in physics with former colleague George Smith, a few uncomfortable questions:

Should Eugene Gordon have been on the patent for the CCD?
“I don’t think so. I don’t see it at all. He didn’t have any entries in a notebook... I’ve no recollection of his having been around.”

Should Mike Tompsett be recognized for making the first CCD camera?
“Fair enough. He’s one of the three that made the first model. No problem there.”

What about those who complain that there is no physics to the invention of the CCD—that it’s purely a work of engineering?
“They should complain to the Nobel Committee. We’ve already had a nice engineering award for it.”

In a conversation with me it didn’t seem that Gordon’s claims had dented Boyle’s happiness in the least. I asked him what the week of the announcements was like:

“It’s been extremely busy,” says the retired Haligonian. He described the 05:00 wake-up call from Stockholm as “a shocking moment.” His wife answered the phone: “Stockholm is calling.”

Since the call he’s been in contact with fellow winner George Smith: “We’re both happy. You get a nice comfortable feeling,” he says. (Several hundred thousand euros would indeed be comforting, in my opinion.)

Recalling the day he and Smith worked on the CCD concept, he says that when he came home that night he told his wife: “George and I did something special today.”

Years later, he and Smith received a series of letters from major observatories around the world, thanking them for their work. “It made you feel good,” he says. He’s looking forward to going to Stockholm to accept the award.
