Tech Talk

Hiding from the Internet

These days, engineers are working hard to make our network connections faster and more reliable. In this issue, David Schneider reports that communications satellite company Iridium is upping its maximum data rate fourfold, and Clark T-C. Nguyen suggests that mechanical components may be the way to make future wireless devices better. Information is indeed coming at us faster and in more places. But that’s not all good. In October, author Nathan Zeldes talked about the importance of getting control of this flood that threatens productivity and creativity.

For me, the only way to get such control is to cut off my network connections, all of them, for three weeks every year.

I’ve taken this annual three-week Internet sabbatical for about as long as I can remember. (I didn’t use to call it that; I used to call it vacation.) It acts as a reset button. It reminds me that looking out into the distance instead of at the computer in front of me makes a real difference in how my eyes feel; that getting my hands off the keyboard and moving around regularly is good for my body; I sit differently, stand taller. It reminds me that having time to let my mind drift, instead of being yanked from demand to demand, can lead me to new ideas.

And when I go back to the keyboard, the computer screen, the Internet, I handle it better. I force myself not to check email quite so compulsively, to take the occasional stretch break. For a while, anyway.

This year, my Internet sabbatical took me to a lake in New England and a beach in New Jersey. We’ve gone to both places for years; I can remember when the New Jersey rental didn’t have television—too remote for broadcast TV. I liked it back when we didn’t own cell phones; the rental didn’t have a landline installed, so I told potential visitors not to bother to call ahead, just come on down and look for me on the beach. Now both rentals had hundreds of television channels, thanks to satellite TV, and my cell phone means I’m never out of touch, even when I’d like to be.

But I thought I could still escape from the Internet. I just didn’t know how hard it would be to hide.

In New England this year, we rented a cabin with nine other family members and friends. I didn’t bring any devices that could get online, but among the 14 of us staying there (six of them children), we’d packed in two full-size laptops, four netbooks, and an iPhone. Good luck with that, I thought, not expecting any access beyond dial-up; hope you downloaded plenty of movies ahead of time.

It turned out I was the one surprised—the cabin had both WiFi and wired Ethernet. So folks were on their computers constantly—watching silly YouTube videos, checking email, looking at weather reports and sports scores, googling this and that and everything. Instead of the lapping of the water on the little beach, I heard the beeps of games and music from the latest viral video. And, even worse, they kept calling me to come over to the screen, as if my life would not be complete were I not to see a certain clip of extreme sailing or last night’s Daily Show.

I resolved that I would not touch a keyboard for the entire vacation; wouldn’t look at a screen if I could possibly avoid it. I was worried that once I touched a computer I’d quickly surf over to my email and would get sucked into responding and bang, I’d never be able to break away again.

It got a little tense one afternoon when my husband’s computer wouldn’t log on to the wireless. Since I’m tech support in the family, he asked me to try to fix it. “I’m not touching the computer,” I said, “but I think the thing to do would be to go to the Apple menu, open System Preferences, look at your Internet connection settings, and switch them to manual and then back to automatic, which will force it to reacquire the DHCP or something like that. Anyway, it should fix it.” (I could feel my little brain cells click into non-vacation mode, and I wasn’t liking it.)

He held the computer out to me. “It’d only take you a minute to fix it.” I kept my arms at my sides, and a cousin jumped in to help him. (Thank you, Jeff.)

I figured this torture would only last a week, because there was no way our landlord in New Jersey was going to have hooked our shabby but much-loved beach rental up to the Internet. I was right about that; what I didn’t count on was the new community WiFi—slow, but it worked in most of the rooms of the house. I was not pleased. I like listening to the ocean from the screened-in porch, and I find the street noise and occasional group-house party sounds only occasionally irritating—but having someone out there tap-tapping on a keyboard was sure to drive me nuts. (And I couldn’t keep them off the porch; the signal was strongest there.)

The temptation to get online grew. My oldest son was not traveling with us; instead, he was with a school group performing at the Edinburgh Fringe Festival. And he hadn’t been calling. However, other folks on the trip were posting pictures online, so we could Internet-stalk him to see if he was OK. I caved and looked at the photos twice while my husband paged through them—I did stick to my resolution not to touch the keyboard. I realized afterwards that I would have been fine waiting until I got home; the photos didn’t tell me anything other than that he had indeed made it to Scotland.

Like TV and the phone before it, the Internet is invading my vacation space. And pretty soon there’s going to be no place to hide.

Photo: John Lund/Paula Zacharias/Getty Images

Screen, Paper, and Information Overload

The following is a guest post by Nathan Zeldes, author of IEEE Spectrum's "How to Beat Information Overload."

After I published my Spectrum article on Infoglut, I received many responses through a variety of online channels. That's one of the perks of being an active cyber-citizen: you get to meet interesting people. However, a number of respondents focused on the same complaint: the article, they claimed, was far too lengthy, thereby itself contributing to Information Overload.

This statement puzzled me, because I wrote the article in close cooperation with Spectrum’s capable editors, and in conformance with their expectations. My curiosity piqued, I pulled out some back copies of the magazine from my shelf and counted the pages. It turns out that my article is 3.5 printed pages long, well within the norm for Spectrum feature articles. In fact, I recently blogged about the shortening of articles in printed magazines over the decades, using Scientific American as an example; that magazine’s features went down from 12 content pages per article in the sixties to 8 pages in 2009. My 3.5-pager would count as positively brief in Sci Am even today... So what is going on?

What made it even weirder is that the article did not appear particularly long in the printed magazine; but even I had to admit that the online version (which, it turns out, is the version the complaining readers saw) does seem to go on and on...

Thinking it over, I realized that there is a key difference between reading online and on paper. The information density on paper is certainly greater: lots more can fit—and be consumed—on a printed page than can be put on a screen of the same physical dimensions. The text of my article fills seven vertical “screenfuls” on my 22-inch monitor; it fits on 2.5 double-page spreads of Spectrum, even though each such spread is a bit smaller than the monitor’s area. This means that I could take in the printed article in three “gulps”, versus seven online. Even more important, the printed version is random access; I can move my eyes across the pages and home in on what I want, taking my cues from the layout, section header typography, and images. The long-scrolling online version is serial access: you need to scroll patiently through uniform-looking text blocks to find anything, and your best strategy would be to read it start to finish serially to see what’s in it. That’s why we use books, and not the scrolls the ancients had: flipping through pages is much more efficient if you plan to skim.

This came as a surprise to me: I'm used to considering online reading as a valid alternative to print (until it comes to snuggling up in bed with a good novel, at any rate). Online has its advantages—searchability being a major one—but in an age of impatience, Gutenberg’s good ol’ system has its benefits as well!

Nathan Zeldes blogs on Information Overload here.
 

NORAD Ready To Track Santa Claus

The North American Aerospace Defense Command (NORAD) announced today it is ready to track Santa Claus's coordinates as soon as the red-helmeted old man takes off from the North Pole. No, don't worry, Santa will not be riding an ICBM-powered sleigh. This is just an annual public relations program by NORAD.

On 24 December, you (or your kids) can go to the NORAD Tracks Santa web site to see Santa's location in real time. You can also call 1-877-HI-NORAD (1-877-446-6723), send an email to noradtrackssanta@gmail.com, or check Facebook, Twitter, Flickr, YouTube, Google Maps, and Google Earth (look for @noradsanta).

If you think this is all very silly, it's probably because you don't have a pre-teen bombarding (no pun intended) you with questions about Saint Nick. If you did, you'd thank NORAD for this initiative, which is actually really cute (check out their FAQ describing Santa's sleigh technical specs and whether fighter jets might intercept him) and began with an even cuter little tale. In 1955, a Sears store in Colorado Springs ran newspaper ads telling children to call Santa on a special phone number. But the number had a typo. When the first kid phoned in on Christmas Eve, on the other end of the line was Colonel Harry Shoup at the Continental Air Defense Command (CONAD), NORAD's predecessor, in Colorado Springs.

After more calls came in, Colonel Shoup realized one should tread lightly when it comes to children and their patron saint. So he dutifully ordered his operators to find Santa's exact location and reported it back to every child who called that night. Ah, the Christmas spirit!

Below, NORAD's video of Santa's 2008 journey, captured with its Santa cams.

 


The Cat Brain Cliff Notes

Two leading scientists are embroiled in a controversy about a cat brain simulation. At first blush, the topic might seem silly. But the stakes are higher than the tired cat fight jokes would lead you to think. This argument has larger implications for the future of AI research, and particularly for a field called computational neuroscience. The controversy has called into question not only the legitimacy of one researcher's work, but of all brain simulation work. I think it's important to untangle the assumptions and accusations in clear, non-specialized language.

Background

Last week at the 2009 Supercomputing conference, IBM researchers on DARPA's SyNAPSE project presented a paper about a simulation that they said approached the scale of a cat's brain in terms of the number of neurons and synapses involved. They won the Gordon Bell prize for their efforts. A few days later, neuroscientist Henry Markram sent an open letter refuting the claims of Dharmendra Modha, who leads the IBM effort, and calling his paper a hoax.

First, let's understand the stakes and who's playing. Computational neuroscience—or cognitive computing, or biometaphorical computing—is an emerging field that is new enough that its luminaries have not yet agreed on a solid name for it. The goal (very simplified) is to build a brain.

But you can't create a brain out of meat any more than you can make it out of popsicle sticks, so you're stuck with computer simulations. That's not something either neuroscientists or computer scientists can do alone. It takes a village of electrical engineers, materials scientists, neuroscientists, biologists, computer scientists, nanotechnology researchers, and more. All these specialists bring different talents and assumptions. As such, the field is a fairly good embodiment of the Tower of Babel story, or the old story of the blind men feeling the elephant and arguing about whether an elephant is a foot or a trunk.

Markram v Modha

Consider Dharmendra Modha and Henry Markram. One is a neuroscientist, the other is a computer scientist. "The conflict arises since both species are needed to simulate (computer scientist) a brain (neuroscientist)," says Massimiliano Versace, who works on the SyNAPSE project at the Department of Cognitive and Neural Systems at Boston University. "You need both."

Markram is a brilliant neuroscientist. He discovered spike timing dependent plasticity (STDP). For those of you whose heads just exploded, that means essentially he figured out how neurons learn. STDP is what makes you, a human, able to understand causality: you see a flash of lightning, hear a crack of thunder, watch a tree fall on your house, and you are able to say, "Aw, crap. Now I need to call my insurance company." Markram figured out how our neurons fire together and wire together. He's kind of a big deal.
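For readers who want something more concrete, here's a minimal sketch of the standard pairwise STDP rule: strengthen a synapse when the presynaptic spike precedes the postsynaptic one, weaken it when the order is reversed. The parameter values are illustrative placeholders, not numbers from Markram's work.

```python
import math

# Illustrative STDP parameters (placeholders, not Markram's values)
A_PLUS, A_MINUS = 0.01, 0.012   # learning-rate amplitudes
TAU_PLUS = TAU_MINUS = 20.0     # time constants in milliseconds

def stdp_weight_change(t_pre, t_post):
    """Synaptic weight change for one pre/post spike pair.

    If the presynaptic spike arrives before the postsynaptic one
    (it could have helped cause it), the synapse is strengthened;
    if it arrives after, the synapse is weakened.
    """
    dt = t_post - t_pre  # milliseconds
    if dt > 0:    # pre before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Lightning (pre) fires 5 ms before the thunder-evoked spike (post):
print(stdp_weight_change(t_pre=0.0, t_post=5.0))  # positive: "fire together, wire together"
print(stdp_weight_change(t_pre=5.0, t_post=0.0))  # negative: causality reversed
```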

Modha is also kind of a big deal. He figured out a new adaptive replacement cache algorithm. While I have no cute example to explain this, I will say that IBM was so thrilled about Modha's work (and the patent it yielded) that they gave him an office at their Almaden research center and carte blanche. Modha decided that he wanted to build a brain. The next year, Almaden's annual conference was on cognitive computing. DARPA soon came knocking.
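For the curious, here is a minimal sketch of a plain least-recently-used (LRU) cache, the kind of simple replacement policy that Modha's Adaptive Replacement Cache (ARC) improves on by also tracking how often entries are reused. It is only meant to show what a cache replacement policy does; it is not how ARC works.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the entry that has gone unused the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used entry
```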

But the point is this: "It's safe to say that Modha is as brilliant a computer scientist as Markram is a neuroscientist," Versace says.

SyNAPSE v Blue Brain

Their respective projects are as different as the two researchers. Markram's Blue Brain project, at the Ecole Polytechnique Federale de Lausanne, is a simulation running on a (slightly older, smaller) IBM Blue Gene. With his model, Markram is investigating the roots of neurological disorders like Alzheimer's disease, autism, and depression. Markram says he is building the entire brain from the ground up, to illuminate how the summed activity of 100 billion neurons can result in consciousness. "Our whole sense of consciousness and memory exists because of the way our brains are wired," he says. He wants to engineer it. He wants a machine that will talk back to him someday, a machine that will be conscious inside the confines of its supercomputer. "That's the secret about Blue Brain," he says. "It will search the Internet by itself; it will search to become more real. Like a Pinocchio."

Modha is working on SyNAPSE, a project that couldn't be more different. With SyNAPSE, DARPA wants to create electronics that take a page out of the brain's book. The stated purpose is to "investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels." Everything you need to know about SyNAPSE is on the Neurdon blog, which is written by Boston University postdoc Versace and Ben Chandler, a PhD student in the department. BU is a subcontractor to HP and HRL, which compete with IBM on the DARPA project (this is the way most DARPA projects are run: three competitors make everyone work harder).

For an explanation of what makes brains so great, I went to Richard Granger (who is, again, kind of a big deal at the Dartmouth Brain Engineering Laboratory). "Brains somehow carry out hugely parallel algorithms, using very slow, sparsely connected, probabilistic elements, and they somehow scale well as they grow," he says. "Big-brained mammals exhibit capabilities far beyond small brains, and there evidently are not severely diminishing returns." Also, they use a very small amount of power.

If we could learn from the brain, what might we achieve? With circuits as reconfigurable, specialized, and fault-tolerant as the brain, a new species of computer would be better at recognizing faces and driving a car than the world's most sophisticated supercomputers. Right now, Roadrunner, the supercomputer that comes closest to replicating a human's ability to drive in rush-hour traffic, weighs 227 metric tons and requires a diet of about 3 megawatts. By contrast, the brain regularly handles rush-hour driving on 20 watts (comparable to the power consumption of a Nintendo Wii), and its 1.5 kilograms fit neatly into your skull.
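To put those numbers side by side (the figures come from the paragraph above; the quick arithmetic is only an illustration):

```python
# Rough comparison of Roadrunner and the human brain, using the figures above.
roadrunner_power_w = 3_000_000   # about 3 megawatts
roadrunner_mass_kg = 227_000     # 227 metric tons
brain_power_w = 20
brain_mass_kg = 1.5

print(f"Power ratio: {roadrunner_power_w / brain_power_w:,.0f}x")  # ~150,000x
print(f"Mass ratio:  {roadrunner_mass_kg / brain_mass_kg:,.0f}x")  # ~151,000x
```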

The upshot is this: Both Modha's and Markram's projects model the brain, but for different purposes and with different end results in mind, and so they focus on different aspects of brain modeling.

The Trouble With Models

But what both projects have in common is their goal of figuring out what makes the mammalian brain work the way it does. We are a long way off. As of 2009, no one has yet been able to simulate even the simplest, dumbest animal—a fruit fly, say—in the process of reacting to a drink of sugar water. "Fruit flies are actually really complicated," Chandler explains (see BU's CELEST project for more). "As far as I know, we haven't even created a synthetic aplysia or nematode yet."

Why is that? Jim Olds (who directs George Mason University's Krasnow Institute for Advanced Study, and who is a neuroscientist) explains that what neuroscience is sorely lacking is a unifying principle. "We need an Einstein of neuroscience," he says, "to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity." Here's what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?

But it's a chicken and egg problem, because without a brain-specific theory of relativity, there is no way to identify a basic unit of neuroscience. And without a basic unit of neuroscience, you can't build a model. So, to get anywhere, you must abstract. "[Markram] thinks it's the neuron," says Olds. "But what if that's not what makes a brain work as a brain? What if it's the synaptic spine, or the ion channels?" There are even neuroscientists who think you can never model the brain—because in order to do it you'd have to model down to quantum effects.

No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. "We do not have a definition of consciousness," says Granger. "Or, worse, we have fifteen mutually incompatible definitions."

No Six for You

It should be pretty clear at this point that no one's going to be building a Caprica Six any time soon. Or a cat. Or a fruit fly. But that doesn't mean all attempts at modeling any aspect of the brain are useless, or to use Markram's more loaded language, a hoax.

"To understand the brain—or any complex system—it is unavoidable that it be viewed computationally," Granger says. Granger says we should treat brain circuits as, well, circuits. "Circuits on one hand are physical systems; on the other, they are carrying out algorithms. To understand them does entail extensive simulation; but also computational, algorithm-level analysis, i.e., the elucidation of principles from simulation and experiment. Not just "that" they work, but how and why they work." So, can we arrive at deep insights without building a few starting simulations first?

"We definitely want to build simulations, including large, intermediate, and small-scale models, at low, middle, and high levels," says Granger. "The field has indeed been doing so for many years, and much of what we currently understand (such as the examples given) grew from many such models. The field is making great strides, and work such as Henry's, and many others', are prominent examples."

Hoax or No Hoax?

And ultimately, that is where Modha's work is important. It's not that he's created a cat brain. Every journalist who interviewed him has heard him correct them in the following way: "No, no, it's not a cat brain. A cat-SCALE simulation."

Some would ask whether there is a point to having a cat-scale simulation if it does not model an actual cat with any great fidelity. I think you should ask the Gordon Bell prize people*. Because for computer science, this is indeed a big deal. Neurdon's Ben Chandler says, "I don't think [other neural simulation environments] can handle 147,000 processors or 144 terabytes of memory. Actually mapping a system that large, and getting only a 0.3 percent deviation in workloads across cores, is pretty impressive."
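As an aside, one simple way to quantify that kind of load balance is to measure how far the busiest core strays from the average. The sketch below is a hypothetical illustration of such a metric, not the measurement IBM actually reported:

```python
# Hypothetical metric: worst-case percent deviation of per-core work from the mean.
def load_deviation_percent(per_core_work):
    mean = sum(per_core_work) / len(per_core_work)
    worst = max(abs(w - mean) for w in per_core_work)
    return 100.0 * worst / mean

# Toy example with four cores doing nearly equal amounts of work
print(load_deviation_percent([1000, 1003, 998, 999]))  # about 0.3 percent
```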

"We all want to understand how brain circuits do what they do," Granger says, "in particular, how their computational powers (perception, recognition, memory retrieval) manage to outperform our extant engineering systems. Let's avoid this (vast) discussion for now, and jump to this: To achieve large-scale simulations that exhibit critical characteristics of brain circuits, we will very likely need to address four current roadblock issues: speed, power, scaling, and parallelism."

A teaser for the next post (which will address the first point in Markram's letter): Modha's work addresses all of these.

*And for those calling for the Gordon Bell people to rescind their prize, please recall that the Bell prize is "awarded each year to recognize outstanding achievement in high-performance computing. The purpose of the award is to track the progress over time of parallel computing, with particular emphasis on rewarding innovation in applying high-performance computing to applications in science. Prizes are awarded for peak performance as well as special achievements in scalability."

 

 

Cat Fight Brews Over Cat Brain

12-1-09 Update: An explanation of the controversy.

Last week, IBM announced that they had simulated a brain with the number of neurons and synapses present in a cat's brain.

In February 2008, the National Academy of Engineering issued a grand challenge to reverse engineer the human brain, sweetening a pot neuroscientists had already been stirring for a long time. There are as many theories of mind as there are researchers working on it, and in some cases there is a real grudge match between the theorists. And maybe it's because they're both affiliated with IBM in some way, but it seems that none of these are more bloody than the one between IBM Almaden's Dharmendra Modha and EPFL's Henry Markram.

So it wasn't strictly a surprise when Henry Markram, the lead on the EPFL Blue Brain project, took umbrage at the publicity IBM's project received last week. He sent the following letter to IBM CTO Bernard Meyerson, CCing many members of the media, including reporters from the UK Daily Mail, Die Zeit, Wired, Discover, Forbes, and me.

Dear Bernie,

You told me you would string this guy up by the toes the last time Mohda made his stupid statement about simulating the mouse's brain.

I thought that having gone through Blue Brain so carefully, journalists would be able to recognize that what IBM reported is a scam - no where near a cat-scale brain simulation, but somehow they are totally deceived by these incredible statements.

I am absolutely shocked at this announcement. Not because it is any kind of technical feat, but because of the mass deception of the public.

1. These are point neurons (missing 99.999% of the brain; no branches; no detailed ion channels; the simplest possible equation you can imagine to simulate a neuron, totally trivial synapses; and using the STDP learning rule I discovered in this way is also is a joke).

2. All these kinds of simulations are trivial and have been around for decades - simply called artificial neural network (ANN) simulations. We even stooped to doing these kinds of simulations as bench mark tests 4 years ago with 10's of millions of such points before we bought the Blue Gene/L. If we (or anyone else) wanted to we could easily do this for a billion "points", but we would certainly not call it a cat-scale simulation. It is really no big deal to simulate a billion points interacting if you have a big enough computer. The only step here is that they have at their disposal a big computer. For a grown up "researcher" to get excited because one can simulate billions of points interacting is ludicrous.

3. It is not even an innovation in simulation technology. You don't need any special "C2 simulator", this is just a hoax and a PR stunt. Most neural network simulators for parallel machines can can do this today. Nest, pNeuron, SPIKE, CSIM, etc, etc. all of them can do this! We could do the same simulation immediately, this very second by just  loading up some network of points on such a machine, but it would just be a complete waste of time - and again, I would consider it shameful and unethical to call it a cat simulation.

4. This is light years away from a cat brain, not even close to an ants brain in complexity. It is highly unethical of Mohda to mislead the public in making people believe they have actually simulated a cat's brain. Absolutely shocking.

5. There is no qualified neuroscientist on the planet that would agree that this is even close to a cat's brain. I see he did not stop making such stupid statements after they claimed they simulated a mouse's brain.

6. You should also ask Mohda where he got the notion of "reverse engineering" from, when he does not even know what it means - look the the models - this has nothing to do with reverse engineering. And mouse, rat, cat, primate, human - ask him where he took that from? Simply a PR stunt here to ride on Blue Brain.

That IBM and DARPA would support such deceptive announcements is even more shocking.

That the Bell prize would be awarded for such nonsense is beyond belief. I never realized that such trivial and unethical behavior would actually be rewarded. I would have expected an ethics committee to string this guy up by the toes.

I suppose it is up to me to let the "cat out of the bag" about this outright deception of the public.

Competition is great, but this is a disgrace and extremely harmful to the field. Obviously Mohda would like to claim he simulated the Human brain next - I really hope someone does some scientific and ethical checking up on this guy.

All the best,

Henry

Tech Awards Honor Technology for Humanity

Here in Silicon Valley, where folks line up for the latest iPhone release and twitter their every waking thought, it’s often easy to forget that it’s not the sleekest and fastest and newest technology that makes the most difference. For most of the world, it’s about the basics—food, water, light.

Every year, the Tech Museum and Applied Materials bring together Silicon Valley luminaries with entrepreneurs from around the world to focus on just that kind of basic technology and the difference it can make. Last night, at the annual Tech Awards gala, entrepreneurs from fifteen organizations working to make that kind of difference were honored as laureates, five of those organizations received cash prizes of $50,000 each, and former Vice President Al Gore accepted the 2009 James C. Morgan Global Humanitarian Award.

The laureates truly represented the world—they came from as far away as Nigeria, Mexico, South Africa, Botswana, India, Mozambique, and Brazil, and as close as nearby Emeryville, Calif. They’ve figured out ways to light the night with lanterns and boat-based solar charging stations, to water crops with simple drip irrigation systems made locally using laser drilling, to let people confirm that prescription drugs are what they say they are by sending a simple text message, and to make people's lives better in diverse other ways.

Winners of the cash prizes were:
• Joseph Adelegan of Nigeria for “Cows to Kilowatts,” a project that decontaminates the waste stream from slaughterhouses and turns it into methane fuel.
• Alternative Energy Development Corp. of South Africa, for its zinc-air fuel cell power systems.
• The Akshaya Patra Foundation of India, for its school meals program, which uses high-tech kitchens to serve millions of schoolchildren a morning meal daily.
• World of Good Development Organization, based in Emeryville, Calif., for its Fair Wage Guide Software that helps price local handmade goods around the world and encourages ethical trade.
• PATH, working in India, Brazil, and Colombia, for Ultra Rice, a nutrient fortified additive to standard rice.

For a full list of laureates, and information about their projects, click here.

Photo courtesy of Applied Materials

Energy Management Startup Wins 2009 CleanTech Open

EcoFactor, an entrant in the Smart Power category from California, was crowned winner of the 2009 Cleantech Open, a business competition created to find, fund, and foster startup clean technology companies. EcoFactor has designed a software system that communicates with home thermostats to reduce energy use. The company won $250,000 in cash and services.

The competition also honored two runners-up: Alphabet Energy, a waste-heat recapture venture, and my personal favorite, MicroMidas, a company with technology to transform sewage into plastic.

The Open itself was limited to entries from California, the Rocky Mountain region, and the Pacific Northwest. However, a separate, parallel competition, run with the help of the Kauffman Foundation, offered a prize of $100,000 in services to entries from around the world. The winner of this Global Cleantech Open Ideas competition was Replenish Energy of Puerto Rico, a company that uses microalgae as a source of fuel.

California Sets Energy Standards for Televisions

Today, the California Energy Commission set energy efficiency standards for televisions, due to go into effect in 2011. According to Pacific Gas & Electric, the standards, which affect TVs with screens of 58 inches or smaller, will cut carbon dioxide emissions by three million metric tons over the next ten years and save $8.1 billion in energy costs. Under the standards, a 42-inch television would be limited to consuming at most 183 watts by 2011 and 115 watts by 2013; current Energy Star guidelines, which are voluntary, set 208 watts as a standard for 42-inch TVs. The California regulations will be mandatory; televisions that don’t meet them will not be allowed to be sold in the state. (Some 1,000 sets on the market already meet the 2011 standards.)
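As a rough back-of-envelope on what the per-set limits mean: the wattage figures below are the ones quoted above, but the viewing hours and electricity price are assumptions added purely for illustration.

```python
# Rough per-TV savings estimate for a 42-inch set.
energy_star_w = 208     # current voluntary Energy Star guideline (from the article)
cec_2013_w = 115        # California's 2013 limit (from the article)
hours_per_day = 5       # assumption: average daily viewing time
dollars_per_kwh = 0.15  # assumption: residential electricity price

kwh_saved_per_year = (energy_star_w - cec_2013_w) / 1000 * hours_per_day * 365
print(f"~{kwh_saved_per_year:.0f} kWh, or about "
      f"${kwh_saved_per_year * dollars_per_kwh:.0f}, saved per TV per year")
```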

The Consumer Electronics Association, an organization that represents the manufacturers of consumer electronics products, is vociferously protesting the regulations. The CEA managed to get the commission to delay its decision, originally expected on 4 November, by filing a 91-page brief just moments before the comment period closed. At a conference call with the press today, CEA representatives indicated that they still felt shortchanged by the process, because no changes were made in the proposed regulations as a result of their comments.

While the Energy Commission sees its regulations as saving consumers money in the form of energy costs, as well as having positive effects on the environment, the CEA sees the regulations as all bad.

Take TV prices, for example. The Energy Commission says that, according to its analysis, the regulations will not affect the prices of televisions. The CEA counters that the regulations will drive up purchase prices, either because of the direct costs of energy-saving technology (which the CEA estimates at tens to hundreds of dollars) or because the regulations will force low-tech, power-guzzling models off the market. Because these tend to be cheaper units from no-name manufacturers, competitors will then be able to raise prices.

The CEA also argues that besides pushing low-tech TVs out of the market, the regulations will prevent advanced-tech TVs from getting in.

Seth Greenstein, counsel to the CEA, says that whenever a new technology gets introduced, it needs time to be gradually optimized for performance. “If we were still in the world of CRT TVs or DLPs, and you wanted to introduce plasma to the marketplace, you couldn’t,” he says.

Greenstein points out that OLED displays are still in development and 3-D televisions are expected to roll out in large numbers this year. The industry relies on sales to early adopters to fund continued development of these technologies—but it won't be able to if new technologies can't get to market.

“The real risk of the California Energy Commission's regulation," says Jason Oxman, the CEA’s Senior Vice President for Industry Affairs, is that there will be innovative technologies "that California consumers won’t be able to enjoy because of CEC regulations, that consumers in 49 other states enjoy.”

Realistically, it’s hard to imagine TV manufacturers ignoring the huge California market; it’s more likely that California regulations will influence the world. That's a thought that makes the CEA shudder.

Says Doug Johnson, the CEA’s Senior Director of Technology Policy, “We don’t want to take a bad idea and make it a national bad idea.”

The CEA hasn’t revealed its next move—and won’t until it has time to review the complete regulatory documents—but has indicated that it will make every effort to make sure that, in Oxman's words, “common sense is restored” in California.

Photo credit

Intel & AMD Both Claim Bragging Rights

Intel and AMD reached an agreement earlier this week aimed at ending a messy (read: costly) legal dispute over what AMD can do with a group of technologies the two chipmakers have cross licensed for most of this decade, and what Intel can't do to convince computer companies that it should be their exclusive chip supplier. But they still remain fiercely competitive. For AMD, the cherry on top of knowing that its bitter rival will soon fork over $1.25 billion as a penance for its anticompetitive behavior is finding out that the world’s fastest supercomputer runs on its chips.

The Cray XT5 “Jaguar,” which is at the U.S. Department of Energy’s Oak Ridge Leadership Computing Facility in Tennessee, tops the latest semi-annual list of the world’s 500 most powerful supercomputers. The system, which runs on AMD’s Opteron chips, is capable of 1.75 quadrillion floating-point operations per second. Jaguar had been the runner-up in June and last November. It now swaps places with IBM’s “Roadrunner” computer, located at Los Alamos National Laboratory in New Mexico. Roadrunner, which had held the title since the June 2008 ranking, was the first computer to break the petaflop barrier.

Still, Intel has no reason to hang its head over the news. The rest of the list paints a picture of its dominance. Of the remaining 498 slots in the rankings, computers with Intel inside occupy 402. AMD has the notoriety of having helped create the heavyweight champ. But to extend the boxing analogy further, consider this: if there were 10 weight divisions in the supercomputer game, Intel would hold eight of the title belts.

I guess both companies have reason to crow. And both have. Dueling press releases about the firms’ peta- and gigaflop achievements reached e-mail inboxes seemingly a nanosecond after the German and U.S. computer scientists who compile the list made the Top 500 announcement.


NASA finds water on the Moon, raising hopes for a Moon base

NASA is reporting that one of its spacecraft has found significant traces of water on the Moon.

The rocket from NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) mission slammed into a crater on the Moon's surface on October 9, blasting out a new, smaller crater and uncovering soil from below the lunar surface.

“We’re unlocking the mysteries of our nearest neighbor and, by extension, the solar system,” said Michael Wargo, chief lunar scientist at NASA Headquarters in Washington.

If the Moon becomes a waystation in future voyages to the planets, space travelers could possibly stock up on water at a base on the Moon.

Even if the Moon is not a waystation, the cost to maintain a permanent crewed base on the Moon seems to have just gone down. Water is relatively heavy to transport by spacecraft, and a big cost of maintaining such a base, it was previously thought, would be supplying its inhabitants with water. If there are local sources, a big impediment may have been removed. (It still remains to be seen how feasible it will be to gather this water.)

Some astronomers say that the finding of water in the lunar soil may be in keeping with the currently popular model of the Moon's formation, which holds that the Moon formed from the Earth's crust after a giant impact early in the history of the Solar System. Some of the water common in the Earth's crust would have remained below the Moon's surface and in deep craters that do not get sunlight. Others think the water came from comets that crash into the Moon periodically.

Whatever the source, on the Moon, sunlight breaks down water into its constituent hydrogen and oxygen, which escape into space because of the Moon's weak gravity. Any water on the surface that is exposed to sunlight has escaped this way; thus the surface seems waterless. But many astronomers have believed that there is water below the surface and in craters that don't get sunlight because of the Moon's tilt.

LCROSS slammed into one such crater near the Moon's south pole. NASA said LCROSS detected about 24 gallons of water in the lunar soil that was uncovered in the impact. That's a lot of water.

You can read more about the LCROSS mission here: lcross.arc.nasa.gov/

NASA held a press conference to announce the finding.

 

 
