Tech Talk

Cat Fight Brews Over Cat Brain

12-1-09 Update: An explanation of the controversy.

Last week, IBM announced that they had simulated a brain with the number of neurons and synapses present in a cat's brain.

In February 2008, the National Academy of Engineering issued a grand challenge to reverse engineer the human brain, sweetening a pot neuroscientists had already been stirring for a long time. There are as many theories of mind as there are researchers working on the problem, and in some cases there is a real grudge match between the theorists. And maybe it's because they're both affiliated with IBM in some way, but it seems that none of these feuds is bloodier than the one between IBM Almaden's Dharmendra Modha and EPFL's Henry Markram.

So it wasn't strictly a surprise when Henry Markram, the lead on the EPFL Blue Brain project, took umbrage at the publicity IBM's project received last week. He sent the following letter to IBM CTO Bernard Meyerson, CCing many members of the media, including reporters from the UK Daily Mail, Die Zeit, Wired, Discover, Forbes, and me.

Dear Bernie,

You told me you would string this guy up by the toes the last time Mohda made his stupid statement about simulating the mouse's brain.

I thought that having gone through Blue Brain so carefully, journalists would be able to recognize that what IBM reported is a scam - no where near a cat-scale brain simulation, but somehow they are totally deceived by these incredible statements.

I am absolutely shocked at this announcement. Not because it is any kind of technical feat, but because of the mass deception of the public.

1. These are point neurons (missing 99.999% of the brain; no branches; no detailed ion channels; the simplest possible equation you can imagine to simulate a neuron, totally trivial synapses; and using the STDP learning rule I discovered in this way is also is a joke).

2. All these kinds of simulations are trivial and have been around for decades - simply called artificial neural network (ANN) simulations. We even stooped to doing these kinds of simulations as bench mark tests 4 years ago with 10's of millions of such points before we bought the Blue Gene/L. If we (or anyone else) wanted to we could easily do this for a billion "points", but we would certainly not call it a cat-scale simulation. It is really no big deal to simulate a billion points interacting if you have a big enough computer. The only step here is that they have at their disposal a big computer. For a grown up "researcher" to get excited because one can simulate billions of points interacting is ludicrous.

3. It is not even an innovation in simulation technology. You don't need any special "C2 simulator", this is just a hoax and a PR stunt. Most neural network simulators for parallel machines can can do this today. Nest, pNeuron, SPIKE, CSIM, etc, etc. all of them can do this! We could do the same simulation immediately, this very second by just  loading up some network of points on such a machine, but it would just be a complete waste of time - and again, I would consider it shameful and unethical to call it a cat simulation.

4. This is light years away from a cat brain, not even close to an ants brain in complexity. It is highly unethical of Mohda to mislead the public in making people believe they have actually simulated a cat's brain. Absolutely shocking.

5. There is no qualified neuroscientist on the planet that would agree that this is even close to a cat's brain. I see he did not stop making such stupid statements after they claimed they simulated a mouse's brain.

6. You should also ask Mohda where he got the notion of "reverse engineering" from, when he does not even know what it means - look the the models - this has nothing to do with reverse engineering. And mouse, rat, cat, primate, human - ask him where he took that from? Simply a PR stunt here to ride on Blue Brain.

That IBM and DARPA would support such deceptive announcements is even more shocking.

That the Bell prize would be awarded for such nonsense is beyond belief. I never realized that such trivial and unethical behavior would actually be rewarded. I would have expected an ethics committee to string this guy up by the toes.

I suppose it is up to me to let the "cat out of the bag" about this outright deception of the public.

Competition is great, but this is a disgrace and extremely harmful to the field. Obviously Mohda would like to claim he simulated the Human brain next - I really hope someone does some scientific and ethical checking up on this guy.

All the best,
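To unpack point 1 of the letter: a "point neuron" reduces an entire cell to a single voltage equation, with no dendritic branches and no detailed ion channels. A minimal leaky integrate-and-fire sketch gives the flavor of that kind of model (the parameter values here are illustrative textbook-style numbers, not taken from either project):

```python
# Minimal leaky integrate-and-fire "point neuron": one voltage
# variable per cell, no dendrites, no ion-channel detail.
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Return spike times (in seconds) for a current-driven LIF neuron."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + R*I) / tau, Euler integration
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:           # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset             # instantaneous reset, no refractory detail
    return spikes

# 200 ms of constant 2 nA drive produces a regular spike train
spikes = simulate_lif([2e-9] * 200)
```

Markram's complaint, in essence, is that a model at this level of abstraction captures almost none of a real neuron's machinery, no matter how many billions of copies of it you run.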


Tech Awards Honor Technology for Humanity

Here in Silicon Valley, where folks line up for the latest iPhone release and twitter their every waking thought, it’s often easy to forget that it’s not the sleekest and fastest and newest technology that makes the most difference. For most of the world, it’s about the basics—food, water, light.

Every year, the Tech Museum and Applied Materials bring together Silicon Valley luminaries with entrepreneurs from around the world to focus on just that kind of basic technology and the difference it can make. Last night, at the annual Tech Awards gala, entrepreneurs from fifteen organizations working to make that kind of difference were honored as laureates, five of those organizations received cash prizes of $50,000 each, and former Vice President Al Gore accepted the 2009 James C. Morgan Global Humanitarian Award.

The laureates truly represented the world—they came from as far away as Nigeria, Mexico, South Africa, Botswana, India, Mozambique, and Brazil, and as close as nearby Emeryville, Calif. They’ve figured out ways to light the night with lanterns and boat-based solar charging stations, to water crops with simple drip irrigation systems made locally using laser drilling, to let people confirm that prescription drugs are what they say they are by sending a simple text message, and to make people's lives better in diverse other ways.

Winners of the cash prizes were:
• Joseph Adelegan of Nigeria for “Cows to Kilowatts,” a project that decontaminates the waste stream from slaughterhouses and turns it into methane fuel.
• Alternative Energy Development Corp. of South Africa, for its zinc-air fuel cell power systems.
• The Akshaya Patra Foundation of India, for its school meals program, which uses high-tech kitchens to serve millions of schoolchildren a morning meal daily.
• World of Good Development Organization, based in Emeryville, Calif., for its Fair Wage Guide Software that helps price local handmade goods around the world and encourages ethical trade.
• PATH, working in India, Brazil, and Colombia, for Ultra Rice, a nutrient-fortified additive to standard rice.

For a full list of laureates, and information about their projects, click here.

Photo courtesy of Applied Materials

Energy Management Startup Wins 2009 Cleantech Open

EcoFactor, an entrant in the Smart Power category from California, has won the 2009 Cleantech Open, a business competition created to find, fund, and foster startup clean technology companies. EcoFactor has designed a software system that communicates with home thermostats to reduce energy use; it won $250,000 in cash and services.

The competition also honored two runners-up: Alphabet Energy, a waste-heat recapture venture, and my personal favorite, MicroMidas, a company with technology to transform sewage into plastic.

The Open itself was limited to the entries from California, the Rocky Mountain Region, and the Pacific Northwest. However, a separate, parallel competition, run with the help of the Kauffman Foundation, offered a prize of $100,000 in services to entries from around the world. The winner in this, the Global Cleantech Open Ideas competition, was Replenish Energy of Puerto Rico, a company that uses micro algae as a source of fuel.

California Sets Energy Standards for Televisions

Today, the California Energy Commission set energy efficiency standards for televisions, due to go into effect in 2011. According to Pacific Gas & Electric, the standards, which affect TVs with screens 58 inches or smaller, will cut carbon dioxide emissions by three million metric tons over the next ten years and save $8.1 billion in energy costs. Under the standards, a 42-inch television would be limited to consuming at most 183 watts by 2011 and 115 watts by 2013; current Energy Star guidelines, which are voluntary, set 208 watts as a standard for 42-inch TVs. The California regulations will be mandatory; televisions that don’t meet them will not be allowed to be sold in the state. (Some 1,000 sets on the market already meet the 2011 standards.)
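As a back-of-the-envelope check on those wattage figures, the per-set savings are easy to estimate (the viewing hours and electricity price below are my own assumptions, not numbers from the commission's analysis):

```python
# Rough per-TV savings for a 42-inch set under the 2013 limit,
# assuming 5 hours/day of viewing and $0.15/kWh -- illustrative
# figures, not from the Energy Commission's analysis.
energy_star_w = 208   # current voluntary Energy Star level for 42-inch TVs
ca_2013_w = 115       # California's mandatory 2013 limit

hours_per_year = 5 * 365
kwh_saved = (energy_star_w - ca_2013_w) * hours_per_year / 1000
dollars_saved = kwh_saved * 0.15

print(f"{kwh_saved:.0f} kWh/year, ${dollars_saved:.2f}/year per set")
# on the order of 170 kWh and $25 per year per television
```

Multiplied across the millions of sets sold in California, per-set figures on this order are presumably how the commission arrives at a multibillion-dollar statewide estimate.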

The Consumer Electronics Association, an organization that represents the manufacturers of consumer electronics products, is vociferously protesting the regulations. The CEA managed to get the commission to delay its decision, originally expected on 4 November, by filing a 91-page brief just moments before the comment period closed. At a conference call with the press today, CEA representatives indicated that they still felt shortchanged by the process, because no changes were made in the proposed regulations as a result of their comments.

While the Energy Commission sees its regulation as saving consumers money in the form of energy costs, as well as having positive effects on the environment, the CEA sees the regulations as all bad.

Take TV prices, for example. The Energy Commission says that, according to its analysis, the regulations will not affect the prices of televisions. The CEA counters that regulation will drive up the purchase price, either because of the direct costs of energy-saving technology (which the CEA estimates at tens to hundreds of dollars) or because the regulations will force low-tech power-guzzling models off the market. Because these tend to be cheaper units from no-name manufacturers, competitors will then be able to raise prices.

The CEA also argues that besides pushing low-tech TVs out of the market, the regulations will prevent advanced-tech TVs from getting in.


Seth Greenstein, counsel to the CEA, says that whenever a new technology gets introduced, it needs time to be gradually optimized for performance. “If we were still in the world of CRT TVs or DLPs, and you wanted to introduce plasma to marketplace, you couldn’t,” he says.

Greenstein points out that OLED displays are still in development and 3-D televisions are expected to roll out in large numbers this year. The industry relies on sales to early adopters to fund continued development of these technologies—but won't be able to if new technologies can't get to market.

“The real risk of the California Energy Commission's regulation," says Jason Oxman, the CEA’s Senior Vice President for Industry Affairs, is that there will be innovative technologies "that California consumers won’t be able to enjoy because of CEC regulations, that consumers in 49 other states enjoy.”

Realistically, it’s hard to imagine TV manufacturers ignoring the huge California market; it’s more likely that California regulations will influence the world. That's a thought that makes the CEA shudder.

Says Doug Johnson, the CEA’s Senior Director of Technology Policy, “We don’t want to take a bad idea and make it a national bad idea.”

The CEA hasn’t revealed its next move—and won’t until it has time to review the complete regulatory documents—but has indicated that it will make every effort to make sure that, in Oxman's words, “common sense is restored” in California.


Intel & AMD Both Claim Bragging Rights

Intel and AMD reached an agreement earlier this week aimed at ending a messy (read: costly) legal dispute over what AMD can do with a group of technologies the two chipmakers have cross licensed for most of this decade, and what Intel can't do to convince computer companies that it should be their exclusive chip supplier. But they still remain fiercely competitive. For AMD, the cherry on top of knowing that its bitter rival will soon fork over $1.25 billion as a penance for its anticompetitive behavior is finding out that the world’s fastest supercomputer runs on its chips.

The Cray XT5 “Jaguar,” which is at the U.S. Department of Energy’s Oak Ridge Leadership Computing Facility in Tennessee, tops the latest semi-annual list of the world’s 500 most powerful supercomputers. The system, which runs on AMD’s Opteron chips, is capable of doing 1.75 quadrillion floating point operations per second. Jaguar had been the runner-up in June and last November. It now swaps places with IBM’s “Roadrunner” computer at the Los Alamos National Laboratory in New Mexico. Roadrunner, which had held the title since the June 2008 ranking, was the first computer to break the petaflop barrier.

Still, Intel has no reason to hang its head over the news. The rest of the list paints a picture of its dominance. Of the remaining 498 slots in the rankings, computers with Intel inside occupy 402. AMD has the notoriety of having helped create the heavyweight champ. But to extend the boxing analogy further, consider this: if there were 10 weight divisions in the supercomputer game, Intel would hold eight of the title belts.

I guess both companies have reason to crow. And both have. Dueling press releases about the firms’ peta- and gigaflop achievements reached e-mail inboxes seemingly a nanosecond after the German and U.S. computer scientists who compile the list made the Top 500 announcement.

NASA finds water on the Moon, raising hopes for a Moon base

NASA is reporting that one of its spacecraft has found significant traces of water on the Moon.

The rocket from NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) slammed into a crater on the Moon's surface on October 9, blasting out a smaller, fresh crater and uncovering soil from below the lunar surface.

“We’re unlocking the mysteries of our nearest neighbor and, by extension, the solar system,” said Michael Wargo, chief lunar scientist at NASA Headquarters in Washington.

If the Moon becomes a waystation in future voyages to the planets, space travelers could possibly stock up on water at a base on the Moon.

Even if the Moon is not a waystation, the cost to maintain a permanent crewed base on the Moon seems to have just gone down. Water is relatively heavy to transport by spacecraft, and a big cost of maintaining such a base, it was previously thought, would be supplying its inhabitants with water. If there are local sources, a big impediment may have been removed. (It still remains to be seen how feasible it will be to gather this water.)

Some astronomers say that the finding of water in the lunar soil may be in keeping with the currently popular model of the Moon's formation, which holds that the Moon formed from the Earth's crust after a giant impact early in the history of the Solar System. Some of the water common in the Earth's crust would have remained below the Moon's surface and in deep craters that do not get sunlight. Others think that the water came from comets, which crash into the Moon periodically.

Whatever the source, on the Moon sunlight breaks down water into its constituent hydrogen and oxygen, which escape into space because of the Moon's weak gravity. Any water on the surface that is exposed to sunlight has escaped this way, so the surface seems waterless. But many astronomers have believed that there is water below the surface and in craters that, because of the Moon's slight tilt, never get sunlight.

LCROSS slammed into one such crater near the Moon's south pole. NASA said LCROSS detected about 24 gallons of water in the lunar soil that was uncovered in the impact. That's a lot of water.

You can read more about the LCROSS mission on NASA's website.

NASA held a press conference to announce the finding.



A Pro-Antitrust Administration?

In 2001 the new Bush Administration snatched victory from the hands of U.S. Justice Department officials when it settled a three-year antitrust battle with Microsoft after the department had already won the case. The Justice Department’s antitrust division went into near-dormancy after that, but today, eight years later, antitrust is back in the headlines.

A month after the Justice Department began investigating antitrust claims against IBM, and after Sony disclosed it was under antitrust investigation for possible price-fixing, New York State attorney general Andrew Cuomo filed a major antitrust suit against Intel.

The Intel lawsuit comes hard on the heels of the European Commission’s record-setting fine of €1.06 billion (US $1.45 billion), imposed on Intel for antitrust practices, specifically for giving “loyalty” rebates or discounts to customers who agreed to use fewer (or none) of competitor AMD’s products. And despite a joint announcement yesterday that Intel and AMD are settling their differences and moving forward peacefully, the outstanding government lawsuits against Intel aren’t likely to be going away. The settlement “doesn’t change the evidence at all,” says John Peirce, an antitrust lawyer and partner at Bryan Cave.

Bloggers point out that New York’s Cuomo is using much of the same evidence as the European Commission, and more, with the U.S. Federal Trade Commission’s investigative aid.

But is the European example a good predictor for how the suit will go down in the U.S.?

In 2004, the European Commission fined Microsoft US $613 million for violating EU antitrust laws, a decision the US DOJ criticized rather than following suit. Of course, that was in the middle of the Bush Administration’s anti-antitrust fervor (or is it antitrust antifervor?).

More to the point, as pointed out at the time of the EU’s Intel case, there are some key differences between the EU and U.S. antitrust systems. While then-U.S. Justice Department antitrust attorney Tom Barnett took some heat for criticizing the EU Microsoft decision in 2004, his pronouncements at least

had the benefit of informing U.S. companies that they would not adopt the European approach, and that U.S. law was importantly different because it required a more rigorous form of economic analysis and more substantial evidence of consumer harm rather than speculative possiblity [sic] theorems coupled with harm to competitors.

Though Barnett was speaking from the Bush administration perspective, which was fairly lax on antitrust, one point is still worth noting: Obama’s Justice Department will have to make a case that consumers were substantially harmed by Intel’s actions during a time when PC and laptop prices plummeted.

So what about now? New York attorney David Mazur suggests that the European case against Intel still might not pave the way:

Individual judges and regulators have widely varying opinions as to the appropriateness of using foreign court decisions as precedent (or even as a definitive statement of the facts), and it’s very possible that Intel was implementing different strategies in dealing with European OEMs.

And Bryan Cave's Peirce points out that while the EU regulators expressed “a good deal of skepticism” about Oracle’s takeover of Sun Microsystems in April, the Obama Justice Department decided it wasn’t a problem, and the merger took place. “If they were going to be hawkish,” Peirce says, “I’d expect to see them aligned with Brussels. And they’re not.”

So while the Obama administration has indicated a more aggressive stance toward business practices of companies with huge market share, Peirce thinks “it’s too early to say” which way they’ll swing on antitrust. “They haven’t done anything yet,” says Peirce. And “a government investigation doesn’t mean that anybody has done anything wrong.”

Still, the road won’t be easy for Intel, and the NY AG’s suit is a major hurdle. According to the Huffington Post:

Technology analyst Rob Enderle said Intel may be facing a harder fight this time. He said the company worked through its problems "elegantly" in the 1990s, but has become "much more combative" in the past decade in its dealings with regulators. "This could represent one of the biggest dangers that Intel has ever faced," Enderle said.

If and when the administration decides to take concrete action on antitrust cases, Mazur concludes,

The Obama administration will not only have leeway to determine which companies are investigated, but will also influence how to deal with adjudicated monopolists. For example, though the Clinton administration had been pursuing a structural remedy in the Microsoft case (breaking the company up), the Bush administration took this option off the table, opting instead to pursue behavioral remedies (drawing up an agreement that limits the way the company can act).

Some business executives aren’t waiting to get nervous. In an interview this week with CNET, Google CEO Eric Schmidt acknowledged what CNET called the company’s “new role as the No. 1 target for U.S. antitrust regulators.”

Is Deep Brain Stimulation a Cure-all?

Is there anything that can't be fixed by burrowing an electrode array into the deep tissues of the brain? With varying degrees of success, deep brain stimulators have been used to temporarily defog clouds of chronic depression, stamp out migraines before they cycle out of control, and steady the movement of people with Parkinson's disease. Well, now you can add Tourette syndrome to this list of maladies.

A study in the current issue of Neurology follows 18 patients with severe Tourette syndrome for two years after neurosurgery. During the procedure, an array of electrodes is inserted into the patient's thalamus. Wires running from the device connect to a pulse generator implanted just beneath the skin on the chest. All the parts are internal and, when activated, stimulate a highly targeted area of the brain.

Those who continued with the treatment (three either dropped out or were removed by the researchers) showed a significant decrease in the motor and verbal tics associated with the disorder. In all but one patient, obsessive behaviors and symptoms of depression, both of which are commonly present in patients with severe Tourette syndrome, also improved.

The authors are refraining from drawing any conclusions until more controlled experiments have been conducted, and they acknowledge the risks inherent in an invasive therapy like DBS.

But even the preliminary results raise the question: how can one treatment option yield a good outcome for so many different disorders? And, overwhelmingly, the answer is, "we don't know." What we do know is which parts of the brain are best targeted for different disorders, suggesting that we know much more about the "where" than the "why" with DBS.

Andrea Cavanna, a lead author on the study, explains that "little is known about the patho-physiology of Tourette syndrome. However, dysfunction in the fronto-basal pathways seems to play a relevant role in tic generation and associated behavioral problems. The targeted thalamic nuclei are the final common outputs of the involved pathways."

So, they try it, and it seems to work. But I can't help thinking that researchers will be able to perfect therapies like deep brain stimulators only once we have fully explained the disorders they seek to treat. And that this, rather than the technology, is what holds us back.

Netbooks Are Only Part of the Solution

Netbooks are going to be huge, much bigger than they already are. Trust me on this. I say this not because I see more and more people working on them in cafes instead of on standard laptops—though I do. It’s not because I particularly want one—though for short trips I can see the appeal. It’s not because on a recent multifamily vacation one family showed up with one netbook per child.

It’s because my 70-something aunt, the one with the 30-year-old radio that you can only turn off by pulling the plug, and the TV that gets its signal from a 50-plus-year-old two-wire cable, just told me she’s thinking of getting a netbook.

Oh, it’ll be a couple of years before she actually makes the purchase, but the fact that she’s even considering it is huge. The appeal for her is the cost, for sure—if it turns out to be a mistake, it won’t be a huge mistake. But what is also drawing her is the fact that netbooks don’t look all that high tech. They don’t take up much room, they don’t have a lot of extra buttons on the keyboard, and they don’t do vast numbers of things she wouldn’t want to do anyway—like edit video or spend hours typing long documents.

But she has been thinking that it would be pretty cool to look up a fact she read somewhere but just can’t remember exactly, or check out a new medication prescribed by her doctor before she orders it. And that’s enough usefulness to make her part with $250 or so. Once she gets one, I’ll show her how she can keep up with all her grandnieces and nephews on Facebook, and she’ll be set.

Unfortunately, much as I would have liked to, I didn’t run out that moment and get her a new netbook, because there’s one piece of this puzzle missing—some kind of community wi-fi access. It doesn’t have to be free, it doesn’t have to be fast, but it has to be there: easy to get to, at a reasonable price.

Forget dial-up—netbooks don’t even come with built-in modems, and these days the bells and whistles of most websites mean dial-up is just too slow to be viable. Cable modem or DSL would mean new wiring in her home (she’s got one corded wall phone right now, no other jacks), and a box that would have to be installed somewhere, set up, and occasionally rebooted. I can’t see convincing her to go through that hassle and expense.

But community wi-fi would be perfect. She’d need nothing but the netbook, the monthly fee would be reasonable, and, while likely slower than cable or DSL, it’d be moving plenty fast for her needs.

Which got me wondering—what happened to community wi-fi, anyway? I called Sascha Meinrath, research director of the New America Foundation’s wireless future program. He told me that it’s been going great in Europe, but in 2004 or 2005 got sidetracked in the U.S. “The rationale of community wireless, bringing low-cost or free wireless to the masses, got usurped by the corporate model,” he says, with companies trying to figure out “how do we charge money for it.” And the corporations that cities contracted with to build low-cost systems didn’t have a lot of incentive to make those systems succeed, since they’d be competing with their own, higher cost Internet access offerings. Earthlink, for example, last year shut down its community wireless systems in Philadelphia and New Orleans.

The good news, Meinrath told me, is that community wireless in the U.S. may be starting a new surge. He sees encouraging signs in the efforts of Meraki, a Google-backed startup that’s building low-cost wireless networks for companies, universities, and communities, and in other low-cost efforts. Municipal and community groups that looked at community wireless in the past but were put off by the apparently high costs are getting ready to take another look. And, he says, the $7.2 billion in stimulus funds targeted at increasing broadband access can only help; he’s hoping communities will spend that money on low-cost open-source systems instead of expensive proprietary systems, to make it go as far as possible.

Now back to my aunt. She still wants that netbook—with Internet access, but without a box in her house. Community wi-fi may be coming, but not soon enough. So I’m thinking, next time I’m visiting I’m going to boot up my laptop and see if I’m picking up any signals; if I am, I’ll go knock on a few doors and see if I can borrow a cup of broadband.

Making Talking on a Cell Phone While Driving Just a Little Bit Safer

Even though we all know that talking on a cell phone while driving is not the safest activity in the world, we’re not giving it up. We are, of course, going hands free—in many states, that’s the law.

But short of turning off our phones altogether, there’s not much else we can do. Or is there? Eyal Ophir and his colleagues in Stanford University’s CHIMe Lab (that stands for Communications Between Humans and Interactive Media) say there is. They’re currently writing up the results of their latest study on multitasking while driving.

This is the same group of Stanford researchers who, in August, dropped a dime on multitaskers everywhere by proving that folks who think they are getting more done by juggling multiple tasks instead of focusing on just one are kidding themselves, generating a media buzz in the process.

This time, they looked at their favorite research subjects—multitaskers of two breeds, low and high. In general, they found, high multitaskers are easily distractible—that could be a good thing when they’re having a heated cell phone conversation yet catch a glimpse of something on the road ahead that could cause trouble. Low multitaskers are less distractible, which could mean that they’re more likely to focus on their cell phone conversation than the road.

The trick was to build some kind of tool that can help both types of multitaskers pay attention to their driving when it’s most critical.

I ran into Ophir at the unveiling of a new autonomous vehicle—perhaps the ultimate solution to the driver distraction problem, but one that won’t be parked in the average driveway anytime soon. He told me about the tool he came up with.

The concept: moving the voice coming from the cell phone around in the car. Not hard to do with today’s virtual surround sound technology. Ophir designed a system that puts the voice up at the driver’s level when road conditions are relatively safe, then drops it down to the driver’s feet when conditions are more hazardous. He says he could have done it the opposite way, and it appears that would have worked equally well, but research has shown that voices coming from below the listener are less dominant, hence his choice of high and low. He tested the system with drivers in a simulator and found that drivers quickly learned that a change in the position of the voice meant, “Pay attention to the road!” They later rated the cell phone conversation as less distracting when the sound was coming from their feet.
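The control logic behind the idea is simple enough to sketch (the function name and the 0-to-1 hazard score here are hypothetical; Ophir's actual system presumably classifies road conditions from richer data):

```python
# Sketch of the voice-positioning idea: keep the caller's voice at
# head level when the road is calm, drop it to footwell level when
# conditions look hazardous. All names and values are hypothetical.
HEAD_LEVEL = "head"        # voice rendered at the driver's ear height
FOOT_LEVEL = "footwell"    # voice rendered near the driver's feet

def voice_position(hazard_score, threshold=0.5):
    """Map a 0..1 road-hazard estimate to a virtual voice position."""
    return FOOT_LEVEL if hazard_score >= threshold else HEAD_LEVEL

# e.g. a hazard score derived from GPS position plus an accident database
assert voice_position(0.2) == "head"      # calm stretch of road
assert voice_position(0.9) == "footwell"  # accident-prone area ahead
```

The interesting part is not the mapping itself but the choice of channel: the warning is carried by where the voice seems to come from, so it never interrupts the conversation the way a beep or spoken alert would.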

In the real world, Ophir sees this system linked to the driver’s GPS and a database of accidents, to identify potentially treacherous areas of road. Ophir expects to publish this research soon.


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
