Tech Talk

Darpa, the Government Agency With the Best Imagination, Sends Competitors on a Search for Red Balloons

Who doesn’t love Darpa, the Defense Advanced Research Projects Agency? After all, we have Darpa to thank for the Internet.

In recent years, Darpa has stood out as the agency that knows how to capture the imagination of scientists, engineers, and the general public. Take the Grand Challenges, which sent autonomous vehicles racing across the desert: hugely difficult tasks that had engineers all over the country scrambling night and day. The effort was big, as was the prize (US $2 million).

But even when the prize is small (just $40,000), it seems that Darpa knows how to go for the emotional punch, the vivid symbol that captures the imagination.

Tomorrow, Saturday, 5 December, Darpa staff members will be tying up 10 red balloons at unannounced locations around the U.S. Each balloon will be visible from a public roadway. The challenge: be the first to identify the latitude and longitude of each balloon. You can work alone; you can work in teams. You can use any technical tool you can think of, or you can simply cruise around looking for them. The balloons will be visible only on Saturday, but you’ll have up to nine days afterwards to submit your entry.

The contest is officially called the Network Challenge; the organizers assume that you won’t be able to win without the massive use of computer communications tools and social networks. And that’s what Darpa is trying to figure out—just how these things are used for collaboration today. It is also holding the event to commemorate the 40th anniversary of the day the first four nodes of the Arpanet—the predecessor to the Internet—were connected.

It’s a very cool task: sort of Rick Smolan, originator of the Day in the Life series of photography projects, meets Christo, the artist behind huge and colorful public installations.

But I think the real evidence of imaginative genius is the choice of the red balloon as the object to identify. There’s something magical about a balloon, a lonely balloon, tied in the middle of nowhere. Filmmaker Albert Lamorisse captured that magic best in the classic French movie Le Ballon Rouge, but any child who spends a day with a balloon tied to his wrist or stroller understands that magic. Kudos to the folks at Darpa for understanding it too. And good luck to the competitors.

By the way, registration is open until the contest begins.

Photo: Darpa

What Does Real-Time Search Mean to Google?

I’m doing research on real-time search to learn what all the fuss is about, so I called up Google. Why not? They’re eager to get “real-time” search into their results, and I wanted to know what the term means to them. Turns out they’re making a “cool” announcement about it on Monday and won’t talk to me until after that.

Much of the buzz about real-time search points to scouring Twitter feeds and Facebook status updates for the most current information on the web. But that’s pretty much the opposite of how traditional Google search works—which is based on producing results according to authority rankings established over time. So how will this real-time information get roped in with (or distinguished from) more traditional content searches, and how will the relevance of results be decided?

Forgetting the mechanics for a minute, let’s get to semantics. What does “real-time” even mean? Does it mean finding info that’s just been published, because it just happened? Or does it mean finding websites that have just been updated, even if the events they describe happened a while ago? Is it just for Tweets and the like, or will it be more comprehensive?

Danny Sullivan’s Search Engine Land has a lengthy post from a few months ago (definitely not real time, but still worth reading) that explores this topic and also compares search results from several smaller fish like Collecta and Scoopler. Sullivan argues that the term “real-time” should refer to information that’s posted immediately, as it happens—basically, Tweets and status updates, or what he calls “microblogging” (the comments section of the post provides nice counterpoint arguments). As for news and even blogs, Sullivan suggests, that content’s already history by the time it goes live on the web.

To get itself up to speed, Google announced in October that it’s including Twitter updates in its search results. And yesterday, TechCrunch posted that Google has integrated Twitter functionality into its Friend Connect application (its equivalent of Facebook Connect), which lets people log in to various websites using their Google account information. Now, any site that has Friend Connect enabled will allow you to log in with your Twitter account, too, which will automatically link your Twitter profile and let you tweet right from the site. Looks like Google and Twitter are getting pretty cozy.

So what’s next? Will Google be getting access to Twitter’s Firehose Feed, which would allow the company to search all Tweets as they happen and index them? I was assured that the upcoming announcement would answer all my questions, so maybe we’ll find out more about how Google plans to walk the line between what’s hot now, and what’s the most relevant answer to my current query.

I hope it also gives us a hint of Google’s planned audience for real-time search. Who is this kind of search useful for, and what’s the value added? Is it just for people looking to learn what’s going on, faster than they can read a newspaper, log in to a website, or even scan a blog post or RSS feed? What about someone doing research on the history of a vacation destination? Will real-time search trump regular old Google search?

We’ll try to post as fast as possible after Monday’s big announcement.

Tunnels Under Gaza

Photo credit: Ibraheem Abu Mustafa / Reuters

In this month's issue of IEEE Spectrum, I describe Gaza's jury-rigged power grid, and the challenges involved in providing electricity in a territory essentially cut off from the world. But ultimately, when the grid proves unreliable, people take matters into their own hands.

Just as war and the economic blockade have forced those managing the electricity grid to cope with the seemingly impossible task of providing steady power, ordinary Gazans have had to learn to improvise when possible. When electricity faltered during the war, residents exchanged tips on creative workarounds. A favorite is harvesting the low-level current that runs through the telephone lines to recharge cellphones.

But residents also rely on consumer goods—candles, batteries and generators—smuggled through the tunnels that connect southern Gaza to Egypt. Driving into a bombed-out section of Rafah, near the Egyptian border, I got a chance to see what, in the absence of a properly working grid, has by default become this option of last resort.

Because smuggling relies, in large part, on the other side turning a blind eye, smuggling anything through Israel would be impossible; that is why the consumer-goods tunnels all connect to Egypt. A large number of the tunnels had originated in houses (tunnels are typically owned by families) that are now smashed to rubble, but many tunnels are already open again, in some cases covered only by an impromptu shed to shield them from Israeli aircraft and drones that might spot them from above. We stopped to go down into the shafts of two tunnels—one for fuel and one for cement. (Tunnels are often designated for specific cargo. For instance, bigger tunnels are reserved for cement and other goods and are reinforced with wood; fuel tunnels can in some cases be smaller.) The tunnel owner asked that we photograph quickly; if Hamas security caught us, they would demand money.

During the 2008–2009 war, Israel bombed the tunnels, collapsing many of them, but a large number of the underground passageways remain, and new ones are constantly being dug. Smuggling operations quickly resumed.

From the Palestinian perspective, the tunnels act as a slow-release pressure valve, allowing just enough goods into the territory to prevent a catastrophe, but not enough to allow the economy to function normally. While Israel points to the illegal tunnels as another sign of Gazan—and particularly Hamas—dysfunction, Hamas naturally blames Israel. “If [the borders] are open, there are no tunnels,” Fawzy Barhoom, Hamas’s spokesman, says in an interview in his Gaza office.

In the meantime, the tunnels don’t help the power plant, says Rafiq Maliha, a plant manager, with a wry laugh. The idea of a sophisticated power plant using contraband fuel or parts is just unrealistic. “We are talking about a power plant,” he said. “We can’t smuggle [parts] through the tunnels.”

Hiding from the Internet

These days, engineers are working hard to make our network connections faster and more reliable. In this issue, David Schneider reports that communications satellite company Iridium is upping its maximum data rate fourfold, and Clark T.-C. Nguyen suggests that mechanical components may be the way to make future wireless devices better. Information is indeed coming at us faster and in more places. But that’s not all good. In October, author Nathan Zeldes talked about the importance of getting control of this flood, which threatens productivity and creativity.

For me, the only way to get such control is to cut off my network connections, all of them, for three weeks every year.

I’ve taken this annual three-week Internet sabbatical for about as long as I can remember. (I didn’t use to call it that; I used to call it vacation.) It acts as a reset button. It reminds me that looking out into the distance instead of at the computer in front of me makes a real difference in how my eyes feel; that getting my hands off the keyboard and moving around regularly is good for my body (I sit differently, stand taller); and that having time to let my mind drift, instead of being yanked from demand to demand, can lead me to new ideas.

And when I go back to the keyboard, the computer screen, the Internet, I handle it better. I force myself not to check email quite so compulsively, to take the occasional stretch break. For a while, anyway.

This year, my Internet sabbatical took me to a lake in New England and a beach in New Jersey. We’ve gone to both places for years; I can remember when the New Jersey rental didn’t have television—too remote for broadcast TV. I liked it back when we didn’t own cell phones; the rental didn’t have a landline installed, so I told potential visitors not to bother to call ahead, just to come on down and look for me on the beach. Now both rentals have hundreds of television channels, thanks to satellite TV, and my cell phone means I’m never out of touch, even when I’d like to be.

But I thought I could still escape from the Internet. I just didn’t know how hard it would be to hide.

In New England this year, we rented a cabin with nine other family members and friends. I didn’t bring any devices that could get online, but among the 14 of us staying there (six of them children), we’d packed in two full-size laptops, four netbooks, and an iPhone. Good luck with that, I thought, not expecting any access beyond dial-up; hope you downloaded plenty of movies ahead of time.

It turned out I was the one surprised—the cabin had both WiFi and wired Ethernet. So folks were on their computers constantly—watching silly YouTube videos, checking email, looking at weather reports and sports scores, googling this and that and everything. Instead of the lapping of the water on the little beach, I heard the beeps of games and music from the latest viral video. And, even worse, they kept calling me to come over to the screen, as if my life would not be complete were I not to see a certain clip of extreme sailing or last night’s Daily Show.

I resolved that I would not touch a keyboard for the entire vacation; wouldn’t look at a screen if I could possibly avoid it. I was worried that once I touched a computer I’d quickly surf over to my email and would get sucked into responding and bang, I’d never be able to break away again.

It got a little tense one afternoon when my husband’s computer wouldn’t log on to the wireless. Since I’m tech support in the family, he asked me to try to fix it. “I’m not touching the computer,” I said, “but I think the thing to do would be to go to the Apple menu, open System Preferences, look at your Internet connection settings, and switch them to manual and then back to automatic, which will force it to reacquire the DHCP lease or something like that. Anyway, it should fix it.” (I could feel my little brain cells click into non-vacation mode, and I wasn’t liking it.)

He held the computer out to me. “It’d only take you a minute to fix it.” I kept my arms at my sides, and a cousin jumped in to help him. (Thank you, Jeff.)
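(For the curious: the same trick works from a Mac's command line, no System Preferences required. This is just a sketch, and the interface name en1 is an assumption; yours may differ.)

```shell
# List the hardware ports to find the WiFi interface name (assumed en1 here)
networksetup -listallhardwareports

# Force the interface to reacquire its DHCP lease
sudo ipconfig set en1 DHCP
```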

I figured this torture would last only a week, because there was no way our landlord in New Jersey was going to have hooked our shabby but much-loved beach rental up to the Internet. I was right about that; what I didn’t count on was the new community WiFi—slow, but it worked in most of the rooms of the house. I was not pleased. I like listening to the ocean from the screened-in porch, and I find the street noise and the sounds of the occasional group house party only mildly irritating—but having someone out there tap-tapping on a keyboard was sure to drive me nuts. (And I couldn’t keep them off the porch; the signal was strongest there.)

The temptation to get online grew. My oldest son was not traveling with us; instead, he was with a school group performing at the Edinburgh Fringe Festival. And he hadn’t been calling. However, other folks on the trip were posting pictures online, so we could Internet-stalk him to see if he was OK. I caved and looked at the photos twice while my husband paged through them—I did stick to my resolution not to touch the keyboard. I realized afterwards that I would have been fine waiting until I got home; the photos didn’t tell me anything other than that he had indeed made it to Scotland.

Like TV and the phone before it, the Internet is invading my vacation space. And pretty soon there’s going to be no place to hide.

Photo: John Lund/Paula Zacharias/Getty Images

Screen, Paper, and Information Overload

The following is a guest post by Nathan Zeldes, author of IEEE Spectrum's "How to Beat Information Overload."

After I published my Spectrum article on Infoglut, I received many responses through a variety of online channels. That's one of the perks of being an active cyber-citizen: you get to meet interesting people. However, a number of respondents focused on the same complaint: the article, they claimed, was far too lengthy, thereby itself contributing to Information Overload.

This statement puzzled me, because I wrote the article in close cooperation with Spectrum’s capable editors, and in conformance with their expectations. My curiosity piqued, I pulled out some back copies of the magazine from my shelf and counted the pages. It turns out that my article is 3.5 printed pages long, well within the norm for Spectrum feature articles. In fact, I recently blogged about the shortening of articles in printed magazines over the decades, using Scientific American as an example; that magazine’s features went down from 12 content pages per article in the sixties to 8 pages in 2009. My 3.5-pager would count as positively brief in Sci Am even today... So what is going on?

What made it even weirder is that the article did not appear particularly long in the printed magazine; but even I had to admit that the online version (which, it turns out, is the version the complaining readers saw) does seem to go on and on...

Thinking it over, I realized that there is a key difference between reading online and on paper. The information density on paper is certainly greater: lots more can fit—and be consumed—on a printed page than can be put on a screen of the same physical dimensions. The text of my article fills seven vertical “screenfuls” on my 22-inch monitor; it fits on 2.5 double-page spreads of Spectrum, even though each such spread is a bit smaller than the monitor’s area. This means that I could take in the printed article in three “gulps”, versus seven online. Even more important, the printed version is random access; I can move my eyes across the pages and home in on what I want, taking my cues from the layout, section header typography, and images. The long-scrolling online version is serial access: you need to scroll patiently through uniform-looking text blocks to find anything, and your best strategy would be to read it start to finish serially to see what’s in it. That’s why we use books, and not the scrolls the ancients had: flipping through pages is much more efficient if you plan to skim.
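A back-of-envelope sketch of those numbers (the seven screenfuls and 2.5 spreads are the figures from the paragraph above; the code just makes the arithmetic explicit):

```python
import math

screenfuls_online = 7    # vertical screenfuls of text on a 22-inch monitor
spreads_in_print = 2.5   # double-page spreads in the printed magazine

# A print "gulp" is one spread; 2.5 spreads still takes three sittings.
gulps_print = math.ceil(spreads_in_print)

ratio = screenfuls_online / gulps_print
print(f"online: {screenfuls_online} gulps, print: {gulps_print} gulps "
      f"(about {ratio:.1f}x more stops online)")
```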

This came as a surprise to me: I'm used to considering online reading as a valid alternative to print (until it comes to snuggling up in bed with a good novel, at any rate). Online has its advantages—searchability being a major one—but in an age of impatience, Gutenberg’s good ol’ system has its benefits as well!

Nathan Zeldes blogs on Information Overload here.

NORAD Ready To Track Santa Claus

The North American Aerospace Defense Command (NORAD) announced today it is ready to track Santa Claus's coordinates as soon as the red-helmeted old man takes off from the North Pole. No, don't worry, Santa will not be riding an ICBM-powered sleigh. This is just an annual public relations program by NORAD.

On 24 December, you (or your kids) can go to the NORAD Tracks Santa web site to see Santa's location in real time. You can also call 1-877-HI-NORAD (1-877-446-6723), send an email, or check Facebook, Twitter, Flickr, YouTube, Google Maps, and Google Earth (look for @noradsanta).

If you think this is all very silly, it's probably because you don't have a pre-teen bombarding (no pun intended) you with questions about Saint Nick. If you did, you'd thank NORAD for this initiative, which is actually really cute (check out their FAQ describing Santa's sleigh technical specs and whether fighter jets might intercept him) and began with an even cuter little tale. In 1955, a Sears store in Colorado Springs ran newspaper ads telling children to call Santa on a special phone number. But the number had a typo. When the first kid phoned in on Christmas Eve, on the other end of the line was Colonel Harry Shoup at the Continental Air Defense Command (CONAD), NORAD's predecessor, in Colorado Springs.

After more calls came in, Colonel Shoup realized one should tread lightly when it comes to children and their patron saint. So he dutifully ordered his operators to find Santa's exact location and reported it back to every child who called that night. Ah, the Christmas spirit!

Below, NORAD's video of Santa's 2008 journey, captured with its Santa cams.


The Cat Brain Cliff Notes

Two leading scientists are embroiled in a controversy about a cat brain simulation. At first blush, the topic might seem silly. But the stakes are higher than the tired cat-fight jokes would lead you to think. This argument has larger implications for the future of AI research, and particularly for a field called computational neuroscience. The controversy has called into question not only the legitimacy of one researcher's work but that of all brain simulation work. I think it's important to untangle the assumptions and accusations in clear, non-specialized language.


Last week at the 2009 Supercomputing conference, IBM researchers on DARPA's SyNAPSE project presented a paper about a simulation that, they said, approached the scale of a cat's brain in terms of the number of neurons and synapses involved. They won the Gordon Bell prize for their efforts. A few days later, neuroscientist Henry Markram sent an open letter refuting the claims of Dharmendra Modha, the paper's lead author, and calling the paper a hoax.

First, let's understand the stakes and who's playing. Computational neuroscience—or cognitive computing, or biometaphorical computing—is a field so new that its luminaries have not yet agreed on a solid name for it. The goal (very simplified) is to build a brain.

But you can't create a brain out of meat any more than you can make one out of popsicle sticks, so you're stuck with computer simulations. That's not something either neuroscientists or computer scientists can do alone. It takes a village of electrical engineers, materials scientists, neuroscientists, biologists, computer scientists, nanotechnology researchers, and more. All these specialists bring different talents and assumptions. As such, the field is a fairly good embodiment of the Tower of Babel story, or the old story of the blind men feeling the elephant, each mistaking the part he touches for the whole.

Markram v Modha

Consider Dharmendra Modha and Henry Markram. One is a neuroscientist, the other is a computer scientist. "The conflict arises since both species are needed to simulate (computer scientist) a brain (neuroscientist)," says Massimiliano Versace, who works on the SyNAPSE project at the Department of Cognitive and Neural Systems at Boston University. "You need both."

Markram is a brilliant neuroscientist. He discovered spike timing dependent plasticity (STDP). For those of you whose heads just exploded, that means essentially he figured out how neurons learn. STDP is what makes you, a human, able to understand causality: you see a flash of lightning, hear a crack of thunder, watch a tree fall on your house, and you are able to say, "Aw, crap. Now I need to call my insurance company." Markram figured out how our neurons fire together and wire together. He's kind of a big deal.

Modha is also kind of a big deal. He figured out a new adaptive replacement cache algorithm. While I have no cute example to explain this, I will say that IBM was so thrilled about Modha's work (and the patent it yielded) that they gave him an office at their Almaden research center and carte blanche. Modha decided that he wanted to build a brain. The next year, Almaden's annual conference was on cognitive computing. DARPA soon came knocking.

But the point is this: "It's safe to say that Modha is as brilliant a computer scientist as Markram is a neuroscientist," Versace says.

SyNAPSE v Blue Brain

Their respective projects are as different as the two researchers. Markram's Blue Brain project, at the Ecole Polytechnique Federale de Lausanne, is a simulation running on a (slightly older, smaller) IBM Blue Gene. With his model, Markram is investigating the roots of neurological disorders like Alzheimer's disease, autism, and depression. Markram says he is building the entire brain from the ground up, to illuminate how the summed activity of 100 billion neurons can result in consciousness. "Our whole sense of consciousness and memory exists because of the way our brains are wired," he says. He wants to engineer it. He wants a machine that will talk back to him someday, a machine that will be conscious inside the confines of its supercomputer. "That's the secret about Blue Brain," he says. "It will search the Internet by itself; it will search to become more real. Like a Pinocchio."

Modha is working on SyNAPSE, a project that couldn't be more different. With SyNAPSE, DARPA wants to create electronics that take a page out of the brain's book. The stated purpose is to "investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels." Everything you need to know about SyNAPSE is on the Neurdon blog, which is written by Boston University postdoc Versace and Ben Chandler, a PhD student in the department. BU is a subcontractor to HP and HRL, which compete with IBM on the DARPA project (this is the way most DARPA projects are run: three competitors make everyone work harder).

For an explanation of what makes brains so great, I went to Richard Granger (who is, again, kind of a big deal at the Dartmouth Brain Engineering Laboratory). "Brains somehow carry out hugely parallel algorithms, using very slow, sparsely connected, probabilistic elements, and they somehow scale well as they grow," he says. "Big-brained mammals exhibit capabilities far beyond small brains, and there evidently are not severely diminishing returns." Also, they use a very small amount of power.

If we could learn from the brain, what might we achieve? With circuits as reconfigurable, specialized, and fault tolerant as the brain, a new species of computer would be better at recognizing faces and driving a car than the world's most sophisticated supercomputers. Right now, Roadrunner, the supercomputer that comes closest to replicating a human's ability to drive in rush-hour traffic, weighs 227 metric tons and requires a diet of about 3 megawatts. By contrast, the brain regularly handles rush-hour driving on 20 watts (comparable to the power consumption of a Nintendo Wii), and its 1.5 kilograms fit neatly into your skull.
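The comparison above works out to roughly five orders of magnitude in both directions (the figures are from the paragraph; this is just the division made explicit):

```python
# Figures from the text; the ratios are the only computation here.
roadrunner_power_w = 3_000_000   # ~3 megawatts
brain_power_w = 20               # ~20 watts
roadrunner_mass_kg = 227_000     # 227 metric tons
brain_mass_kg = 1.5              # an adult human brain

print(f"power ratio: {roadrunner_power_w / brain_power_w:,.0f}x")
print(f"mass ratio:  {roadrunner_mass_kg / brain_mass_kg:,.0f}x")
```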

The upshot is this: Both Modha's and Markram's projects are modeling the brain, but for different purposes and with different end results in mind, and so they focus on different aspects of brain modeling.

The Trouble With Models

But what both projects have in common is their goal of figuring out what makes the mammalian brain work the way it does. We are a long way off. As of 2009, no one has yet been able to simulate even the simplest, dumbest animal—a fruit fly, say—in the process of reacting to a drink of sugar water. "Fruit flies are actually really complicated," Chandler explains (see BU's CELEST project for more). "As far as I know, we haven't even created a synthetic aplysia or nematode yet."

Why is that? Jim Olds, a neuroscientist who directs George Mason University's Krasnow Institute for Advanced Study, explains that what neuroscience sorely lacks is a unifying principle. "We need an Einstein of neuroscience," he says, "to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity." Here's what he means by that: What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?

But it's a chicken-and-egg problem: without a brain-specific theory of relativity, there is no way to identify a basic unit of neuroscience, and without a basic unit of neuroscience, you can't build a model. So, to get anywhere, you must abstract. "[Markram] thinks it's the neuron," says Olds. "But what if that's not what makes a brain work as a brain? What if it's the synaptic spine, or the ion channels?" There are even neuroscientists who think you can never model the brain, because to do it you'd have to model all the way down to quantum effects.

No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. "We do not have a definition of consciousness," says Granger. "Or, worse, we have fifteen mutually incompatible definitions."

No Six for You

It should be pretty clear at this point that no one's going to be building a Caprica Six any time soon. Or a cat. Or a fruit fly. But that doesn't mean all attempts at modeling any aspect of the brain are useless, or to use Markram's more loaded language, a hoax.

"To understand the brain—or any complex system—it is unavoidable that it be viewed computationally," Granger says. He argues that we should treat brain circuits as, well, circuits. "Circuits on one hand are physical systems; on the other, they are carrying out algorithms. To understand them does entail extensive simulation; but also computational, algorithm-level analysis, i.e., the elucidation of principles from simulation and experiment. Not just 'that' they work, but how and why they work." So, can we arrive at deep insights without building a few starting simulations first?

"We definitely want to build simulations, including large, intermediate, and small-scale models, at low, middle, and high levels," says Granger. "The field has indeed been doing so for many years, and much of what we currently understand (such as the examples given) grew from many such models. The field is making great strides, and work such as Henry's, and many others', are prominent examples."

Hoax or No Hoax?

And ultimately, that is where Modha's work is important. It's not that he's created a cat brain. Every journalist who has interviewed him has heard him make the following correction: "No, no, it's not a cat brain. A cat-SCALE simulation."

Some would ask whether there is a point to having a cat-scale simulation if it does not model an actual cat with any great fidelity. I think you should ask the Gordon Bell prize people*. Because for computer science, this is indeed a big deal. Neurdon's Ben Chandler says, "I don't think [other neural simulation environments] can handle 147,000 processors or 144 terabytes of memory. Actually mapping a system that large, and getting only a .3 percent deviation in workloads across cores is pretty impressive."

"We all want to understand how brain circuits do what they do," Granger says, "in particular, how their computational powers (perception, recognition, memory retrieval) manage to outperform our extant engineering systems. Let's avoid this (vast) discussion for now, and jump to this: To achieve large-scale simulations that exhibit critical characteristics of brain circuits, we will very likely need to address four current roadblock issues: speed, power, scaling, and parallelism."

A teaser for the next post (which will address the first point in Markram's letter): Modha's work addresses all of these.

*And for those calling for the Gordon Bell people to rescind their prize, please recall that the Bell prize is "awarded each year to recognize outstanding achievement in high-performance computing. The purpose of the award is to track the progress over time of parallel computing, with particular emphasis on rewarding innovation in applying high-performance computing to applications in science. Prizes are awarded for peak performance as well as special achievements in scalability."



Cat Fight Brews Over Cat Brain

12-1-09 Update: An explanation of the controversy.

Last week, IBM announced that it had run a simulation with the number of neurons and synapses present in a cat's brain.

In February 2008, the National Academy of Engineering issued a grand challenge to reverse engineer the human brain, sweetening a pot neuroscientists had already been stirring for a long time. There are as many theories of mind as there are researchers working on the problem, and in some cases there is a real grudge match between the theorists. Maybe it's because they're both affiliated with IBM in some way, but it seems that none of these feuds is bloodier than the one between IBM Almaden's Dharmendra Modha and EPFL's Henry Markram.

So it wasn't strictly a surprise when Henry Markram, the lead on the EPFL Blue Brain project, took umbrage at the publicity IBM's project received last week. He sent the following letter to IBM CTO Bernard Meyerson, CCing many members of the media, including reporters from the UK Daily Mail, Die Zeit, Wired, Discover, Forbes, and me.

Dear Bernie,

You told me you would string this guy up by the toes the last time Mohda made his stupid statement about simulating the mouse's brain.

I thought that having gone through Blue Brain so carefully, journalists would be able to recognize that what IBM reported is a scam - no where near a cat-scale brain simulation, but somehow they are totally deceived by these incredible statements.

I am absolutely shocked at this announcement. Not because it is any kind of technical feat, but because of the mass deception of the public.

1. These are point neurons (missing 99.999% of the brain; no branches; no detailed ion channels; the simplest possible equation you can imagine to simulate a neuron, totally trivial synapses; and using the STDP learning rule I discovered in this way is also is a joke).

2. All these kinds of simulations are trivial and have been around for decades - simply called artificial neural network (ANN) simulations. We even stooped to doing these kinds of simulations as bench mark tests 4 years ago with 10's of millions of such points before we bought the Blue Gene/L. If we (or anyone else) wanted to we could easily do this for a billion "points", but we would certainly not call it a cat-scale simulation. It is really no big deal to simulate a billion points interacting if you have a big enough computer. The only step here is that they have at their disposal a big computer. For a grown up "researcher" to get excited because one can simulate billions of points interacting is ludicrous.

3. It is not even an innovation in simulation technology. You don't need any special "C2 simulator", this is just a hoax and a PR stunt. Most neural network simulators for parallel machines can can do this today. Nest, pNeuron, SPIKE, CSIM, etc, etc. all of them can do this! We could do the same simulation immediately, this very second by just  loading up some network of points on such a machine, but it would just be a complete waste of time - and again, I would consider it shameful and unethical to call it a cat simulation.

4. This is light years away from a cat brain, not even close to an ants brain in complexity. It is highly unethical of Mohda to mislead the public in making people believe they have actually simulated a cat's brain. Absolutely shocking.

5. There is no qualified neuroscientist on the planet that would agree that this is even close to a cat's brain. I see he did not stop making such stupid statements after they claimed they simulated a mouse's brain.

6. You should also ask Mohda where he got the notion of "reverse engineering" from, when he does not even know what it means - look the the models - this has nothing to do with reverse engineering. And mouse, rat, cat, primate, human - ask him where he took that from? Simply a PR stunt here to ride on Blue Brain.

That IBM and DARPA would support such deceptive announcements is even more shocking.

That the Bell prize would be awarded for such nonsense is beyond belief. I never realized that such trivial and unethical behavior would actually be rewarded. I would have expected an ethics committee to string this guy up by the toes.

I suppose it is up to me to let the "cat out of the bag" about this outright deception of the public.

Competition is great, but this is a disgrace and extremely harmful to the field. Obviously Mohda would like to claim he simulated the Human brain next - I really hope someone does some scientific and ethical checking up on this guy.

All the best,


Tech Awards Honor Technology for Humanity

Here in Silicon Valley, where folks line up for the latest iPhone release and twitter their every waking thought, it’s often easy to forget that it’s not the sleekest and fastest and newest technology that makes the most difference. For most of the world, it’s about the basics—food, water, light.

Every year, the Tech Museum and Applied Materials bring together Silicon Valley luminaries with entrepreneurs from around the world to focus on just that kind of basic technology and the difference it can make. Last night, at the annual Tech Awards gala, entrepreneurs from fifteen organizations working to make that kind of difference were honored as laureates, five of those organizations received cash prizes of $50,000 each, and former Vice President Al Gore accepted the 2009 James C. Morgan Global Humanitarian Award.

The laureates truly represented the world—they came from as far away as Nigeria, Mexico, South Africa, Botswana, India, Mozambique, and Brazil, and as close as nearby Emeryville, Calif. They’ve figured out ways to light the night with lanterns and boat-based solar charging stations, to water crops with simple drip irrigation systems made locally using laser drilling, to let people confirm that prescription drugs are what they say they are by sending a simple text message, and to make people's lives better in diverse other ways.

Winners of the cash prizes were:
• Joseph Adelegan of Nigeria for “Cows to Kilowatts,” a project that decontaminates the waste stream from slaughterhouses and turns it into methane fuel.
• Alternative Energy Development Corp. of South Africa, for its zinc-air fuel cell power systems.
• The Akshaya Patra Foundation of India, for its school meals program, which uses high-tech kitchens to serve millions of schoolchildren a morning meal daily.
• World of Good Development Organization, based in Emeryville, Calif., for its Fair Wage Guide Software that helps price local handmade goods around the world and encourages ethical trade.
• PATH, working in India, Brazil, and Colombia, for Ultra Rice, a nutrient fortified additive to standard rice.

For a full list of laureates, and information about their projects, click here.

Photo courtesy of Applied Materials

Energy Management Startup Wins 2009 CleanTech Open

EcoFactor, a California entrant in the Smart Power category, was crowned winner of the 2009 Cleantech Open, a business competition created to find, fund, and foster startup clean technology companies. EcoFactor has designed a software system that communicates with home thermostats to reduce energy use. The company won $250,000 in cash and services.

The competition also honored two runners-up: Alphabet Energy, a waste-heat recapture venture, and my personal favorite, MicroMidas, a company with technology to transform sewage into plastic.

The Open itself was limited to entries from California, the Rocky Mountain region, and the Pacific Northwest. However, a separate, parallel competition, run with the help of the Kauffman Foundation, offered a prize of $100,000 in services to entries from around the world. The winner of this Global Cleantech Open Ideas competition was Replenish Energy of Puerto Rico, a company that uses microalgae as a source of fuel.


