Tech Talk


Homebrewed Nukes

I've been doing research into school science projects, and came across this interesting item, billed as "The Ultimate Science Fair Project" - homebrewed nuclear reactors.

It's not as off the wall as it seems, and it brings to mind the forgotten story of gamer Cameron Sneed. Twenty-two-year-old Sneed lived with his parents in Rockwall, Texas, a small town east of Dallas. He was an auto school dropout, but he was also a resourceful geek who loved to make things. While working as a coder at a local telecom, Sneed got the video game S.T.A.L.K.E.R.: Shadow of Chernobyl. He couldn't afford a new PC to play the shooter, which was set in the post-meltdown hellscape of the infamous nuclear power plant. So Sneed tweaked his old graphics card and cooling device to keep it from overheating.

The game was good, full of radioactive mutants to kill. But the graphics were lacking, so Sneed created a "mod" to fix them. He uploaded his mod and promoted it on the site Ars Technica, a gathering place for hackers and tinkerers. More than 50,000 people downloaded Sneed's tweaked version of the game, and PC Gamer named it Mod of the Year. "I'm glowing with pride that this project came out of our own community," effused one geek on the Ars Technica forum. Sneed, unemployed, basked in the glory for a while. Then he decided to build something a little more ambitious: a nuclear reactor. "The whole point of the project was to prove to myself that you can breed materials with little expense in your garage and do it relatively safely," he said.

He wasn't the first. In 1994, a 17-year-old misfit named David Hahn gained notoriety after a failed attempt to build a fast breeder nuclear reactor in his parents' backyard. Hahn, dubbed "The Radioactive Boy Scout," never could shake his hobby: he was later rearrested after stealing several smoke detectors, presumably to harvest their Americium-241. Hahn's mug shot shows a face dotted with lesions, which were caused by repeated exposure to radiation.

But Sneed was determined not to become another David Hahn. Unlike him, Sneed wouldn't sneak around or steal; he'd build his reactor in public. Last November, he logged onto Ars Technica and posted: "I will document all experiments and injuries along with odd phenomenon such as opening the gates of hell." Sneed snagged some Americium-241, as well as some natural radioactive ore, on eBay. He boasted of producing Plutonium-239, a component of nuclear weapons. He later wrote, "I melted a large hunk of uranium out of one side of an ore chunk. I am concerned that background radiation level in my office and bedroom have almost doubled." Meanwhile, posters in the Ars Technica forum begged Sneed to stop. "Do not ionize or vaporize uranium!" one geek wrote. "It's not the radiation that will kill you, it's the fucking heavy metal toxicity." Sneed ignored such warnings. "I am no David Hahn and am not as stupid," he posted. "I HAVE built a functioning breeder Aluminum+Lead shield, but some radiation is escaping. I'll beef it up."

He didn't get the chance. Agents from the FBI and the Texas Department of State Health Services' Radiation Control Program showed up at his parents' house. They'd been tipped off by someone on Ars Technica. Because radiation levels were not yet dangerous and the materials had been legally obtained, Sneed was not arrested. FBI spokesman Mark White even admired Sneed's handiwork, saying that "if he had kept his experiment going, it probably wouldn't have blown up."

Life Among the Internet Natives

My two youngest children—now 14 and 11—are Internet natives. They relate to the Internet the way I relate to running water—when I need water, I turn on the sink; when they need information, they open a browser. (My oldest child, now 18, isn't quite in the same space; he's more like someone who emigrated as a child: he's comfortable, he speaks the language, but he still has connections to the old country. He's been known to go to the library to find information he needs, for example.)

Since I'm a relatively happy Internet immigrant, I mostly forget how different the Internet has made my children's world from the one I grew up in, and how it continues to change theirs. But sometimes I'm struck by the ubiquity of the technology. And it doesn't always happen in the highest-tech environments.

This year my daughter competed as part of her high school’s mock trial team. Mock trial is a high school competition, with county, state, and national tournaments.  Students study a case, field defense and prosecution teams, and then try the case in front of a real judge and a jury made up of legal experts. When I watched my daughter’s team compete as part of the California Mock Trial Program, the courtroom was as traditional as it gets—an old courthouse, heavy oak furniture, the judge in his black robes.

The case itself, however, a murder trial in which a comedian is accused of killing someone who gave him a bad review on an online ratings site, turned on Internet technology. There were no witnesses, no DNA evidence. There was a little low-tech evidence, in the form of tire tracks, but these only put the defendant’s car at the scene, not the defendant.

Instead, they had an email and two tweets.  To me, the Internet immigrant, it seemed odd that both the defense and the prosecution were whipping out this information as evidence; to the Internet natives on the teams, however, it made perfect sense, for what they do on the Internet is as real as what they do in the real world.

First, the email: the defendant had sent a personal message to the critic through the online review site "YellUp" (perhaps a loosely disguised "Yelp"), giving him a last chance to remove the review and threatening, if he didn't, "to do more than ruin [his] livelihood." The detective (my daughter) had discovered the message during a search; up for pretrial debate was whether that search was legal. The detective had a warrant to search the defendant's car, house, and computer, and all records or information on purchases, no matter where stored. She viewed the browser history, then clicked through into the YellUp site to find the message. The pretrial arguments centered on whether this was admissible under the search warrant, since YellUp is not a shopping site, or whether only data on the computer itself should have been searched, and not data in the cloud.

Then there was a Twitter message, also introduced by the prosecution. The defendant had tweeted, "I'm going to kill tonight and shut up the critics once and for all." The defense didn't argue that tweets should be inadmissible as evidence, but instead brought forward witnesses who explained that "kill" is a term comedians use to describe putting on a great show.

The prosecution wasn't the only team pulling evidence out of the Internet. The defense brought out a tweet as an alibi, arguing that the defendant couldn’t have murdered the victim at the time in question because he had tweeted from a computer, not a phone, around the same time.

My daughter’s team was knocked out at the county level, having won a few and lost a few. But other teenagers around the country will continue for the next few months to argue a murder case based on Internet evidence. And they won’t realize at all that they really are living in a new world.


John Carmack's Lifetime Achievement

This week, news comes that one of the most influential programmers in the videogame industry, John Carmack, will be receiving a lifetime achievement award at next month's Game Developers Conference in San Francisco. Carmack joins luminaries including Will Wright and Shigeru Miyamoto. Carmack is known for popularizing the first-person shooter genre with innovative games such as Wolfenstein 3-D, Doom, and Quake. I spent months interviewing him for my first book, Masters of Doom, and was always impressed by his commitment to his craft - and his dedication to sharing his knowledge (and code) with others. In honor of his honor, here's an unpublished excerpt from my Masters of Doom interviews (this one took place in 2000) in which Carmack tells me why he enjoys programming.

JOHN CARMACK: It is one of those things where it really has this wonderful sense of rightness. There's something about working on the programming where you're able to create things out of thin air. It's so flexible and such a creative medium. You've got these almost limitless possibilities. If you could think about it and figure out the right puzzle-piece way of fitting it together, then you could make it happen. [I enjoy this] sense of being in this self-contained world where you don't need a machine shop full of tools or you don't need to order supplies from different places. You've got your computer and your basic development tools and you can just sit down there and it's up to you. It's not anybody else's fault if it doesn't work. It's all you. And I guess that's part of the thing. I've never been a team player; I don't like team activities or anything like that. That's probably a good chunk of it. You don't have to rely on anybody else when you're working on the programming stuff.

It's very cut and dried. If it follows the logical progression of the rules established, it will work. Everything makes sense. Even when tracing out the hardest, most awful bug, and it's kind of random and it doesn't seem to follow any rhyme or reason, you can always come back to the bedrock of this: it does make sense, you just don't understand it yet. It's great when you find something that seems so horribly random and you find out that you really understand it. That does happen in all forms of engineering, but it's just so much more fluid and rapid in computers. Comparing against some of the people that I deal with in the auto racing stuff or rocketry stuff, they still have the same types of things when they finally understand why something didn't work right. But the difference between that and the computer stuff is it may take so much longer, the tests are so much cruder, you can't repeat the things, and in the end you may have burned and broken various other things or had to wait weeks for new parts to get in.

But with a computer, you can just work at it until you can't work anymore. Eventually, it is always possible to get it. There is hardly any time when you can say 'this is not possible to find.' Computers are deterministic things. At some point you can go in and start emulating the entire machine, cycle by cycle, and find out exactly what's happening. That's probably the big thing: in the end it all makes perfect sense and it's accessible sense. It's not like some form of high-energy physics or something where you spend a decade of your career preparing for that one big blast of the particle accelerator, and then work for five more years analyzing this stuff. You can find the truth in programming on a much more rapid scale.


At the Mobile World Congress that took place from 15–18 February in Barcelona, Texas Instruments announced the commercial launch of a chip that will allow even the thinnest flip-style cellular handsets to feature miniature projectors. These so-called pico projectors can create 640-by-360-pixel images as big as 50 inches diagonal. TI's latest digital light processor, or DLP, chip exploits the company's MEMS technology, whereby millions of tiny moveable mirrors reflect red, green, and blue light from LEDs onto a wall or curtain.

The chips will also start appearing in digital cameras this year, which means no more crowding around someone's SLR to see the shots he or she just took.

Earlier-generation chips using TI’s digital light-processor technology are already making their way into larger handsets and freestanding projectors. The freestanding units, which are the size of a deck of playing cards, let people travel with all they need for business presentations or can be used as add-ons to media players, gaming consoles, and laptop computers.

For more on what's out there and what's to come, take a look at an IEEE Spectrum video podcast featuring palm-size projectors put through their paces.



A Three-Axis Gyroscope with Just One Sensor

Smart phones that don't know where they are or where they're going are seeming less smart by the minute. [That point is made in the February IEEE Spectrum news article, "A Compass in Every Smart Phone."] Besides GPS, phones with electronic compass functions need accelerometers and, increasingly, digital gyroscopes.

 “Cellphone companies continually demand smaller size, less power, and lower cost,” says Jay Esfandyari, MEMS product marketing manager at STMicroelectronics. But there have been some important limits.

Heretofore in gyroscopes, movement about the three axes was measured by three separate sensing structures—one for pitch, one for yaw, and another for roll. At most, two would be combined on a single die. The best you could do was, say, a 3-by-5-by-1-mm yaw sensor matched up with a 4-by-5-by-1-mm sensor that would detect pitch and roll. But now ST has managed to make a 4-by-4-by-1-millimeter MEMS gyroscope whose single sensing structure tracks all three angular motions. "The aim now," says Esfandyari, "is to eventually shrink them down to 3 mm square, which is the average footprint of accelerometers inside smart phones."

The gyroscope comes preset with one of three sensitivity levels, which let the device trade speed for resolution. For gaming, it can capture movement as quick as 2000 degrees per second, but it can only distinguish movements larger than 70 millidegrees. The version for user interfaces (say, a wand or a wearable mouse, which track smaller, more controlled shifts such as pointing and clicking on a computer screen) can pick up movements as fast as 500 degrees per second and can distinguish movements of 18 millidegrees or more. The most sensitive version, which tops out at 250 degrees per second, can sense the slightest movements, anything greater than 9 millidegrees.
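To make that tradeoff concrete, here is a minimal sketch of how raw gyroscope samples might be converted to angular rates at each of the three sensitivity settings. It assumes a signed 16-bit output register and uses the approximate per-count resolutions quoted above (70, 18, and 9 millidegrees); the constant and function names are illustrative, not taken from ST's documentation.

```python
# Illustrative only: per-count resolution at each full-scale setting,
# using the approximate figures quoted in the article (not datasheet values).
SENSITIVITY_MDPS_PER_COUNT = {
    2000: 70.0,  # gaming: very fast motion, coarse resolution
    500: 18.0,   # user interfaces: pointing and clicking
    250: 9.0,    # most sensitive: slow, subtle movements
}

def raw_to_dps(raw_count: int, full_scale_dps: int) -> float:
    """Convert a signed 16-bit gyro sample to degrees per second."""
    mdps = raw_count * SENSITIVITY_MDPS_PER_COUNT[full_scale_dps]
    return mdps / 1000.0

# The same raw count means very different rates at different settings.
for fs in (250, 500, 2000):
    print(fs, "dps full scale ->", raw_to_dps(1000, fs), "dps")
```

The point is simply that a wider full-scale range spreads the same 16-bit output over a larger span of rates, so each count represents a coarser movement.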

The gyroscope also represents a new benchmark for low energy consumption, according to ST. The new 3-axis device draws 6 milliamps; two years ago, ST's single-axis gyroscopes drew 9 mA. The device gets more done with less energy because it operates in what's called flip mode. The mechanical structure of the device is always on, but the sensing structure is off when a gadget's direction-finding function is not in use. When the gyroscope is needed, the sensing structure can be flipped on and made ready to record movement readings in less than 40 milliseconds. ST aims to reduce the lag to roughly 15 ms within the next year or so. Esfandyari says this haste in turning the sensing structure on and off is critical because in applications such as dead reckoning, missing the initial movements makes it almost certain that every subsequent reading will be in error.
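A back-of-the-envelope example, not ST's algorithm, of why that start-up lag matters for dead reckoning: heading is the integral of angular rate, so any rotation that happens before the sensing structure wakes up becomes a permanent offset in every later estimate. The numbers below are made up for illustration.

```python
def integrate_heading(rates_dps, dt_s, missed_samples=0):
    """Integrate angular rate into a heading, skipping the first N samples."""
    heading = 0.0
    for i, rate in enumerate(rates_dps):
        if i >= missed_samples:
            heading += rate * dt_s
    return heading

dt = 0.01  # 100 Hz sampling
rates = [200.0] * 20 + [0.0] * 80  # a quick 200 deg/s turn lasting 0.2 s

true_heading = integrate_heading(rates, dt)                  # 40 degrees
late_start = integrate_heading(rates, dt, missed_samples=4)  # misses the first 40 ms
print(true_heading - late_start)  # ~8 degrees of error that never goes away
```

Missing just the first 40 milliseconds of that turn leaves the estimated heading about 8 degrees off, and nothing the gyroscope reports afterward can correct it.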



The Social Panopticon And You

Over the past couple of weeks, Google has gotten repeated bloody noses from tech journalists over the Buzz debacle. Before Buzz, Google’s “Don’t Be Evil” philosophy was backed up by a lot of carefully thought-out and well-executed applications. By comparison, Buzz was so uncharacteristically tone-deaf that people went from 0 to "conspiracy theory" in 60 seconds. Personally, I think Buzz was an honest mistake from a company that skews young, and young people are notorious for being laissez faire about privacy concerns.

Buzz is a symptom. The disease is the social media panopticon. Since Jeremy Bentham posited it as the ideal architecture for guarding inmates, the panopticon has been a popular stand-in for the all-seeing eye of the state (just Google “UK and panopticon” to see this dead horse beaten into fine dust). But the rolling media freakouts about Facebook picture tagging have illustrated that the bogeyman isn’t the government. It’s us watching us. We are the panopticon, man! Soylent green is people!

I wasn’t always like this. Once upon a time, friends would try to spook me with tales of Carnivore or Total Information Awareness, and my response was essentially, “make my day, punks!” Visions of airless basements filled my head, hapless poindexters drowning under unceasing floods of information overload.

I am dismayed to say, however, that the confluence of three developments has changed my mind completely.

1. Every single person on this earth has a camera phone and a blog
2. Facial recognition software and real time search are very close to eliminating the anonymizing effect of the data glut
3. Your employer is starting to wonder what you do when you’re not at work

I am now looking for a milliner who specializes in aluminum.

1. If you see something, say something

Back in January, some callow jerk started posting uncharitable photos of N-train commuters and their various offenses. Truly, it was little more than a tedious compendium of uninteresting irregularities: looking through a large purse while wearing a colorful scarf; putting on makeup; being overweight; being homeless. This was a mediocre data point in an exploding trend of cell phone camera auteurs posting their sartorial observations on their various blogs.

What made this story unique was that the N Train bit back: various Gothamist commenters, outraged by the attack on their privacy, did some basic detective work to broadcast his real name and likeness. There’s now even a special Twitter feed (Revenge of the N Train) whose goal is to catalog sightings of the guy.

It was all made possible by Google's caching feature. When the Gothamist crowd started to home in on his true identity, the N Train chronicler immediately jacked up all his Facebook privacy settings. Then he quickly erased some incriminating personal tweets. No dice, buddy: the Gothamists posted an impressive array of screenshots of his cached personal Facebook page alongside older, pre-edited versions of his personal Twitter feed. "Now, if a potential employer Googles Pete Malachowsky, they'll find a Gothamist article talking about how creepy he is," wrote commenter Hotcup gleefully in the article's epilogue. "Serves his creepo a** right."

On its own, this is a heartwarming tale of a city banding together and giving someone a taste of his own medicine. But now consider how difficult it will be for Malachowsky to clear his record. This excellent report from Cornell Information Technologies lays out the steps he would need to take to get the information removed from Google.

You must go through their policy process for removing information from their caching technology. Not only is that a lot of bureaucracy, but also you should know that while Google is the dominant search engine on the Internet today, it might not be tomorrow. Moreover, other search engines operate currently on the Internet and so it is not just Google whom you might have to contact in order to remove a page.

Now just imagine having to go through this if you haven't done anything to deserve it.

2. Can You See Me Now?

Consider also the development of video and image recognition software. Right now, governments and corporations are heavily funding face recognition software, governments for purposes of defeating terrorism, corporations for purposes of making awesome augmented reality apps. As with every technology ever, any great military capability will trickle down to the average person with a blog. Real-time search will pre-sort and catalog every single bit of piffle that hits the Internet--including that picture of you picking your nose on the N train.

A year ago, the launch of the MOBVIS building image-recognition software proved that computers can now autonomously identify individual buildings. The sure-to-be-upcoming Google Face app will have your image sorted and catalogued the second it is created. So imagine you've had your picture snapped by five different people today, all populating their mediocre fame-whore blogs. The new generation of information aggregators will suck up those pictures, pre-sort them, and slap your name on them before you can say "tinfoil hat."

3.  "Sometimes I doubt your commitment to Sparkle Motion"

Recently, NPR sent out a missive to its journalists. The gist of it was this: Don't do anything off work hours that you wouldn't intentionally do to represent the company.

Your Facebook page, your blog entries and your tweets – even if you intend them to be personal messages to your friends or family – can be easily circulated beyond your intended audience. This content, therefore, represents you and NPR to the outside world as much as a radio story or story for NPR.org does.

I'm not singling out NPR. Universities have instituted policies that spell out that certain activities, done off the premises and ostensibly outside the penumbra of the university's authority, will get your butt kicked out of school. At the University of Arkansas, for example, athletes and Greek Life members must have their Facebook profiles open to screening at all times. These preventive policies were put in place to allow organizations to police online activities much the way they police other behaviors (political protesting, sexual relationships, alcohol consumption, etc.).

I’m not saying NPR shouldn’t try to inoculate itself against some of the truly bad PR that can result from employees gone wild. The law firm Norton Rose is probably still putting raw steak on the black eye it received from the Claire Swire incident almost a decade ago. The Carlyle Group was poorly represented by the young Lothario whom it relocated to S. Korea, and then promptly re-relocated to the unemployment line after his conquest-bragging emails reached top brass.

Beyond policing existing employees, it’s been much reported that companies are using social networking sites to vet their potential hires (prompting a phenomenally popular New York Times Facebook privacy how-to by Sarah Peretz). The Peretz article puts the number of companies using Facebook in particular at 30 percent. “In today's tough economy,” Peretz goes on to say, “the question of whether to post those embarrassing party pics could now cost you a paycheck in addition to a reputation. (Keep that in mind when tagging your friends' photos, too, won't you?)" Sure, but what happens when I can tag people who are not my friends and I don’t care about the consequences to their lives?

In conclusion, here's my nightmare scenario: After an especially rough day at Sparkle Motion Corp., I get on the N Train cradling a big bottle of Jim Beam. An officer of the law takes me downtown for public inebriation, and my perp walk is captured for posterity by a jerk with a camera phone and a blog. Even though his dumb blog has only three readers, Google Face identifies my sodden visage by name, and puts those pictures into the eternal damnation of Google Image search results. My employer, seized with a sudden itch to make sure its employees are representing Sparkle Motion as well as they possibly could be, finds the evidence of my malingering ways, and I get fired. In a terrible economy, my next job application is derailed by a simple Google Search.

Am I overreacting? Can someone please give me The Talk I used to give my paranoid friends?



A Canticle For DARPATech

I'll never eat Pentagon M&Ms again. A DARPA spokesperson has confirmed that there will be no more DARPATech, the Defense Advanced Research Projects Agency's roughly biennial (occasionally annual, and at other times held only every three years) conference, at which the latest and greatest "mad science" technologies go prime time. A quick eulogy is in order.

DARPATech 2007 featured a particular bumper crop: robot arms, unmanned autonomous Humvees, all-seeing blimps, an autonomous insect-like robot called Little Dog, a robo-beast of burden called Big Dog, and a deputy director's bikini-clad demonstration of a core-temperature-regulating glove. There was even a bracing dose of reality.

Yes, DARPATech was probably a PR boondoggle meant to remind news outfits that the Defense Department isn’t just about killing people. But is that so wrong? Of all the Defense agencies, DARPA is probably the best-run. DARPA program managers have four-year contracts, and they never get the chance to become career bureaucrats. After their term is up, they are told to skedaddle no matter the status of their project. The agency is low on bureaucracy and high on ideas. And the ideas are life-changing.

However, the Obama administration is likely avoiding highly visible celebrations of war. That might be an unfair description of DARPATech, but how else would you characterize 3,000 defense contractors hanging out at a convention so elaborate and shiny that it makes a trip across the street to Disneyland (literally) seem boring?

Most likely, the biggest reason is money. When Danger Room blogged the 2007 convention, reporter Sharon Weinberger observed that the best-kept secret at DARPATech was "how much it costs."

According to Yudhijit Bhattacharjee at ScienceInsider, the FY2011 budget for DARPA is $2.9 billion. Though the agency lost $100 million from 2010, it shifted $200 million to basic research (bringing that amount to $2 billion). To get to that number, DARPA said that it had to chop some "low priority weapons development programs." Also: shiny conventions across the street from Disneyland.

The DARPA spokesperson told me that the agency has been pursuing "different arrangements." In January, for example, it hosted the DARPA Industry Summit in Washington, DC, "to discuss key globalization issues," and he says that DARPA expects to hold similar meetings in the future.

Full disclosure: I am an unabashed DARPA fangirl. For me, this is very sad news indeed.


(Blue) Brain and Beauty

A couple weeks ago, I was invited to see the new Rolex Learning Center at the Ecole Polytechnique Fédérale de Lausanne.  Architecture critics are fawning over the $65 million building.  A writer in the Guardian yesterday compared it to "some filmic version of the afterlife."  But to me what's even more remarkable is what's going on nearby at the school:   they're building a brain.

The Ecole Polytechnique Fédérale de Lausanne is a grim campus of sparsely windowed gray buildings thirty minutes north of Geneva, Switzerland. When I was last there a couple of years ago, I met Marcus Bartschi, a wry, 43-year-old computer engineer, who sat at a desk in a dim office with a rusty saw hanging inexplicably on a bare concrete wall. He had a scruffy black beard and dark circles under his eyes, brought on by his two-week-old son. "My boy has only physical needs now," Bartschi told me, resignedly. "There's not much for me to do." So he tended to his other baby: a four-ton supercomputer named Blue Gene. Developed by IBM, Blue Gene is one of the world's most powerful high-performance machines, and it was being used here for a suitably super, some say impossible, mission: to simulate a brain. Bartschi had the vulnerably human task of keeping it alive. "Hellooooo," he cooed, as the laptop monitoring Blue Gene fired up.

The project, dubbed Blue Brain, is the dream of Dr. Henry Markram, the neuroscientist in charge of the EPFL's Brain and Mind Institute. Several years ago, Markram approached IBM with an ambitious plan: to accelerate the normal path of research by building a three-dimensional model of a mammalian brain, as he said, "in silico." Skeptics such as MIT's Marvin Minsky are already questioning whether the project can deliver, but Markram won't rule out even its most startling possibility. "We do not yet know if consciousness will emerge in these artificial brains," he has said, "and we will consider the ethics of this if this happens."

Bartschi, a diehard Hitchhiker's Guide fan, was more skeptical about his Blue Gene baby. "This is no HAL," he said. Blue Gene, however, was robust (with a processing speed of 22.8 teraflops, it was then the eighth-fastest supercomputer in the world) and affordable ($2 million per rack) enough to give Markram his crack. The goal was to first simulate a single neocortical column, then take another ten or so to replicate an entire brain. While Markram's team tended to the data-crunching software, Bartschi had the less glamorous job of caring for Blue Brain's pillar: Blue Gene. "It's a lot of pressure," he said with a sigh. "I want it to work 100% of the time."

After the fanfare of the announcement, Bartschi oversaw the installation of the four refrigerator-size racks, which resembled the black monoliths from 2001: A Space Odyssey. After two weeks of assembly, he triumphantly booted up Blue Gene, only to see it repeatedly, and mysteriously, shut down. For two weeks, Bartschi searched for an explanation, only to discover that a solid steel beam underneath the raised floor was obstructing 50 percent of the airflow required to keep the machine cool. It took another two weeks for Bartschi's team of six to schlep Blue Gene down the hall. The next month, a crucial fan broke, and there was no replacement on site. The part was ordered from Rochester, New York, but, due to a mix-up in shipping, was sent low priority, holding up Blue Gene for more than a week. Despite such tantrums, Bartschi had developed a soft spot for the machine. "The whole setup is complex," he said. "In this sense, it is a living thing."


Enhanced Imagination Drives Brain-Computer Interface

It's been clear since brain-computer interfaces were developed that customizing these devices would require learning on the part of both the machine and the human. New research in the Proceedings of the National Academy of Sciences gives evidence that humans quickly adapt to BCIs.

A team of neurologists and computer scientists at the University of Washington recruited epilepsy patients awaiting surgery and recorded their brain activity with electrocorticography (electrodes attached to the surface of the brain) before and after they manipulated a simple BCI.

First of all, here's what they did. They recorded during three circumstances: when patients imagined moving a hand, when they actually moved it, and when they moved a computer cursor by manipulating a BCI. The activity during the imagined task mapped roughly onto the recordings from the actual movement, but the signals were weaker. When the patients were hooked up to the BCI, the pattern was again similar, but the signal was much stronger than in either of the other recordings.
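As a rough sketch of the kind of comparison described here, one common way to quantify how "strong" such activity is would be to estimate spectral power in a high-frequency and a low-frequency band for each condition and compare them. Everything below is a placeholder: the sampling rate, the band edges, and the random stand-in signals are illustrative, not the paper's actual data or parameters.

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # Hz; hypothetical ECoG sampling rate

def band_power(signal, fs, lo_hz, hi_hz):
    """Mean spectral power of the signal between lo_hz and hi_hz (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return psd[mask].mean()

# Stand-in single-channel recordings for the three conditions.
rng = np.random.default_rng(0)
conditions = {
    "imagined movement": rng.standard_normal(10 * FS),
    "actual movement": rng.standard_normal(10 * FS),
    "BCI cursor control": rng.standard_normal(10 * FS),
}

for name, sig in conditions.items():
    high = band_power(sig, FS, 76, 100)  # high-frequency band (placeholder edges)
    low = band_power(sig, FS, 8, 32)     # low-frequency band (placeholder edges)
    print(f"{name}: high-band power {high:.3g}, low-band power {low:.3g}")
```

With real recordings, the interesting question is how the high-band and low-band power during imagery and BCI control line up with the pattern seen during actual movement.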

The press release pitched this as evidence that BCIs are a "workout" for the brain. I don't completely buy this. The brain isn't a muscle and more activity doesn't necessarily mean it's operating at a higher level. What it does indicate (to me), and what I find far more interesting, is that people can quickly change their brain activity to accommodate BCIs. It also shows how important visual feedback is to people who are manipulating these devices. Experiments like this seem like a good way to maximize the level of feedback a user is getting and to test out different ways of delivering it.

It's also substantial proof that the brain activity produced when we imagine a movement or task can effectively drive BCIs. Every group that's developing BCIs right now is doing it slightly differently, and so far there is no clear consensus on which brain signals should be used.

This is the first paper I've seen that focused fully on what brain activity looks like when it's manipulating a BCI. The output of the setup was a cursor moving on a screen. The experiment is a good indication that BCIs have become well enough understood that we can use them in experiments as tools to once again study the brain itself.

That being said, there are also some really interesting things to be learned from this article about the brain in general and the difference between imagining and actuating movement. Here are a couple points that may get you to read it and some questions you can respond to if you do.

1. During both tasks, high frequency signals increase while low frequency signals decrease. Does this mean that part of attending to a task is muting some of the competing activity?

2. Of the two, it is the signals that decrease which map similarly in both imagery and movement. Does this mean you could further localize an area that controls movement imagery in the high-frequency signals?


AT&T Wireless Adds Windows Phone 7 Series

Microsoft is a distant third when it comes to mobile phone operating systems, but the company has a legendary marketing ability to hang tough and emerge on top. So the long-awaited announcement of Windows Mobile 7 (or “Windows Phone 7 Series,” as Microsoft now styles it) was picked over by the trade press like a leftover turkey carcass on the Saturday after Thanksgiving.

It didn’t take long for the vultures to notice that Microsoft named AT&T its “premier partner” in the United States (Deutsche Telekom AG, Orange, SFR, Sprint, Telecom Italia, Telefónica, Telstra, T-Mobile USA, Verizon Wireless and Vodafone are others around the world)—nor for them to start circling the two partners with questions.

As PC World's Tony Bradley pointed out, no one really knows what a premier partnership consists of. Does it matter? Let the griping begin. AT&T already can’t handle the volume of data that iPhones are generating, and beginning in April, iPad data will only swamp it further. And now AT&T is going to take on a new data-heavy collection of smartphones from Microsoft?

Speaking as a founding iPhone user from its initial release in June 2007, I can say that each and every complaint about AT&T is borne out by my own experience. Neither call quality nor the data connection seems related to the number of bars, and calls are dropped repeatedly and at random—it's rare for me to complete any but the shortest conversations without redialing. And that's as true of my shiny 3GS as it was of the original phone.

I can’t count the number of people who’ve told me they’re getting an iPhone—as soon as it becomes available on their current carrier, which is inevitably Verizon.

In the 10 years or so (pre-iPhone) that Verizon was my carrier, I never experienced a dropped call that seemed random. Sure, there were occasional dead spots—the New York State Thruway a few minutes north of the Harriman toll plaza, for example—but calls were (forgive the term) reliably dropped there, and never in other frequently traveled places. And sure, once in a while I couldn't make a call, but whenever the carrier opened a connection, it stayed up for the duration of the call, no matter how long.

Verizon seems to have such confidence in its network that it is now allowing Skype calls on a large fraction of its smartphones.

That is, the carrier is letting subscribers use the 3G data network to make voice-over-IP calls, which cuts down on the number of minutes a user needs and loads the data network with those calls instead. That’s right: fewer billable minutes, more unbilled data. Other than a better user experience for its customers, there’s absolutely nothing good about this from Verizon’s point of view.

True, the carrier is making the best of some upcoming rule changes at the U.S. Federal Communications Commission, but it says a lot that Verizon is just going ahead with the change early, and doing so in the most open and user-friendly way, while AT&T fights the rule change tooth and nail.

I don't miss Verizon's chaotic billing practices (the ones so bad they've inspired a website of their own), but I do miss being able to call Customer Service late at night and on Sunday, yet another way in which AT&T's user experience falls short. Most of all, I miss Verizon's rock-solid network.

