Tech Talk

A glowing floating balloon patterned to look like an eyeball hovers in the evening sky outdoors.

CeBIT 2016: The Aerotain Skye Could Be Your Friendly Floating Camera Drone

Editor's Note: This week IEEE Spectrum is covering CeBIT, the enormous information and communications technology show that takes place annually in Hanover, Germany. For up-to-the-second updates, you can follow our CeBIT Ninja, Stephen Cass, on Twitter (@stephencass), or catch daily highlights throughout the week here.

Once upon a time there was a very odd British television show called The Prisoner, which featured a secret agent repeatedly attempting to escape from a mysterious village. One of the biggest threats the agent faced was a giant balloon called Rover, which would pursue and subdue rule-breaking villagers. Now Rover has been brought to reality, albeit in a much more adorable version, thanks to the engineers at Aerotain and their Skye inflatable drone.

The Skye is a 3-meter-diameter controllable balloon that’s filled with helium for buoyancy. Dotted around the surface are propellers whose direction can be adjusted, spinning the balloon or moving it around as required. There’s also the option to add an internal projector to display moving images on the balloon’s skin. Skye has been used at events as a crowd-pleaser, but it can also be used as a platform for aerial photography by adding cameras.


Meet the Guy Whose Software Keeps the World’s Clocks in Sync

Clocks sprang forward last weekend in about 75 countries. Over time, technology has spared many citizens who observe daylight saving time the headache of physically changing their clocks. Electronic clocks automatically reset the time—a subtle convenience made possible by the rise of the global Internet, a network of real-life atomic clocks, and a physicist who has spent decades finding smarter ways to distribute time.  

In many cases, the internal clock that ticks away in a laptop or desktop computer is synchronized to an official time service maintained by the U.S. National Institute of Standards and Technology (NIST). This free service shares Coordinated Universal Time with personal devices, Web browsers, financial trading software and e-mail programs throughout the world. The service receives 150,000 requests per second (roughly 16 billion a day) from systems that repeatedly ask, “What time is it?”

“If you have a PC, it's probably synchronized to the time service,” says Judah Levine, the man who originally built servers and programmed software to send time over the Internet for NIST back in 1993.

Here’s how it works: electronic clocks are programmed to check in (once an hour, on average) and record the time from a network of 20 “timeservers” scattered throughout the U.S. Three of those servers—two in Boulder, Colo., and a third in Fort Collins, Colo.—are physically linked to atomic clocks, the newest of which is so accurate, it gains or loses only a second every 300 million years. At those sites, an electric pulse signifying the start of each second is generated by the atomic clock and delivered to a server.

Once the pulse has arrived at a server, the signal is translated to the precise hour, minute and second of the day in Coordinated Universal Time using a string of characters sent separately from the clock to the server. This code enables the server to identify a given second as 16:02:56 UTC, for example. 

Next, this information is shared with the other NIST servers through a phone line and distributed to devices and systems over the Internet, primarily in 48-byte packets via the Network Time Protocol. Adjustments for time zones, daylight saving time, or leap seconds are made by Internet service providers or handled by instructions built into a network or device.
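Those 48-byte Network Time Protocol packets are simple enough to build by hand. Here is a minimal sketch of a client query in Python (an illustration only, not NIST's code; the server name, timeout, and request fields are the textbook defaults):

```python
import socket
import struct

# Seconds between the NTP epoch (1 Jan 1900) and the Unix epoch (1 Jan 1970)
NTP_UNIX_OFFSET = 2208988800

def build_ntp_request() -> bytes:
    """Build a minimal 48-byte NTP client request (LI=0, VN=3, Mode=3)."""
    packet = bytearray(48)
    packet[0] = 0x1B  # 0b00_011_011: leap unknown, version 3, client mode
    return bytes(packet)

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert NTP seconds-since-1900 to Unix seconds-since-1970."""
    return ntp_seconds - NTP_UNIX_OFFSET

def query_time(server: str = "time.nist.gov") -> int:
    """Send one request and parse the transmit timestamp (bytes 40-43)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(build_ntp_request(), (server, 123))
        data, _ = s.recvfrom(512)
    (seconds,) = struct.unpack("!I", data[40:44])
    return ntp_to_unix(seconds)

if __name__ == "__main__":
    print(query_time())
```

A real client would also use the packet's other timestamp fields to estimate and subtract network delay, which is how sub-second accuracy survives the round trip.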

While many nations operate an official time service, NIST has the highest-capacity timekeeping network. It’s also the most popular. Jeff Sherman, a NIST physicist, recently tracked requests to two NIST timeservers for one month. He found requests originating from 316 million unique IP addresses, which he estimates represents about 8.5 percent of all the devices currently connected to the Internet. Those addresses were registered in 244 nations.

This system has served the Internet well for more than 20 years. But Levine expects demand for the time service to grow exponentially with the rise of internet-connected wearables and household electronics. These days, the 76-year-old physicist is thinking hard about how to prepare the network for the onslaught of requests it expects to receive in the era of the Internet of Things.

“The growth rate has sort of been steady at a few percent a month,” Levine says. “We should be able to handle the next two or three years of growth. After that, we're going to have to think again.” He isn’t even sure how many requests per day the current system could handle.

One way to absorb such growth might be to add more layers to the time service. For personal devices that do not require the precise time down to the thousandth of a second, companies such as Apple, Google or Microsoft could make a single request to NIST and then provide the time indirectly to thousands of customers through layers, or stratums, of service.

“One would hope that if a toaster is connected to the Internet,” Sherman says, “it doesn't need the same degree of accuracy as a power plant and someone would arrange for that toaster to be in that stratum system.” In fact, many companies already do this—but more may need to adopt the practice as the Internet of Things takes off.
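The stratum arrangement Sherman describes amounts to caching: one upstream query serves many downstream clients. A toy sketch in Python (the class and refresh interval are invented for illustration; real NTP clients track clock drift and network delay far more carefully):

```python
import time

class CachedTimeSource:
    """Toy stratum layer: one upstream time query serves many local
    clients, so N clients cost roughly one upstream request per
    refresh interval instead of N requests."""

    def __init__(self, fetch_upstream, refresh=3600.0):
        self.fetch_upstream = fetch_upstream  # e.g. a real NTP query
        self.refresh = refresh                # seconds between upstream fetches
        self._cached = None
        self._fetched_at = None
        self.upstream_calls = 0

    def now(self):
        mono = time.monotonic()
        if self._cached is None or mono - self._fetched_at > self.refresh:
            self._cached = self.fetch_upstream()
            self._fetched_at = mono
            self.upstream_calls += 1
        # Serve the cached timestamp advanced by the local clock
        return self._cached + (mono - self._fetched_at)

# A thousand "toasters" asking for the time costs one upstream query
src = CachedTimeSource(fetch_upstream=time.time)
for _ in range(1000):
    src.now()
print(src.upstream_calls)  # 1
```

The accuracy of such a layer degrades with local clock drift between refreshes, which is exactly why a power plant would sit closer to the atomic clocks than a toaster.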

Another option would be to rely more heavily on alternatives to the NIST time service that already provide the time to many devices and networks. For example, most cell phones rely on GPS satellites maintained by the U.S. Naval Observatory to track time. Many Web browsers synchronize to clocks maintained by other sources through the Network Time Protocol, which pre-dates Levine’s service.

For now, NIST is considering restructuring its timeservers so that every server is directly linked to an atomic clock. Levine says the plan is to build a fourth atomic clock at NIST’s headquarters in Gaithersburg, Md., and maintain about 12 timeservers total between the four sites. This would simultaneously improve the system’s accuracy and boost its capacity for delivering the answer to the question, “What time is it?”

Boys and girls smile as they hold tablets with bright yellow cases

CeBIT 2016: The Kio Kit Is a Classroom in a Box

Editor's Note: This week IEEE Spectrum is covering CeBIT, the enormous information and communications technology show that takes place annually in Hanover, Germany. For up-to-the-second updates, you can follow our CeBIT Ninja, Stephen Cass, on Twitter (@stephencass), or catch daily highlights throughout the week here.

Founded in 2013, BRCK is a Kenyan company that makes a rugged router designed for places with poor communications or power infrastructure. It can act as a traditional WiFi access point, but if a wired connection goes out or is simply not available, the router can switch over to cellular networks. Power outages are compensated for by an 8-hour battery. But now the company has gone beyond its basic product with the launch in September 2015 of BRCK Education and its US $5,000 Kio Kit.

The Kio Kit is an almost literal turnkey connected classroom: A water-resistant, trunk-sized travel case contains 40 Kio 7-inch tablets and a BRCK router. The travel case wirelessly charges each tablet, and the whole unit can be charged from a wall outlet, solar power, or even a car battery. The tablets come pre-loaded with educational software chosen by the purchaser, which can be a mix of free and paid material from providers such as the Khan Academy or eKitabu, a Kenyan e-book company. Updates can be provided through the cloud when connectivity is available.

BRCK’s business development manager, Alex Masika, was at CeBIT to present early results from Kio Kit deployments at the invitation of the German Federal Ministry of Economic Cooperation and Development. Since January, Kio Kits have been sold into schools in Kenya, Tanzania, and the Solomon Islands, with additional orders coming in from Sudan, and queries from many other countries around the globe including the United States.

“The impetus for BRCK Education was the lack of education around the world, with hundreds of millions of kids going without,” says Masika. Educational content was available, but the existing set of tools, such as typical consumer-grade tablets, “wasn’t able to address the challenges faced in Africa with power and connectivity,” he adds. Even something as basic as charging multiple mobile devices proved difficult in many schools, so BRCK tried to develop an all-in-one solution with an emphasis on durability. The tablets are designed to survive a drop of at least 70 centimeters, and “we haven’t had a report of a single broken screen yet,” says Masika. Other touches—such as color coding the headphones yellow to make them easy to identify when giving instructions—were designed to make the system as hassle-free for teachers as possible.

Masika, who is currently looking for investors and industry partners who can help scale up production and distribution of the Kio Kit, notes that he’d like to see Kio Kits popping up in places like refugee camps along with other emergency infrastructure like tents. In the meantime, BRCK’s Nairobi-based engineers and designers are continuing to improve the system based on user feedback.

An orange-colored air-plane-shaped drone with two propellers and two large flaps. It is standing vertically, supported by fins from its wings and tail.

CeBIT 2016: Wingtra Wants To Be Your Hybrid Drone

Editor's Note: This week IEEE Spectrum is covering CeBIT, the monster information and communications technology show that takes place annually in Hanover, Germany. For up-to-the-second updates, you can follow our CeBIT Ninja, Stephen Cass, on Twitter (@stephencass), or catch daily highlights throughout the week here.

Quadcopters and other helicopter-style drones can take off and land vertically with pinpoint precision, but they aren’t as fuel efficient or as fast as fixed-wing drones. On the other hand, fixed-wing aircraft normally require either catapults or relatively long runways to get up to speed before taking off. A spin-off company from the Autonomous Systems Lab at ETH Zurich is trying to provide the best of both worlds with its eponymous Wingtra drone.

The Wingtra takes off vertically (it’s held upright on the ground by fins projecting from the wings and tail), then levels out into horizontal flight. For landing, the process is generally reversed, but with the assistance of a camera located in the tail. This camera allows the drone to spot a printed target placed on the ground. Once the target is in sight, the Wingtra will autonomously descend to touch down within about 10 centimeters of the bullseye, says Wingtra’s Leopold Flechsenberger.

The battery-powered Wingtra can fly for about an hour, during which time it can travel 60 kilometers. There’s no need for continuous control by the operator, as the Wingtra will follow its flight path autonomously. A removable module can carry different payloads: Those looking to inspect railway lines or survey crops for precision agriculture might choose to equip the drone with a high-resolution LIDAR or camera package, for example. Alternatively, an add-on freight module lets the Wingtra carry up to 0.5 kilograms, which Flechsenberger says might prove invaluable in dispatching medical supplies to rural areas.

The drone was designed with simplicity in mind. There are just five primary components: a set of wings combined with the fuselage to form a single body, plus two propellers and two flaps. The drone doesn’t even have a forward-looking camera (although Flechsenberger says one may be added in later versions). The price has yet to be announced, but as the Wingtra is aimed at professional and institutional users, it’s likely to be considerably more than what one would expect for anything aimed at consumers or prosumers. The system is expected to be commercially available in 2017, says Flechsenberger (who adds that Wingtra is hiring to accommodate its rapid expansion).


AlphaGo Wins Final Game In Match Against Champion Go Player

AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against that country’s Lee Sedol. Sedol is one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1.

Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI’s original touchstone, fell to the machines 19 years ago, but Go had been expected to last for many years to come.

The sweeping victory means far more than the US $1 million prize, which Google’s London-based acquisition, DeepMind, says it will give to charity. That’s because AlphaGo, for all its processing power, mainly owes its victory to a radical new way of using that power: via deep neural networks. These networks can train themselves with only a little intervention from human beings, and DeepMind’s researchers had already demonstrated that they can master a wide range of computer video games. The researchers hope that this generalizability can be carried over to the mastering of practical tasks in many other domains, including medicine and robotics.

Game programming began with chess, using methods first sketched out by Claude Shannon and Alan Turing in the 1940s. A machine calculates every possible continuation for each side, working its way as many moves ahead as it can and so generating a tree of analysis with millions of game positions. It then grades the positions by applying rules of thumb that even beginning chess players know, such as the differing values of the various pieces and the importance of controlling the center of the board. Finally, the algorithm traces its way from those end positions back to the current position to find the move that leads to the best outcome, assuming perfect play on both sides.

With modern hardware, this “brute-force” method can produce a strong chess-playing program. Add a grab-bag of tricks to “prune” the analysis tree, throwing out bad lines so the program can explore promising lines more deeply, and you get world-champion-level play. That came in 1997, when IBM’s Deep Blue supercomputer defeated then-World Chess Champion Garry Kasparov. Today you can download a US $100 program that plays even better—on a laptop.
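The procedure Shannon and Turing sketched—generate the tree, grade the leaf positions, trace the best line back—combined with the standard pruning trick is the classic alpha-beta search. A generic sketch (the game-specific `moves` and `evaluate` functions here are placeholders, not any real engine's code):

```python
def alphabeta(position, depth, alpha, beta, maximizing, moves, evaluate):
    """Depth-limited minimax with alpha-beta pruning.

    `moves(position)` yields (move, child_position) pairs; `evaluate`
    is the rule-of-thumb scoring function applied at the leaves.
    """
    children = list(moves(position))
    if depth == 0 or not children:
        return evaluate(position)
    if maximizing:
        value = float("-inf")
        for _, child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the minimizer will avoid this line
                break
        return value
    value = float("inf")
    for _, child in children:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, moves, evaluate))
        beta = min(beta, value)
        if beta <= alpha:  # prune: the maximizer already has better
            break
    return value

# A two-ply toy game: the maximizer's best guaranteed outcome is 3
tree = {"root": [("a", "A"), ("b", "B")],
        "A": [("x", "A1"), ("y", "A2")],
        "B": [("x", "B1"), ("y", "B2")]}
leaf_scores = {"A1": 3, "A2": 5, "B1": 2, "B2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda p: tree.get(p, []), lambda p: leaf_scores.get(p, 0))
print(best)  # 3
```

The pruning is what makes the brute-force method practical: with good move ordering, whole subtrees are discarded without being graded at all.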

Though some researchers have argued for some time that brute-force searching can in principle conquer Go, the game has long resisted such efforts. Compared to chess, the Chinese game offers far more moves in a given position and far more moves in a typical game, creating an intractably huge tree of analysis. It also lacks reliable rules of thumb for the grading of positions. 

In recent years, many programmers have tried to get around this problem with Monte Carlo simulation, a statistical means of finding the best first move from a vast database of the games that might begin from a given position. That method is also used a bit in AlphaGo, together with the tree-generating methods of yore. But the key improvement is AlphaGo’s use of deep neural networks to recognize patterns.

At a quiet moment, 42 minutes into the streaming of the match’s second game, on 10 March, one of the online commenters, Google’s Thore Graepel, described his first over-the-board encounter with an early form of AlphaGo a year ago—on his first day of work at DeepMind’s London office. “I thought, neural network, how difficult can it be? It cannot even do reading of positions, it just does pattern recognitions,” Graepel said. “I sat down in front of the board, a small crowd gathered round, and in a small time, my position started to deteriorate… I ended up losing, I tried again and lost again. At least at that point the office knew me, I had a good introduction!”

AlphaGo uses two neural networks: a policy network that was trained on millions of master games with the goal of imitating their play, and a value network that tries to assign a winning probability to each given position. That way, the machine can focus its efforts on the most promising continuations. Then comes the tree-searching part, which tries to look many moves ahead.

“One way to think of it is that the policy network provides a guide, suggesting to AlphaGo moves to consider; but AlphaGo can then go on beyond that and come up with a new conclusion that overwhelms the suggestion by the policy network,” explained David Silver, the leader of the AlphaGo team, in online commentary last night, just before the final game. “At every part of the search tree, it’s using the policy network to suggest moves and the value network to evaluate moves. The policy network alone was enough to beat Graepel, an accomplished amateur player, on his first day in the office.”  
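Silver's description—policy priors suggesting moves, the value network grading what the search finds—is typically implemented with a PUCT-style selection rule. A toy illustration (the constant, the candidate moves, and all the numbers are invented for the example; this is the general shape of the rule, not AlphaGo's exact formula):

```python
import math

def puct_score(value_estimate, prior, parent_visits, child_visits, c_puct=1.0):
    """Exploit the value network's estimate; explore in proportion to the
    policy network's prior, discounted as a move gets visited more."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return value_estimate + exploration

# Illustrative candidates: (move, value estimate, policy prior, visit count)
candidates = [
    ("D4",  0.52, 0.30, 40),
    ("Q16", 0.48, 0.45, 10),  # high prior, still lightly explored
    ("K10", 0.55, 0.05, 5),
]
parent_visits = sum(v for _, _, _, v in candidates)
best = max(candidates,
           key=lambda c: puct_score(c[1], c[2], parent_visits, c[3]))
print(best[0])  # "Q16": the policy prior steers the search there next
```

As a move accumulates visits its exploration bonus shrinks, which is how the search can "come up with a new conclusion that overwhelms the suggestion by the policy network."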

A strange consequence of AlphaGo’s division of labor is the way it plays once it thinks it has a clearly winning game. A human player would normally try to win by the largest possible margin, by capturing not just one extra point on the board, but 10 or 20 points, if possible. That way, the human would be likely to win even if he later makes a small mistake. But AlphaGo prefers to win by one point, at what it considers a high probability, over winning, say, by 20 points, at a rather lower probability.

You might think that this tendency to go for the safe-but-slack move is what enabled Lee Sedol to win the fourth game, on Sunday. And indeed, commentators at the time noted that the machine seemed to have the upper hand when Sedol pounced with an unexpected move, after which the machine played some weak moves. Sedol had used up a lot of time on his clock and so had to scramble to make his following moves, but in the end he was able to sustain his advantage and finally win.

However, it wasn’t slackness but sheer surprise that caused the problem, members of the DeepMind team said last night, in commentary before the final game. “That crucial move that Lee found, move 78, was considered very unlikely to be played by a human—[the program] estimated a one in 10,000 chance,” said David Silver, the team leader. “It’s fair to say that AlphaGo was surprised by this move; it had to start replanning at that point.”

A human player faced with a strange-looking move would study it deeply, if there was enough time to do so—and AlphaGo had plenty of time. “But AlphaGo has a simple time-control strategy,” Silver noted. “Maybe that’s something we can work on in future.”

So, it seems, efforts to improve AlphaGo will continue.

“An exciting direction for future research is to consider whether a machine can learn completely by itself, without any human examples, to achieve this level of performance,” Silver said. 

The final goal, of course, is to create an all-around learning machine, one that can learn to do a lot of things that people now get paid to do. Like, say, reporting and writing blog posts like this one. 


Zeptojoule Nanomagnetic Switch Measures Fundamental Limit of Computing

No matter how efficient we make our transistors and memory cells, they will always consume a fixed but tiny amount of energy set by the second law of thermodynamics, a new study suggests. Now the question is how close our real-world devices can get to this fundamental value.

The idea that there might be such a universal limit stems from a 1961 paper by Rolf Landauer of IBM. Landauer postulated that any time a bit of information is erased or reset, heat is released. At room temperature, such an irreversible computation—the sort used in today’s computers—will result in the loss of about 3 × 10⁻²¹ joules, or 3 zeptojoules, of energy.
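The 3-zeptojoule figure is just Landauer's bound, E = kT ln 2, evaluated at room temperature, which is easy to check:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.2e} J")  # about 2.87e-21 J, i.e. ~3 zeptojoules
```

The bound scales linearly with temperature, which is one reason erasure costs matter more in cryogenic computing budgets than the absolute number suggests.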

In 2012, Eric Lutz, now at the University of Erlangen-Nuremberg, and colleagues demonstrated this limit could be reached by using a laser trap to move the physical location of a 2-micrometer-wide glass bead between two potential wells. 

Now a group led by Jeffrey Bokor at the University of California, Berkeley, has shown that this limit also seems to hold for a system that’s of more practical relevance to computing: bits made of nanomagnets. Small magnetic patches are already the staple of hard disks. They also form the basis of the bits inside next-generation nanomagnetic memories like STT-MRAM and are being eyed as a possible form of energy-efficient logic.


DARPA Invites Techies to Turn Off-the-Shelf Products Into Weapons in New ‘Improv’ Challenge

The good news is that some of today’s most advanced technologies are cheap and easy to find, both online and on the shelves of major chain stores. That’s also the bad news, according to DARPA. The defense agency is nervous that criminals and terrorists will turn off-the-shelf products into tools and devices to harm citizens or disrupt American military operations.

On Friday, DARPA announced a new project called “Improv” that invites technologists to propose designs for military applications or weaponry built exclusively from commercial software, open source code, and readily available materials. The program’s goal is to demonstrate how easy it is to transform everyday technology into a system or device that threatens national security.

It may seem counterproductive for a federal defense agency to publicly encourage technicians to invent weapons that are easy to replicate. However, John Main, Improv’s program manager, says that exposing the nation’s vulnerabilities before they are laid bare in an attack is a prudent form of security.

“I think you have to assume that potential adversaries are very smart, and if something can be figured out, it will be figured out,” he says. “We are trying to get there first.”

For 58 years, DARPA has funded defense research and consulted with industry experts about looming threats. But an explosion of technology and innovation has made it easier for adversaries to get their hands on sophisticated instruments and tools. The same agency that invented the Internet, GPS, and stealth planes has struggled to anticipate all the ways existing technology can be repurposed to hinder its operations.

Main says his mission with Improv is to create a massive “red” team of innovators to identify these new risks, following the military’s tradition of hiring independent groups to evaluate infrastructure for efficiencies and readiness.

“DARPA’s in the surprise business and part of our goal is to prevent surprise. This particular space is one that is difficult to analyze and we’re trying a different approach to gathering information that will help us understand it,” he says. “It really is more about being proactive than reactive.”

It’s no secret that basic materials can become deadly once combined. Investigators who entered the home of Tashfeen Malik and Syed Rizwan Farook, who killed 14 people in San Bernardino, Calif., last December, found enough supplies to build 20 pipe bombs—including explosive powder, galvanized pipes, and remote controlled cars.

These days, most Americans also have access to smartphones equipped with GPS, cameras, and advanced accelerometers. An amateur pilot can purchase a basic recreational drone for roughly US $600. A 3-D printer is more expensive, but tinkerers can rent them through a makerspace.

DARPA hopes Improv will help it identify new tech-related threats on the horizon. The new challenge is open to technical professionals including engineers, biologists, and information technologists, as well as skilled hobbyists. Applicants can propose an idea through this website. The agency will provide $40,000 in funding to complete a feasibility study for those it deems most alarming.

Once the feasibility studies are complete, the inventors of the most promising ideas will each receive an additional $70,000 to fashion a prototype. The agency says it will pay special attention to proposals that can move from a concept to a prototype in about 90 days. Then, a few prototypes will enter a final evaluation phase with the help of military labs.

The entire program is scheduled to wrap up by the end of 2016. Main says the results may or may not be made public, but promises that DARPA will use them to hone its research aimed at protecting against future threats. 


Will The Hyperloop Arise—In Slovakia?

The government of the Slovak Republic and Hyperloop Transportation Technologies (HTT), a Southern California startup, are exploring a train that would be faster than any other on Earth.

“Explore” is the restrained word used in the joint press release, which set no deadlines and mentioned no sum of money. Even so, this represents the first solid evidence that anyone might build a large-scale version of the Hyperloop, Elon Musk’s 2013 concept for shooting passenger-carrying pods through partially evacuated tubes at just-barely subsonic speeds.

HTT is building a small, 8-kilometer (5 mile) test track along a highway that runs between Los Angeles and San Francisco. 

The release speculated on several possible routes originating in the capital city of Bratislava, in western Slovakia. One might run the 400 km (250 miles) to Košice, in the eastern part of the country. HTT says the trip would take only 25 minutes at full speed—a bit more than 1,200 km per hour (760 mph). A 70-km westward route to Vienna, Austria, would take 8 minutes.

“Hyperloop in Europe would cut distances substantially and network cities in unprecedented ways,” said Vazil Hudak, Minister of Economy of the Slovak Republic, in the statement. “A transportation system of this kind would redefine the concept of commuting and boost cross-border cooperation in Europe.”

Travel times depend not only on the top speed of the system but also on the g-forces that passengers can comfortably withstand. The stress comes not only at the beginning and end of a trip but also at every bend along the route. HTT puts the maximum force at 1 g, arguing that “the experience is similar to a performance car.”
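Those two numbers—a 1-g limit and a roughly 1,200 km/h top speed—fix a best-case travel time. A straight-line, trapezoidal velocity profile (which ignores the bends that slow a real route, so it comes out a few minutes under HTT's quoted 25) gives roughly:

```python
G = 9.81              # m/s^2: the 1-g comfort limit HTT cites
V_TOP = 1200 / 3.6    # top speed, ~333 m/s
ROUTE = 400_000.0     # Bratislava-Košice distance, m

t_ramp = V_TOP / G                       # time to reach top speed at 1 g
d_ramp = 0.5 * G * t_ramp ** 2           # distance covered while ramping
t_cruise = (ROUTE - 2 * d_ramp) / V_TOP  # remaining distance at top speed
total_minutes = (2 * t_ramp + t_cruise) / 60
print(round(total_minutes, 1))  # ~20.6 minutes
```

Acceleration eats only about 34 seconds and 5.7 km at each end; it's the lateral g-limit at every curve that pushes a real schedule toward the quoted figure.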

Editor’s note: this article has been corrected: Bratislava is indeed west of Košice.


AlphaGo Wins Game One Against World Go Champion

Last night Google’s AI AlphaGo won the first in a five-game series against the world’s best Go player, in Seoul, South Korea. The success comes just five months after a slightly less experienced version of the same program became the first machine to defeat any Go professional by winning five games against the European champion.

This victory was far more impressive though because it came at the expense of Lee Sedol, 33, who has dominated the ancient Chinese game for a decade. The European champion, Fan Hui, is ranked only 663rd in the world.

And the machine, by all accounts, played a noticeably stronger game than it did back in October, evidence that it has learned much since then. Describing their research in the journal Nature, AlphaGo’s programmers insist that it now studies mostly on its own, tuning its deep neural networks by playing millions of games against itself.

The object of Go is to surround and capture territory on a 19-by-19 board; the players alternate placing lozenge-shaped white or black pieces, called stones, on the intersections of the lines. Unlike in chess, the player of the black stones moves first.

The neural networks judge the position, and do so well enough to play a good game. But AlphaGo rises one level further by yoking its networks to a system that generates a “tree” of analysis that represents the many branching possibilities that the game might follow. Because so many moves are possible the branches quickly become an impenetrable thicket, one reason why Go programmers haven’t had the same success as chess programmers when using this “brute force” method alone. Chess has a far lower branching factor than Go.
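The scale of that thicket is easy to put numbers on. Using commonly cited rough figures—about 35 legal moves per chess position over an 80-ply game, versus about 250 moves per Go position over 150 plies—the two trees differ by hundreds of orders of magnitude:

```python
import math

# Commonly cited rough figures: moves per position and plies per game
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

# log10 of the number of leaf positions in a full-depth tree
chess_tree = chess_plies * math.log10(chess_branching)
go_tree = go_plies * math.log10(go_branching)
print(f"chess ~10^{chess_tree:.0f} positions, Go ~10^{go_tree:.0f}")
```

Both totals dwarf the number of atoms in the observable universe (~10^80), which is why neither game is solved by exhaustive search; the gap between them is why chess yielded to pruning heuristics while Go needed something new.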

It seems that AlphaGo’s self-improving capability largely explains its quick rise to world mastery. By contrast, chess programs’ brute-force methods required endless fine-tuning by engineers working together with chess masters. That partly explains why programs took nine years to progress from the first defeat of a grandmaster in a single game, back in 1988, to defeating then World Champion Garry Kasparov, in a six-game match, in 1997.

Even that crowning achievement—garnered with worldwide acclaim by IBM’s Deep Blue machine—came only on the second attempt. The previous year Deep Blue had managed to win only one game in the match—the first. Kasparov then exploited weaknesses he’d spotted in the computer’s game to win three and draw four subsequent games.

Sedol appears to face longer odds of staging a comeback. Unlike Deep Blue, AlphaGo can play numerous games against itself during the 24 hours until Game Two (to be streamed live tonight at 11 pm EST, 4 am GMT). The machine can study ceaselessly, unclouded by worry, ambition, fear, or hope.

Sedol, the king of the Go world, must spend much of his time sleeping—if he can. Uneasy lies the head that wears a crown.


Scientists Flip Switch on Genes With a Magnet

Matching the brain’s machinery to behaviors and emotions was risky business throughout much of medical history. It was achievable, more or less, only through clumsy techniques such as lobotomies. Examiners who removed chunks of the brain could observe the surgery’s effects, but patients had to live with the results.

The rise of optogenetics, in which light in the form of lasers is used to manipulate individual neurons, has improved the situation slightly. But this technique still works only in regions of the brain where it’s easy to shine light. Neuroscientists must physically insert a fiber optic cable to study anything that isn’t easily accessible.

Now a team from the University of Virginia has shown that it’s possible to use a magnet to control neurons embedded deep in the brains of mice. This technique could offer a non-invasive alternative to optogenetics and aid researchers eager to understand the underpinnings of emotions or more clearly identify the origins of cognitive disorders.

By using this technique to flip genes on and off, researchers could trace neural circuits and determine which behaviors or feelings are affiliated with specific pathways in the brain. Ali Guler, a biochemist, led the group that published the results of this proof-of-concept research in the 7 March edition of Nature Neuroscience.

Guler, aware of the limits of other examination methods, wanted to find a way to remotely control neurons. His idea was to create a genetic analog to techniques used to alter cellular functions. If simple adjustments to calcium ion channels can change important processes such as muscle contraction and hormone secretion, he reasoned, why can’t we manipulate hard-to-reach areas of the brain, but with genetic switching as the trigger?

With this strategy in mind, he created a tool that linked the gene for a protein called TRPV4 (which serves as a gatekeeper for ion channels) with a gene for an iron-fixing protein called ferritin. Connecting the two genes in this way enabled his team to tug the ion channels open or push them closed simply by moving the nearby iron with a magnet.

"It's essentially a biological nanomagnet,” Guler says. He dubbed the creation “Magneto,” after the Marvel comic book character capable of generating magnetic fields at will.

In one experiment, the Virginia researchers inserted the specially-designed Magneto genes in a virus, which acted as the transport medium to ferry the magnetic field–susceptible gene product to the striata of six mice. The aim: to see if they could switch ion channels open and closed in a way that might mimic the pleasurable effects of dopamine. The striatum, which processes rewards, is buried beneath the wrinkly bulk of the forebrain and has proven difficult to reach by other methods. If the technique worked, they figured, they could use Magneto in other parts of the brain to mimic different hormones and neurotransmitters. Six other mice formed the control group.

The researchers put all 12 mice into a chamber that was magnetized at one end. Their hypothesis: that the Magneto-carrying mice would scramble for the magnetized side because the open ion channels in their striata would give them a dopamine-like rush of pleasure. Indeed, they found that all six of the Magneto mice preferred to spend their time on the magnetized side of the chamber while all but one of the control mice kept to the non-magnetized end.

When Guler measured the rate at which the mice’s neurons fired, he found that the neurons in Magneto mice at the magnetized end fired more frequently than those in the untreated mice—an effect he would expect to see with true dopamine.

In the future, Guler says, this technique could be used to map neural pathways, tinker with behaviors, and compare neurons in different parts of the brain. “Similar to the optogenetic strategies, you can manipulate any group of neurons that you would like to control,” he says.

If that sounds eerie, rest assured that this power will stay confined to the lab for the time being. The synthetic genes that responded to Guler’s magnet were specially designed and built for this purpose. Magnets would not have the same effect on normal neurons in mice or people.


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
