Tech Talk

How Much Power Will Quantum Computing Need?

Google’s Quantum AI Lab has installed the latest generation of what D-Wave Systems describes as the world’s first commercial quantum computers. Quantum computing could potentially solve certain problems much faster than today’s classical computers while using less power. Yet the energy efficiency of quantum computing remains a mystery.

For now, D-Wave’s machines can scale up the number of quantum bits (qubits) they use without significantly increasing their power requirements. That’s because D-Wave’s quantum computing hardware relies on a specialized design consisting of metal niobium loops that act as superconductors when chilled to a frigid 15 millikelvin (−273 °C). Much of the D-Wave hardware’s power consumption—slightly less than 25 kilowatts for the latest machine—goes toward running the refrigeration unit that keeps the quantum processor cool. The quantum processor itself requires a comparative pittance.
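For a sense of scale, here is a quick back-of-the-envelope calculation of what a machine drawing 25 kilowatts consumes over a year; the round-the-clock duty cycle is our illustrative assumption, not a D-Wave figure:

```python
# Rough annual energy budget for a machine drawing ~25 kW.
# The 25 kW total is the reported figure; running it 24/7 for a
# full year is an illustrative assumption, not a D-Wave number.

TOTAL_POWER_KW = 25.0
HOURS_PER_YEAR = 24 * 365  # continuous operation

annual_energy_mwh = TOTAL_POWER_KW * HOURS_PER_YEAR / 1000
print(f"Annual consumption: {annual_energy_mwh:.0f} MWh")  # ~219 MWh

# Because refrigeration dominates that draw, the cost is roughly
# fixed overhead: adding qubits barely moves the total.
```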

David DiVincenzo on his Tenure at IBM and the Future of Quantum Computing

Theoretical physicist David DiVincenzo is widely viewed as one of the pioneers of quantum computing. He authored a 1996 paper outlining five criteria he predicted would make quantum computing a reality; it has become a de facto roadmap for most of the research in quantum computing since then. In 1998, with Daniel Loss, he proposed using electron spins for storing data as qubits in quantum dots, which might prove to be the best choice for creating a working quantum computer.

In 2010, DiVincenzo was invited by Germany's Alexander von Humboldt Foundation to become Director of the Institute of Theoretical Nanoelectronics at the Peter Grünberg Institute in Jülich, and a professor at the Institute for Quantum Information of RWTH Aachen University. Previously he was a research director at the IBM T.J. Watson Research Center in Yorktown Heights, N.Y.

We met DiVincenzo in his spartan office at the Physikzentrum of RWTH Aachen University, which is located “ten minutes by bicycle” from the Netherlands, where DiVincenzo has made his home.

IEEE Spectrum: You turned to investigating quantum computing while working as a theoretical physicist at IBM. What caught your interest?

DiVincenzo: I became interested around 1993. It was not very much of a field at that time, but it was a field. There were two very eminent IBM scientists who had already been involved for much longer: Rolf Landauer and Charles Bennett. Landauer is remembered for his contributions to the fundamental understanding of computing, asking questions like: What is the minimum amount of energy required to perform a computation?

Landauer was quite important in the original discussions of quantum computing because he provided a skeptical point of view. He thought that the sensitivity to imperfections in matter would be devastating for quantum computing. But he was interested in the concept of error correction that arose at that time and could be applied to quantum computing. And this really turned the story around. Bennett was famous for introducing the ideas of quantum physics into information science and cryptography. In 1993, he worked on what is now known as quantum teleportation.

I was fascinated by these developments, and at that time, IBM was flexible enough that I could just jump in. I started contributing various ideas, and the following year, Shor’s factoring algorithm was discovered. This made it clear that quantum computing could be done.

Spectrum: So, the research culture at IBM was definitely an important factor in your research career.

DiVincenzo: I would say that the research culture at IBM was always distinct, and there was a whole evolution over the decades. For years it thought of itself in relation to Bell Labs: Are we as famous as Bell Labs? That’s the history of the 1970s. I joined the lab in the 1980s; I had many friends who were there from the beginning, and I think I had a feeling for what the culture was like. IBM tried to really build up its research in the 1960s. In that period, they were definitely looking at themselves hoping to be another Bell Labs, which was in its heyday. By the 1980s, I felt they really did not have to worry whether they [were turning out] science of comparable quality to Bell Labs’. The cultures were similar; they would take rather young scientists and immediately give them all the resources of an institute, basically, without any of the responsibilities. This is a fantastic model which has proven to be not so sustainable, at least not in the corporate world. [And beginning in the early 1990s, it wasn’t really sustainable within IBM.]

Spectrum: How did this change affect your work at IBM?

DiVincenzo: IBM had a heavy financial crisis in 1993, its most severe one. There was a moment when the company really questioned its whole business model and whether it should be broken up into smaller companies. IBM undertook a whole sequence of steps, such as getting out of personal computers, and each one made it appear that physics had become less relevant to IBM. The physics department got much smaller that year, and I remained in that smaller department. But we had a fantastic time after that in quantum computing.

Spectrum: So at least the research culture at IBM survived.

DiVincenzo: Here I would say something about the culture of IBM versus Bell. Bell evolved into a very competitive internal culture. People were really knocking against each other. Internal seminars were quite an ordeal because you were subjected to really heavy scrutiny. Internal dealings among scientists at IBM were much more congenial. 

The rest of physics at IBM was suffering. IBM still has a physics department, but at this point almost every physicist is somehow linked to a product plan or customer plan. At IBM, right from the beginning, there was always hope that these physicists who were dreaming up interesting things could actually contribute. Research became more directed as time went on.

Spectrum: It has now been five years since you joined two German research institutions. How would you compare the life of a researcher in Germany, a country known for its emphasis on academic freedom since the 19th century, with the life you knew at IBM? Do researchers in Germany have the same freedom that researchers had at IBM during the 1960s?

DiVincenzo: No. But I think they are freer and have more flexibility than what you find in U.S. academic culture. Of course, in the U.S., there is heavy attention given to third-party funding. In Germany, this is less true. If you have a chair, you actually do have fixed resources that go with the chair, which is not the case in the U.S. But that is typically not enough to do any major project, and it shrinks over time, so you should connect yourself with some third-party funding. However, there is a pretty strong long-term consensus here that we don’t tinker with science funding too much.

Spectrum: Basically, future quantum computers might be based on qubits of two types: atoms or ions suspended by laser beams, and ions or electrons trapped in matter, such as in defects or in quantum dots. Which will prevail?

DiVincenzo: We’re close enough to the quantum computer that we can kind of foresee its complexity in a classical sense—that is, how much instrumentation is required for this to work as a quantum computer. I think that systems that involve lasers add a really big jump in complexity because they would require a laser system for every qubit. Say we need a million qubits; we will have a system with a complexity well beyond anything that has ever been done in optical science.

Now, for example, with quantum dots there are no lasers in sight. Everything is done with electronics and at gigahertz frequencies. You will need controller devices containing transistors that work at 4 kelvin. That is an interesting challenge. It turns out that some conventional transistors, right out of the foundry, do work at 4 kelvin. This becomes the beginning of some real scientific research: How do you make a transistor that really functions in a good way at 4 kelvin without dissipating too much energy? Energy dissipated in the instrumentation close to the quantum system at that temperature could be one of a whole set of challenges that will not be solved easily. My own personal view is that we’re a decade or so away from some really new machines that will at least partially fulfill what we’ve been thinking about since 1993.

Cybersecurity System IDs Malware Hidden in Short Twitter Links

Twitter and Facebook users can all too easily get a computer virus when they click on malware links shared by unsuspecting friends. To identify such malicious links on social media, UK researchers have developed a system that recognizes potential cyber attacks within seconds of clicking on a shortened Twitter link.

Goodbye MagStripe, Hello Chip Cards

Developed at IBM in the 1960s, rolled out in the 70s, caught on globally in the 80s, ubiquitous in the 90s, and now stepping aside—the magnetic stripe card has had a brilliant career.

But now the magnetic stripe card is going into forced retirement, replaced by the chip card after too many security breaches cost banks and retailers far too much money.

You’ve probably already received chip card replacements for the mag stripe cards in your wallet; if not, you will soon. And retailers have been busily replacing their card readers. If they haven’t, then as of today, 1 October, card-issuing banks will no longer eat the costs of fraud; the liability falls on the merchant.

Magnetic stripe cards are juicy targets for criminals. The stripe encodes information about the customer; if criminals capture that information, by skimming it with an altered card reader or by hacking into a retailer’s network, they can use it to make counterfeit cards or sell it online. The fake cards will work until the breach is detected and the card numbers are changed.

Chip cards are a safer alternative: they communicate with card readers to create unique data every time they are used. Even if a hacker somehow managed to grab that data and create a counterfeit card, it would be useless for future transactions. And chip cards can be used in combination with a PIN, instead of just a signature, making it harder even for thieves who steal physical cards to use them.
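To see why unique per-transaction data defeats counterfeiting, consider the minimal Python sketch below. HMAC stands in for the card’s cryptogram algorithm, and the key and field names are invented for illustration; the real EMV protocol differs in its details:

```python
import hmac, hashlib, os

# A mag stripe holds static data: steal it once, replay it forever.
stolen_track_data = "4111111111111111|2027-12|123"  # same every swipe

# A chip card instead signs each transaction with a secret key that
# never leaves the card. HMAC stands in for the EMV cryptogram
# algorithm; the fields below are illustrative, not the real spec.
card_secret_key = os.urandom(16)  # provisioned by the issuer

def transaction_cryptogram(amount_cents: int, counter: int, nonce: bytes) -> str:
    """Unique authorization code bound to one specific transaction."""
    message = f"{amount_cents}|{counter}|{nonce.hex()}".encode()
    return hmac.new(card_secret_key, message, hashlib.sha256).hexdigest()

# Each purchase yields a different code, tied to a one-time terminal
# nonce and an incrementing counter...
first = transaction_cryptogram(4999, counter=41, nonce=os.urandom(8))
second = transaction_cryptogram(4999, counter=42, nonce=os.urandom(8))
assert first != second  # ...so a captured code is useless for replay
print(first[:16], second[:16])
```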

Ironically, the biggest complaint about chip cards, which have to be inserted into a card reader and left there until the payment is complete, is that they are causing the same problem that mag-stripe technology was created to solve: lines that back up when a lot of transactions need to be handled in a short period of time. Back in 1967, airlines had ordered the first wide-body aircraft and saw that they’d soon be dealing with far more customers arriving at check-in counters than they were used to. And retailers were finding that more and more customers were paying with credit cards, making the hand-filled charge slip more and more of a burden. They needed a technology to make identifying customers and making payments go faster.

You’ve likely already noticed that chip card transactions take longer than swipes did. Part of it is unfamiliarity, as you fumble to figure out where to put the card into the reader and perhaps have to restart a transaction you interrupted by removing the card too soon. But part of it is the technology: it takes a little longer for the system to verify the transaction with the issuing bank and to create the unique transaction code than it did to complete a magnetic stripe transaction.

This is one reason Jerome Svigals, who led the development of magnetic stripe technology at IBM, predicted that chip cards won’t have anything near the long career of their predecessor, and will soon be supplanted by payments triggered from mobile phones. Indeed, many of the new readers retailers have installed to handle chip cards can also communicate with smartphones.

For more on the story behind the development of the magnetic stripe card and the reasons for its dominance for so many decades, see “The Long Life and Imminent Death of the Mag-Stripe Card” and “The Mag-Stripe Era Ends.”

Neural Implant Enables Paralyzed ALS Patient to Type Six Words per Minute

Typing six words per minute may not sound very impressive. But for paralyzed people typing via a brain-computer interface (BCI), it’s a new world record. 

To pull off this feat, two paralyzed people used prosthetics implanted in their brains to control computer cursors with unprecedented accuracy and speed. The experiment, reported today in Nature Medicine, was the latest from a team testing a neural system called BrainGate2. While this implant is only approved for experiments right now, researchers say this demonstration proves that such technology can be truly useful to quadriplegics, and points the way toward regular at-home use.

The two people who volunteered for this study have amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, a degenerative neural disorder that leads to complete paralysis. Lead researcher Jaimie Henderson, co-director of Stanford’s Neural Prosthetics Translational Lab, calls it a “humbling experience” to work with quadriplegic patients who willingly undergo brain surgery and devote themselves to science experiments that will push forward this early-stage technology. “They’ve become true partners with us in this endeavor,” Henderson says. 

The BrainGate2 system consists of an array of minuscule electrodes implanted, in this case, in a region of the motor cortex known as the “hand knob.” The electrodes record the patterns of electrical activity in the neurons there, which fire when the person either moves or imagines moving their hand. The BrainGate2 system also includes decoding software, which turns a messy signal into a clear command for an external device—in this case, a computer cursor. Other experiments have used BCIs to control robotic arms, and they could theoretically be used to control wheelchairs, cars, or anything else that can be moved by remote control. 

In this study’s first task, the participants repeatedly moved their cursors to targets on a computer screen, which they accomplished by imagining their index fingers moving on computer trackpads. They each averaged about 2.5 seconds per target. This is a significant improvement over a previous BrainGate2 trial, in which a different patient performed the same task but averaged 8.5 seconds per target.

The improvement, Henderson says, came from four factors. 

1) The system architecture provided faster processing than before. With a lag time of only about 20 milliseconds between the user’s thought and the cursor’s action, the participants got useful feedback while doing the task. 

2) Signal processing filters carefully extracted the neural signals from the ambient electromagnetic noise—a necessity, as these experiments were conducted in the volunteers’ homes. 

3) The imagined motion that the participants ultimately used to control the cursor (an index finger moving on a trackpad) provided a clearer neural signal than other imagined motions they tried out (whole arm and wrist movements).

4) Perhaps most importantly, an improved decoding algorithm was better able to translate neural signals into intended movements. Essentially, it was better able to identify the direction in which the user intended to steer the cursor, and could therefore correct for deviations in the neural signal that would otherwise have steered the cursor off track. (A rough sketch of this kind of decoder follows below.)
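The study’s actual decoder isn’t reproduced here, but the general idea, mapping neural firing rates to cursor velocities with a learned linear model, can be sketched in a few lines of Python. The channel count echoes typical BrainGate-style arrays; all signals and weights are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 96 electrode channels, decoded in 20 ms bins.
# All numbers are invented for illustration, not from the study.
N_CHANNELS = 96
true_mapping = rng.normal(size=(2, N_CHANNELS))  # rates -> (vx, vy)

# "Calibration": regress intended cursor velocity onto firing rates.
rates = rng.poisson(lam=5.0, size=(1000, N_CHANNELS)).astype(float)
velocities = rates @ true_mapping.T + rng.normal(scale=0.5, size=(1000, 2))
weights, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

# Online use: each 20 ms bin of spike counts becomes a velocity
# command, nudging the cursor toward the intended target.
cursor = np.zeros(2)
for _ in range(50):
    bin_rates = rng.poisson(lam=5.0, size=N_CHANNELS).astype(float)
    cursor += 0.02 * (bin_rates @ weights)  # integrate over the 20 ms bin
print("final cursor position:", cursor)
```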

But what about the typing, you ask? For that task, the participants used the same imagined finger movement to pick out letters in a text-entering program called Dasher. With this interface, once the user selects a letter, the program predicts which letters are likely to come next and makes them easier to select, speeding up the construction of words. 
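Dasher’s core trick can be sketched in a few lines: give each candidate next letter an amount of selection area proportional to a language model’s probability for it. The bigram probabilities below are invented for illustration:

```python
# Toy version of Dasher's core idea: letters the language model rates
# as more likely get proportionally more selection area, so probable
# letters are easier (faster) to steer into.
# These bigram probabilities are made up for illustration.
bigram = {
    "q": {"u": 0.97, "a": 0.02, "i": 0.01},
    "t": {"h": 0.50, "o": 0.20, "e": 0.15, "i": 0.15},
}

def layout(prev_char: str, screen_height: float = 1.0):
    """Assign each candidate next letter a vertical slice of the
    screen proportional to its predicted probability."""
    probs = bigram.get(prev_char, {})
    top, slices = 0.0, {}
    for letter, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        slices[letter] = (top, top + p * screen_height)
        top += p * screen_height
    return slices

# After typing "q", the letter "u" occupies 97% of the column --
# almost impossible to miss, which is what speeds typing up.
print(layout("q"))
```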

One of the participants typed 115 words in 19 minutes, or about 6 words per minute. That user had previous experience with the Dasher interface using a different control method, but it’s still a pretty impressive result. While this participant is still able to talk naturally, such a communication method could benefit people who have lost the ability to control their mouth muscles, such as people with more advanced ALS or “locked-in” patients. 

Henderson and his colleagues have previously surveyed people with paralysis to see whether they’d be eager to adopt BCI technologies in their everyday lives, and what capabilities they’d hope to gain from such gear. High on the wish-list was the ability to communicate easily through fast typing, which the survey defined as 40 words per minute.

Henderson says he has plenty of ideas for how to reach that ambitious target. A future study might make use of electrodes implanted in a region of the brain that encodes a person’s intentions to move, before they actually make a motion. “We want to see if using those signals from the planning part of the brain helps improve performance,” he says.

It’s not clear what level of performance will be required before an implanted BCI device is considered ready for domestic use. But Henderson thinks the BrainGate2 system is well on the way: “We think we’re making very good progress,” he says.   

Hajj Pilgrimage Safety Challenges Crowd Simulator Technology

A crowd stampede that caused at least 769 deaths at the annual Muslim pilgrimage to Mecca provided a stark reminder of the safety challenges at the most congested public space in the world. The millions of pilgrims who visit Mecca and other holy sites in Saudi Arabia each year often crowd into spaces as densely packed as six people per square meter. Such crowd densities present a huge challenge for both public safety authorities and computer simulation software designed to help model and predict crowd disasters.

Oxynitride Thin-film Transistors: Faster Screens with Faster Electrons

Currently, LCD HDTV screens operate according to ATSC standards, with 1080 gate lines and refresh rates up to 30 frames per second. Going to higher definition, such as 4000 or 8000 lines and refresh rates up to 240 frames per second, is currently out of reach for both OLED and LCD screens, because the pixels are driven by thin-film transistors that are simply too slow. Typically, their electron mobility is below 20 square centimeters per volt-second (cm²/V·s). “There is not much time to supply the voltage or the current through the transistor for a high frame rate and high definition display,” says Sanghun Jeon, a researcher at Korea University in Sejong. “We estimate that for future display technology, the mobility should be exceeding 100 cm²/V·s.” Jeon and other researchers at Korea University and at the Samsung Advanced Institute of Technology in Gyeonggi-do, Korea, report this week in Applied Physics Letters the creation of just such a transistor. The one they developed has an electron mobility of 138 cm²/V·s.
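To see why mobility matters, consider the time budget for charging a row of pixels: each frame must scan every gate line in sequence, so the per-line window shrinks with both line count and refresh rate. The quick calculation below is a simplification that ignores blanking intervals and other overhead:

```python
# Time budget for charging one row of pixels: each frame must scan
# every gate line, so the per-line window shrinks with both line
# count and refresh rate. (Simplified: blanking intervals ignored.)

def line_time_us(gate_lines: int, frames_per_second: int) -> float:
    return 1e6 / (gate_lines * frames_per_second)

for lines, fps in [(1080, 30), (4000, 120), (8000, 240)]:
    print(f"{lines:>5} lines @ {fps:>3} Hz -> "
          f"{line_time_us(lines, fps):6.2f} µs per line")

# 1080 lines at 30 Hz leaves ~30.9 µs per line; 8000 lines at 240 Hz
# leaves only ~0.52 µs -- roughly 60x less time for the thin-film
# transistor to charge the pixel, hence the push for much higher
# electron mobility.
```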

The transistor consists of a thin film of zinc oxynitride (ZnON), a glassy composite of ZnO, ZnOxNy, and Zn3N2. The researchers formed a film 50 nanometers thick by sputtering a zinc target in a mixture of nitrogen, oxygen, and argon gases. However, because nitrogen bonds more weakly with zinc than oxygen does, the layer was susceptible to oxidation when exposed to air. To prevent this, the researchers treated the layer with a high-energy plasma of argon ions. When these ions collided with the atoms in the ZnON, they rearranged the bonds between atoms, just as shaking a box of colored marbles redistributes the colors. The process resulted in a chemically uniform layer that is more resistant to chemical degradation.

The researchers compared scanning transmission electron microscope (STEM) images of an annealed thin film made by current production processes with images of their new experimental thin film. “The STEM image of argon-plasma-treated ZnON material shows uniform contrast throughout the entire film, indicating that the composition of argon-plasma-treated ZnON film is very uniform,” says Jeon. And it is this uniformity that is at the root of the high electron mobility in the ZnON film, explains Jeon, who notes that defects at the interfaces and in the bulk of such semiconductors slow electrons down considerably.

The argon plasma process augurs well for future large-scale production, especially because ZnON is known to have poor reproducibility. “We were surprised that by employing the argon process, we were able to reproduce the device well, together with a high mobility constantly exceeding 100 cm²/V·s,” says Jeon. However, more work is required before the thin-film transistors will control pixels on actual screens. Getting to that point will take two or three more years, says Jeon.

Navy Diversifies Ships' Cyber Systems to Foil Hackers

In the future, cyber attacks could prove just as deadly to technologically advanced warships as missiles and torpedoes. That is why the U.S. Navy has been developing a defense system to protect its ships against hackers who threaten to disable or take control of critical shipboard systems.

The Resilient Hull, Mechanical, and Electrical Security (RHIMES) system aims to prevent cyber attackers from compromising the programmable logic controllers that connect a ship’s computers with onboard physical systems. RHIMES uses slightly different versions of core programming for each physical controller so that a cyber attack can’t disable or take over all shipboard systems in one fell swoop.
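The Navy hasn’t published RHIMES’s internals, but the payoff of diversity is easy to illustrate with a toy simulation: give each controller an independently randomized layout, and an exploit hard-coded against one controller fails on the rest. Everything below is invented for illustration:

```python
import random

# Toy illustration of defense-by-diversity (not RHIMES itself, whose
# internals aren't public): each controller gets an independently
# randomized "layout", so an exploit hard-coded for one controller's
# layout fails on the others.

N_CONTROLLERS = 8
LAYOUT_SPACE = 4096  # possible buffer offsets in this toy model

# Build each controller's firmware variant with its own random offset.
controllers = [random.randrange(LAYOUT_SPACE) for _ in range(N_CONTROLLERS)]

# The attacker compromises controller 0 and learns its exact offset...
exploit_offset = controllers[0]

# ...but the same exploit only works where the offset happens to match.
compromised = sum(1 for offset in controllers if offset == exploit_offset)
print(f"{compromised} of {N_CONTROLLERS} controllers fall to one exploit")
```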

“In the event of a cyber attack, RHIMES makes it so that a different hack is required to exploit each controller,” said Ryan Craven, a program officer of the Cyber Security and Complex Software Systems Program in the Office of Naval Research, in a press release. “The same exact exploit can’t be used against more than one controller.”

That seemingly basic precaution could go a long way toward protecting crucial warship systems such as damage control and firefighting, electric power, and steering and engine control. The loss of one or more such systems could prove especially devastating in the middle of a naval operation or battle, particularly if hackers turn the ship’s systems against it.

The threat of cyber attacks crippling or taking over large physical systems has already been demonstrated in recent years. Stuxnet, the “computer worm” developed by the United States and Israel, attacked Iran’s nuclear program by compromising the physical controllers of Iranian centrifuges and running them at high speeds to damage the equipment. (A similar Stuxnet-style effort aimed at North Korea’s nuclear program failed because it couldn’t access the crucial computers.)

“Another powerful example is the hacking of a German steel mill in 2014,” Craven explained. “The hackers reportedly got in and overheated a blast furnace, and even made it so that the plant workers couldn’t properly shut down the furnace, causing massive damage to the system.”

The Navy’s RHIMES approach to cybersecurity could also pay off beyond warships. A similar strategy might help safeguard the physical controllers found in cars, aircraft, and factories. It could work in tandem with complementary defenses such as air-gapping systems from networks or adding analog systems and human operators into the loop as safeguards.

3-D Printing Software Turns Heart Scans into Surgical Models

A new 3-D printing system can transform medical scans of a patient’s heart into physical models that help surgeons plan operations. The efficient system relies on a computer algorithm that requires just a pinch of human guidance to figure out a patient’s heart structure from MRI scans.

U.S. 'Master Clock' Keepers Test Terrestrial Alternative to GPS

GPS technology can do much more than guide drivers and smartphone users on unfamiliar streets. The Global Positioning System’s satellites carry expensive atomic clocks that also provide synchronized timekeeping for cell phone networks, major financial institutions, and power grids across the world. But a report by the U.S. government’s “master clock” keepers finds that a ground-based “TimeLoc” technology can provide even better timekeeping accuracy within crowded cities and indoor spaces—places where GPS signals have trouble reaching.
