An international team of researchers has developed a photovoltaic cell based on a combination of magnetic electrodes and C60 fullerenes (sometimes referred to as buckyballs) that increases the photovoltaic efficiency of their device by 14 percent over photovoltaics using ordinary materials and architecture.
In research described in the journal Science, scientists from China, Germany, and Spain have taken spin valves—devices based on giant magnetoresistance and used in magnetic memory and sensors—and combined them with photovoltaic materials. The result offers a new way for solar cells to convert light into electricity.
“The device is simply a photovoltaic cell,” says Luis Hueso, research professor and leader of the Nanodevices Group at CIC nanoGUNE in Spain, in an e-mail interview with IEEE Spectrum. “However, we are using magnetic electrodes (cobalt and nickel-iron) rather than standard indium tin oxide (ITO) and aluminum as commonly used in organic photovoltaics.” The magnetic electrodes provide electrons with a certain orientation of their spin, creating what’s called a spin-polarized current. Using these electrodes increased the photovoltaic efficiency by 14 percent compared to using ordinary electrodes, he says.
Silicon has been the mainstay of chips for much of their history (a history you can explore in IEEE Spectrum’s Chip Hall of Fame). This is in large part because silicon possesses a “Goldilocks” band gap of 1.1 electron volts (eV), which makes it possible to operate integrated circuits at a low voltage, leading to reduced current leakage.
Another key feature of silicon is that it can be used to make a convenient “native” insulator, in the form of silicon oxide. Silicon oxide managed to serve as an insulator for silicon circuits for many generations of chips, isolating components and reducing gate leakage currents, until high-K dielectrics took over the job a decade ago.
Now researchers at Stanford University and SLAC National Accelerator Laboratory have found that some of the most sought-after high-K materials—namely hafnium selenide (HfSe2) and zirconium selenide (ZrSe2)—possess the same Goldilocks band gap seen in silicon when they are thinned down to two-dimensional (2D) materials. As a result, the Stanford researchers have discovered a 2D version of the handy silicon/silicon dioxide combination that enabled generations of chip designs. But in this case, the combination can be shrunk to one-tenth the size.
Researchers have built a true random number generator that they say could improve the security of printed and flexible electronics. They made it from a static random-access memory cell printed with a special ink containing carbon nanotubes. The memory cell uses fluctuations in thermal noise to generate random bits.
Generating random numbers within an electronic device is critically important because random numbers are the basis for encryption keys that keep personal devices secure. Many electronics contain hardware components designed for this exact purpose.
It’s also possible to generate random numbers through software. But software-based random number generators are considered “pseudorandom.” They start with an original number, or seed, and apply a mathematical equation to generate a string. The resulting pattern is not entirely random, and hackers can replicate it if they figure out the seed.
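The seed-and-equation recipe can be sketched with a classic linear congruential generator. This is a minimal illustration, not the scheme any real secure system uses; the constants are one common textbook choice:

```python
def lcg_bits(seed, n):
    """Minimal linear congruential generator: the 'mathematical
    equation' applied repeatedly to a seed. Any fixed (a, c, m)
    works the same way; these are common textbook constants."""
    a, c, m = 1664525, 1013904223, 2**32
    state = seed
    bits = []
    for _ in range(n):
        state = (a * state + c) % m
        bits.append(state >> 31)  # take the top bit of each state
    return bits

# The same seed always reproduces the same "random" string --
# exactly why an attacker who learns the seed can replicate it.
print(lcg_bits(42, 8) == lcg_bits(42, 8))  # True
```

The determinism that makes the sequence reproducible for the legitimate user is precisely what makes it merely pseudorandom to an attacker.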
Hardware-based “true” random number generators are therefore considered the gold standard for security, but they can be bulky, rigid, and expensive to manufacture. Oftentimes, they rely on lasers and photon detectors to produce random bits based on physical phenomena that cannot be predicted. That means they’re not a great fit for flexible and printed electronics, which lag behind other gadgets in security.
Mark Hersam, an expert in nanomaterials at Northwestern University, says the new carbon nanotube generator that his team built could be integrated into wearables, tiny sensors and tags, disposable labels on products like milk cartons, or even smart clothing. It could also be printed directly onto packaging with standard inkjet printers to encrypt data or validate that products have not been tampered with.
To develop their generator, Hersam’s group started with semiconducting single-walled carbon nanotubes, which became an early crowd favorite, along with graphene, in scientists’ ongoing search for novel semiconducting materials. Silicon transistors are approaching the end of their charmed existence under Moore’s Law; one recent report suggests that end could come as soon as 2021.
Hersam’s group used a nanotube solution—a type of ink that contains high-purity semiconducting carbon nanotubes—to create a static random-access memory (SRAM) cell that generates random bits. Printing SRAM cells with nanotube ink (which one company boldly markets as “Nink”) is a relatively inexpensive process that Hersam says could turn out large volumes for consumer electronics.
However, Mario Stipcevic, a researcher at Ruđer Bošković Institute in Croatia, points out that quantum random number generators, which he has worked on in the past, can already fit into an area as small as 20 by 20 micrometers, which is tiny enough to sew into clothing or squeeze into a connected device.
Once Hersam’s team had printed their SRAM cell, they needed to actually generate a string of random bits with it. To do this, they exploited a pair of inverters found in every SRAM cell. During normal operation, an inverter’s job is to flip any input to its opposite: 0 to 1, or 1 to 0.
Typically, two inverters are lined up so the results of the first inverter are fed into the second. So, if the first inverter flips a 0 into a 1, the second inverter would take that result and flip it back into a 0. To manipulate this process, Hersam’s group shut off power to the inverters and applied external voltages to force the inverters to both record 1s.
Then, as soon as the SRAM cell was powered again and the external voltages were turned off, one inverter randomly switched its digit to be opposite its twin again. “In other words, we put [the inverter] in a state where it's going to want to flip to either a 1 or 0,” Hersam says.
Under these conditions, Hersam’s group had no control over the actual nature of this switch, such as which inverter would flip, and whether that inverter would represent a 1 or a 0 when it did. Those factors hinged on a phenomenon thought to be truly random—fluctuations in thermal noise, which is a type of atomic jitter intrinsic to circuits.
There’s no known way to predict this noise, and the amount and type of noise at a particular moment determines which bit the inverters will spit out. The first bit that results from this process then becomes the first number in a sequence. “If we keep resetting the cell and have the thermal noise force it to take a stand, the series of bits that come out will be a random strand of 1s and 0s,” Hersam says. His team recently described their work in the journal Nano Letters.
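As a toy caricature (not the team’s device physics), the reset-and-power-up cycle can be modeled as a bistable element whose resolution is decided by the sign of a simulated thermal-noise sample:

```python
import random

def powerup_bit():
    """Toy model of an SRAM cell released from a forced 1/1 state.

    On power-up, the cross-coupled inverters race to a stable state
    (one side at 1, the other at 0); here a Gaussian 'thermal noise'
    sample stands in for the physics that decides which side wins.
    """
    noise = random.gauss(0.0, 1.0)  # stand-in for thermal noise
    return 1 if noise > 0 else 0

# Repeatedly reset and power up the cell to build a bit string.
bits = [powerup_bit() for _ in range(1000)]
print(sum(bits))  # roughly 500 ones if the cell is unbiased
```

In the real device, of course, the noise comes from the circuit itself rather than from a software random-number call; the sketch only illustrates how repeated metastable resolutions become a bit stream.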
Using this method, which is inspired by the technique that Intel uses to generate random numbers in silicon, it’s possible to generate a string of random bits from a single cell, or run multiple cells in parallel to produce a string more rapidly. Hersam’s team did not optimize for speed in their trials, so the SRAM cell they printed generated only a few bits per second.
To Stipcevic, that speed (or lack thereof) is a major problem. But Hersam is confident his team can hasten the process. “I think a million-fold improvement is there for the taking once we've actually optimized for that parameter,” he says.
Overall, his group has produced 61,411 bits with its generator. To evaluate its randomness, they divided that stream into 56 smaller sequences of 1,096 bits, and put those sequences through statistical tests created by the National Institute of Standards and Technology (NIST) to determine true randomness. The generator passed nine of those tests.
Stipcevic points out that NIST offers a total of 15 randomness tests, and that many researchers use strings with a million or more bits to complete them. Hersam says his team consulted with NIST representatives to choose the tests that would deliver meaningful results based on the number of bits they’d collected.
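The simplest member of the NIST SP 800-22 suite, the frequency (monobit) test, checks whether the 1s and 0s in a sequence are statistically balanced. A minimal sketch:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Converts bits to +/-1, sums them, and computes a p-value via the
    complementary error function; p >= 0.01 counts as a pass.
    """
    n = len(bits)
    s = abs(sum(2 * b - 1 for b in bits))
    return math.erfc(s / math.sqrt(n) / math.sqrt(2))

# A balanced 1,096-bit sequence (alternating 0s and 1s) passes the
# frequency test -- though it would fail the runs test badly.
bits = [i % 2 for i in range(1096)]
print(monobit_test(bits) >= 0.01)  # True
```

The alternating example also shows why a single test is not enough: a sequence can be perfectly balanced yet glaringly non-random, which is why the suite layers many complementary tests.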
Stipcevic raises other concerns with Hersam’s work. He says the research group can’t really say whether thermal noise is truly random, or if the nanotube generator would operate successfully if repeatedly bent in a flexible device, such as a wearable. And Stipcevic says it would be important for the group to guarantee that all charge was removed from the inverters between each run to avoid creating a nonrandom pattern.
Hersam says his team avoided any leftover charge by increasing the time that each SRAM cell spent in its resetting phase and adds that, “the measurement and quantification of thermal noise as a source of randomness is a well-documented phenomenon.”
The idea of substituting photons for electrons in computing has led to a variety of approaches for achieving the promise of speed-of-light computing. Few of these schemes, however, have involved devices in which electronic currents are switched and amplified by light alone, without the need for an electronic gate.
Now a team of researchers at Korea University has jumped into this largely untouched field with a nanowire-based transistor in which photons control nanowire logic gates. The researchers—who have dubbed their device a photon-triggered nanowire transistor (PTNT)—believe that their results show a way forward in using photons in logic gates, leading to ultracompact nanoprocessors and nanoscale photodetectors for high-resolution imaging.
The process for forming superlattices, which are structures made of aligned, alternating layers of nanomaterials, has been around for decades. It has typically taken days or weeks for them to self-assemble into their final structures. However, last year, researchers at SLAC National Accelerator Laboratory and Stanford observed that these structures could form much more rapidly.
Just before I left the JCAP facilities at Berkeley Labs, I was ferried over to meet with Haimei Zheng, a staff scientist in Berkeley Lab's Materials Sciences Division. Zheng claimed that she and her colleagues had completed—but had not yet published—research in which they had managed to crack the big problem of byproduct selectivity in CO2 reduction. The main issue with carbon dioxide reduction is that it usually produces a soup of different products when what you really want is a specific fuel, like ethanol.
Now the research she told me about has been published and is described in the journal Science Advances. The team’s work promises 100-percent selectivity in carbon monoxide production.
Researchers at the U.S. Department of Energy’s SLAC National Accelerator Laboratory have leveraged a long-used microscopy tool to investigate why perovskites, the “wonder materials” in photovoltaics, have proven to be so efficient at converting light into electricity. These studies have revealed for the first time that light essentially spins the atoms inside of perovskites into a whirl, offering new clues into how scientists can make these materials even more efficient at converting light into electricity.
In research described in the journal Science Advances, the SLAC scientists employed the decades-old microscopy technique known as ultrafast electron diffraction (UED) to create a kind of movie of how atoms in perovskites respond within a trillionth of a second of being hit by short pulses of light.
Last month, IEEE Spectrum ran a special report focusing on the question “Can We Copy the Brain?” The report offered a thorough examination of ongoing efforts to duplicate the human brain in both hardware and software.
Among the areas covered were neuromorphic chips that mimic the neurons in the brain. According to leading practitioners in the field, neuromorphic systems do exist today, but they remain far from the point where they can outperform more traditional computing schemes.
Now an international team of scientists from France, the United States and Japan has zeroed in on the non-linear oscillations of human neurons that they believe will bring the capabilities of artificial neurons much closer to the ones in our heads. The results, they say, could lead to miniature neuromorphic chips capable of learning and adapting to a range of applications.
Non-linear oscillators transform a constant input into an oscillation. For example, pendulum clocks are non-linear oscillators. Neurons are also non-linear oscillators: if you excite them with a constant current, they will emit voltage spikes periodically.
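That spiking behavior can be sketched with a leaky integrate-and-fire model, a standard textbook caricature of a neuron; the parameters below are arbitrary illustration values:

```python
def lif_spike_times(i_const, tau=20.0, v_th=1.0, dt=0.1, t_max=200.0):
    """Leaky integrate-and-fire neuron: a minimal non-linear oscillator.

    A constant input current charges the membrane voltage; each time
    the voltage crosses threshold it is reset, emitting a 'spike'.
    The result is a periodic spike train -- a constant input turned
    into an oscillation.
    """
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v / tau + i_const)  # leaky integration
        if v >= v_th:                   # threshold crossing
            spikes.append(round(t, 1))
            v = 0.0                     # reset after the spike
        t += dt
    return spikes

spikes = lif_spike_times(i_const=0.1)
intervals = [round(b - a, 1) for a, b in zip(spikes, spikes[1:])]
print(len(set(intervals)) == 1)  # evenly spaced spikes: True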
In research described in journalNature, researchers at the National Centers of Scientific Research and Thales (CNRS-Thales) in France together with scientists at the U.S. National Institute of Standards and Technology (NIST) and in Japan at the National Institute of Advanced Industrial Science and Technology (AIST) looked at the problem of shrinking artificial non-linear oscillators down to the point where 108 oscillators could fit onto a two-dimensional array inside a chip the size of a thumb.
While nanoscale devices would seem to be the obvious choice, these devices come with their own set of problems: They create a lot of noise and lack the stability that is a prerequisite for processing data reliably. Yet they spurned other suggested approaches based on memristive or superconducting devices. The scientists opted for a nanoscale spintronic oscillator, made up of magnetic tunnel junctions which form the backbone of read heads in giant magnetoresistance (GMR) hard disk drives.
“Magnetic oscillators have very stable properties compared with memristive oscillators,” said Julie Grollier, a research director at CNRS and co-author of the research paper, in an e-mail with IEEE Spectrum. “This is due to their cyclability. A magnetic tunnel junction has almost infinite endurance, whereas a memristor starts degrading after a million cycles.”
Grollier added that magnetic oscillations are much easier to measure than superconductive pulses. They occur at room temperature, and the emitted voltage is typically 100 millivolts, which is orders of magnitude larger than those of Josephson junctions, which are sandwiches of superconducting material and insulator.
The nanoscale spintronic oscillators are pillars composed of two ferromagnetic layers separated by a non-magnetic spacer. When charge currents flow through these junctions, they become spin-polarized and generate torque on the magnetizations. This leads to a phenomenon known as magnetization precession, which occurs when atoms with unpaired electron spins are placed in a magnetic field and rotate around the magnetic field at a precise frequency. This sustained magnetization precession produces frequencies from hundreds of megahertz to several tens of gigahertz. The magnetization oscillations are then converted into voltage oscillations through magnetoresistance. The resulting radio-frequency oscillations, of up to tens of millivolts, can be detected by measuring the voltage across the junction.
“We used our magnetic nano-neuron to emulate a full network of 400 neurons thanks to a strategy called time multiplexing,” said Grollier. “The magnetic pillar plays the role of each neuron one after the other, just like an actor who plays all the characters in a movie would.”
To test their system, the researchers attempted to use it for voice recognition. They converted audio signals so that they would be recognized as an electrical current, then sent the current through the nano-neuron. These electrical waveforms—accelerated a thousand times—induced oscillations of the magnetization in the stacked nano-magnets, which started to gyrate like a compass. The magnetic oscillations were converted into oscillations of the voltage across the neuron through the effect of magnetoresistance. They recorded these voltage changes with an oscilloscope, and the synaptic functions were then emulated with a computer, allowing the neural network to learn.
“The neural network that we have realized recognizes spoken digits pronounced by different speakers with a success rate of 99.6 percent—which is as good and sometimes even better than with neurons with an area ten thousand time larger,” said Grollier. “The results demonstrate the ability of the magnetic nano-neuron to achieve cognitive tasks in a reliable way.”
The magnetic nano-neurons have a structure identical to magnetic memory cells, which are already fabricated by the hundreds of millions on silicon. In the next years, the researchers aim to interconnect these neurons densely and control their coupling in order to build large networks capable of complex information processing.
The ultimate goal is to realize smart, low-power miniature chips capable of learning and adaptation to the ever-changing and ambiguous conditions of the real world, says Grollier. “These chips can be useful for many applications, including classifying huge amounts of data in real time, driving autonomous vehicles or medical diagnosis and prosthesis,” she added.
Now Microsoft is turning its attention to the other half of DNA computing, the processor. Researchers at Microsoft have teamed up with scientists at the University of Washington to find a way toward creating super fast computations using DNA molecules.
In research described in the journal Nature Nanotechnology, the scientists have developed a method for spatially organizing DNA molecules in regular intervals on a DNA origami surface. That surface is essentially a bunch of DNA strands that have been folded in ways similiar to the techniques of the Japanese art of paper folding. The results offer a new approach to creating DNA logic gates and and the interconnects that link them.