After a hard slog, extreme ultraviolet (EUV) lithography seems to be closing in on a long-sought quarry: a light source bright enough to pattern chips cheaply and keep Moore’s Law marching along.
The technology, which uses 13.5-nanometer light instead of today’s 193-nanometer light, could, at least in the short term, allow chipmakers to create finer features without having to expose chips multiple times, a step that adds significantly to manufacturing costs.
But for years, EUV’s prospects were limited by the dimness of its light source. Unlike conventional lithography, which uses an ultraviolet laser, EUV generates its invisible light—just at the edge of the x-ray part of the spectrum—by turning tin into a plasma. ASML, which is developing EUV machines for the semiconductor industry, has put its support behind a particular approach called laser-produced plasma, which creates light by shooting 50,000 microscopic molten tin droplets per second across a vacuum chamber and vaporizing each one with a pulse of CO2 laser light.
At the SPIE Advanced Lithography conference in San Jose last week, ASML said it has pushed the limit of that light source to 200 W and aims to reach 250 W by the end of the year.
Things are moving at blistering speed in the world of next-generation 5G mobile communications, even though many mobile users have yet to upgrade, or have only recently upgraded, to 4G LTE, and even as 4G continues to evolve.
Nevertheless, four of the telecom industry’s big hitters—Japan’s NTT Docomo, South Korea’s KT and SK Telecom, and Verizon in the United States—are not waiting for stragglers to catch up. Yesterday, at the annual Mobile World Congress in Barcelona, the carriers unveiled a plan to form the 5G Open Trial Specification Alliance with the aim of driving the technology forward. Meanwhile, Docomo and Sweden’s Ericsson announced that they achieved a cumulative 20-gigabit-per-second transmission speed with two connected 5G terminals in outdoor trials.
The four-party alliance seeks to hammer out agreements on technical fundamentals resulting from the companies’ individual 5G trials. The carriers would then use their collective muscle to have the agreed-upon specs “serve as a common, extendable platform for different 5G trial activity around the world,” as the announcement put it.
“In particular, we want to decide which 5G spectrum bands to use in a global industry,” Yoshihisa Kishiyama, Senior Research Engineer at Docomo’s 5G Laboratory, told IEEE Spectrum. “And we want to finalize 5G specifications by the end of 2018.”
If successful, this would help create standards for network equipment makers to follow, bringing the advent of fifth generation communications ever closer. Docomo, for one, has publicly committed itself to having 5G service up and running in time for the Tokyo Olympics in 2020.
As outlined by the International Telecommunication Union, 5G promises “a seamlessly connected society in the 2020 timeframe and beyond that brings together people along with things, data, applications, transport systems, and cities in a smart networked communications environment.”
To achieve all this, Docomo noted in a July 2014 White Paper, 5G would require data rates 100 times higher than today’s wireless networks offer, plus a reduction in latency to 1 millisecond, a 1,000-times increase in systems capacity, as well as a reduction in energy consumption. And with the coming avalanche of Internet-of-Things devices that will be continuously connected to cloud services, Docomo is targeting “a 100-fold increase in the number of simultaneously connected users compared to 4G LTE.”
It’s expected that 5G will need to use higher-frequency spectrum, ranging from 6 to 66 gigahertz. That would take it into the millimeter-wave band, enabling multi-beam multiplexing and massive multiple-input multiple-output (MIMO) technologies.
So rather than a base station broadcasting signals in all directions, individual signals can be steered between individual terminals and the base station as required, and, in crowded hot spots, via clusters of smaller antennas. This should eliminate interference from nearby terminals and the resulting slowdowns in data speeds. It’s also expected to make better use of signal power and more efficient use of bandwidth.
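The core of that beam steering is applying a progressive phase shift across an antenna array so that the elements’ signals add up constructively in one chosen direction and cancel elsewhere. As a loose illustration (the 64-element count echoes the trial hardware below, but the half-wavelength spacing and angles here are assumptions, not Docomo’s actual design), here is a minimal sketch for a uniform linear array:

```python
import cmath
import math

def steering_weights(n_elements, spacing_wl, angle_rad):
    """Per-element phase weights that point a uniform linear array
    (element spacing given in wavelengths) toward angle_rad."""
    return [cmath.exp(-2j * math.pi * n * spacing_wl * math.sin(angle_rad))
            for n in range(n_elements)]

def array_gain(weights, spacing_wl, angle_rad):
    """Normalized magnitude of the array's response toward angle_rad."""
    resp = sum(w * cmath.exp(2j * math.pi * n * spacing_wl * math.sin(angle_rad))
               for n, w in enumerate(weights))
    return abs(resp) / len(weights)

# 64 elements at half-wavelength spacing, steered 30 degrees off-axis:
w = steering_weights(64, 0.5, math.radians(30))
print(array_gain(w, 0.5, math.radians(30)))   # ~1.0 toward the steered beam
print(array_gain(w, 0.5, math.radians(-30)))  # near zero away from it
```

A second beam aimed at another terminal simply uses a different weight set on the same elements, which is what lets one array serve several users at once.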
In a trial last Sunday outside Docomo’s R&D Center in Yokosuka, just south of Tokyo, this multi-beam MIMO technology was used to transmit data with a cumulative 20-Gbps throughput. Docomo and Ericsson engineers set up four mini base stations, each equipped with 64 antenna elements, to create one super-sensitive base station. Two Ericsson 5G prototype terminals, located 9 meters and 3 meters, respectively, from the base station, were each able to simultaneously download data at more than 10 Gbps over a 15-GHz wireless band.
In a separate trial on the same day, the companies successfully transmitted data at 10 Gbps over a distance of 70 meters from the base station and then at 9 Gbps over a distance of 120 meters.
“Our target [upon commercialization] is to achieve several gigabits per second in 2020 and over 10 gigabits per second after that,” says Kishiyama. He added that Docomo hopes to see the arrival of new 5G applications “earlier than 2020, so as to promote 5G before then.” But he would not say what these applications might be.
Last week, Virgin Galactic unveiled a new version of its SpaceShipTwo, which is designed to carry paying customers to the edge of space. This new vehicle makes its debut more than a year after a devastating accident that took the life of co-pilot Michael Alsbury.
If the recovery from past spaceflight disasters is any guide, this craft will be flown in a far less risky mode, with more safety features incorporated into the hardware and more safety awareness inculcated into the human minds controlling it. But the real question is what will happen when the next vehicle rolls down the line, and how safe the company’s flights will be 5 or 10 years from now.
The Federal Aviation Administration’s minimalistic approach to regulating the safety of the space tourism industry was called into question in the wake of the Virgin Galactic accident. But the bulwark against future disaster doesn’t rest in federal regulations, codified checklists, or safety gadgets. Instead it rests where it always must, in the hearts and minds of the people who make daily decisions in support of the fabrication, testing, preparation, and operation of such machinery. It is that culture, now understandably sharpened by the still-fresh loss of a human life, that will be the most effective barrier against future accidents.
In 2007, PBS launched Design Squad, a show that has no doubt launched many an engineering career. The reality TV series, geared to spark an interest in engineering among middle and high school students, pitted two teams of high schoolers against each other. The teams competed to design and build engineering projects for actual clients.
An accompanying Design Squad website provides a safe place for kids ages 8 to 13 to share engineering ideas and activities, and inspires them to explore engineering.
On February 24, PBS Kids is taking the initiative global. The Design Squad Global website connects kids in the U.S. with peers around the world, “challenging them to solve global engineering problems and expanding their cross-cultural understanding.”
Researchers in the UK have prototyped what could be the ultimate data-backup technology: a monocle-size transparent disc that stores 360 terabytes, withstands fire and heat up to 1,000 °C, and retains its data over billions of years.
As IEEE Spectrum reported in 2013, the tiny quartz discs store their data as laser-etched nanostructures in the crystal. The nanostructures change the polarization of light passing through them. These changes are read by an automated microscope algorithm that translates the polarization signals back into the stream of stored data.
The system is called a “5D” data storage system, for the two extra dimensions (in addition to the standard three spatial dimensions) that polarization enables data to be encoded into the disc. And, says Peter Kazansky, professor at the Optoelectronics Research Centre at the University of Southampton, the idea began as a challenge.
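The Southampton team’s actual readout scheme is more sophisticated and isn’t detailed here, but the idea of packing extra bits into those two polarization dimensions can be sketched loosely: quantize a voxel’s measured slow-axis orientation and retardance strength into discrete levels and map each combination back to bits. Everything below (the level values, the 3-bits-per-voxel layout, the function names) is a hypothetical illustration, not the real encoding:

```python
# Hypothetical levels: 4 orientation bins (2 bits) and 2 retardance
# bins (1 bit) give 3 bits per laser-etched voxel.
ORIENT_LEVELS = [0.0, 45.0, 90.0, 135.0]   # slow-axis angle, degrees
RETARD_LEVELS = [0.3, 0.7]                 # retardance, arbitrary units

def decode_voxel(orientation_deg, retardance):
    """Snap one noisy (orientation, retardance) measurement to the
    nearest stored levels and return the 3 bits they represent."""
    angle = orientation_deg % 180  # orientation is only defined mod 180 deg
    o = min(range(len(ORIENT_LEVELS)), key=lambda i: abs(ORIENT_LEVELS[i] - angle))
    r = min(range(len(RETARD_LEVELS)), key=lambda i: abs(RETARD_LEVELS[i] - retardance))
    return (o << 1) | r   # 2 orientation bits, then 1 retardance bit

# A noisy reading of the (45 deg, 0.7) voxel decodes to 0b011:
print(decode_voxel(47.1, 0.65))  # -> 3
```

Combined with the three spatial coordinates of each voxel, that is what makes the scheme “5D.”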
Nanosatellites, small satellites ranging in size from a shoe box to a small suitcase, are popular because they are cheap (just some communications gear and a few instruments) and because they can piggyback on other space missions. NASA is now preparing to place two CubeSats in orbit around Mars; the small satellites will piggyback on InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport), NASA’s mission to understand the interior structure of Mars, now likely to launch in 2018.
Once in orbit, the CubeSats will communicate with the lander and with NASA’s Mars Reconnaissance Orbiter (MRO), which will relay communications to ground control on Earth. Sending a message to Earth can take up to 20 minutes, too long to report a problem and then receive urgent corrections or adjustments for such things as a misaligned solar panel or antenna.
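That delay is simply light-travel time over the Earth–Mars distance, which swings from roughly 55 million kilometers at closest approach to nearly 400 million when the planets are on opposite sides of the Sun. A quick back-of-the-envelope check:

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km):
    """One-way radio travel time between Earth and Mars, in minutes."""
    return distance_km / C_KM_S / 60

print(round(one_way_delay_minutes(55e6), 1))   # closest approach: ~3.1 min
print(round(one_way_delay_minutes(378e6), 1))  # near maximum range: ~21 min
```

So even a single command-and-response exchange can eat three-quarters of an hour, which is why onboard autonomy matters.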
To avoid such mishaps in space, the satellites should be equipped with autonomous fault correction, something already available in certain drones and driverless cars, argues Hakan Kayal, a researcher at the University of Würzburg in Germany, whose expertise combines space technology and computer science. “These technologies, as found in smartphones with their apps, are more advanced than what we normally find in space,” says Kayal, who adds that in space one turns to more secure devices and software. “We thought it would be a good idea to develop this technology for nanosatellites, allowing them to function autonomously, and allowing error correction and autonomous control of mission scenarios,” says Kayal. In future planetary missions, such autonomous satellites will be able to focus their cameras and instruments on transient events without having to wait for ground control.
The Würzburg scientists have now started two space projects, technology missions that will lead to the development and testing of autonomous nanosatellites. The first is the construction of a nanosatellite called SONATE, for “Solutus (independent) Nano Satellite,” which will carry two systems as payload: an Autonomous Sensor and Autonomous Planning system, or ASAP, and an Autonomous Diagnosis System for Satellites, or ADIA. Strictly a technology mission, it will be launched into Earth orbit in 2019. By simulating faults in the satellite’s systems, the researchers will test how ADIA makes a diagnosis and finds the root cause of a problem. ADIA will also be able to anticipate future errors by analyzing trends within the satellite. “We hope, in a follow-up project, to be able to correct the errors as well, but this is not yet part of the project,” remarks Kayal.
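ADIA’s actual diagnosis algorithms aren’t described in detail, but the trend-analysis idea can be illustrated in miniature: watch the rolling mean of a telemetry channel and flag the moment it drifts outside its nominal band, catching a slow degradation before it becomes a hard fault. The channel, values, and thresholds below are invented for illustration:

```python
from collections import deque

def drift_monitor(readings, nominal, tolerance, window=5):
    """Return the index at which a telemetry channel's rolling mean
    first drifts outside nominal +/- tolerance, or None if it never does.
    (Illustrative only; ADIA's real diagnosis logic is more elaborate.)"""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and abs(sum(recent) / window - nominal) > tolerance:
            return i
    return None

# A hypothetical bus voltage slowly sagging below its 8.0 V nominal:
telemetry = [8.0, 8.0, 7.9, 7.9, 7.8, 7.6, 7.4, 7.1, 6.9]
print(drift_monitor(telemetry, nominal=8.0, tolerance=0.5))  # -> 8
```

Averaging over a window, rather than alarming on a single reading, is what separates a genuine trend from one-off sensor noise.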
A second project, called NACOMI, for “Nano Satellite Communication Interplanetary,” will focus on advanced communications technologies for nanosatellites that will orbit moons and planets. “NACOMI will be a prototype study in a laboratory setting; we will look at different techniques and decide which are the most promising,” says Kayal.
For planetary astronomers, the autonomous operation of instruments such as cameras aboard the nanosatellites will open up exciting new possibilities. “The cameras will be analyzing the scenery all the time, trying to detect changes or movements, such as meteor impacts, lightning, geysers, chemical eruptions, and transient lunar phenomena,” says Kayal. Typically, a satellite will catch these events, many lasting less than a second, and store them on board. Communication bottlenecks will not allow it to send images and video continuously, so the satellite will have to autonomously analyze these recordings and decide what to do with each of them, for example, keep recording and then ask ground control: “What do you want to do with this recording?”
Human skin’s natural ability to feel sensations such as touch and differences in temperature is not easily replicated with artificial materials in the research lab. That challenge did not stop a Saudi Arabian research team from using cheap household items to make a “paper skin” that mimics many sensory functions of human skin.
If you get very, very lucky when you're doing the dishes, the water streaming out of your tap might occasionally splash onto a bowl or spoon in just the right way and spread out into a flawless hemispherical water curtain that means you can stop doing the dishes for a while to admire it.
Students at MIT have learned to do it on purpose. Members of MIT’s 2014 Tangible Interfaces class, taught by Professor Hiroshi Ishii, have created HydroMorph: a “dynamic spatial water membrane” that can turn this pleasingly curvy splash into a flapping bird, form it into an interactive countdown timer, direct it into a cup, and do all kinds of other things that water shouldn’t really be able to do.
Cameras and audio equipment are getting better all the time, but mostly through brute force: more pixels, more sensors, and better post-processing. Mammalian eyes and ears beat them handily when it comes to efficiency and the ability to only focus on what’s interesting or important.
Neuromorphic engineers, who try to mimic the strengths of biological systems in manmade ones, have made big strides in recent years, especially with vision. Researchers have made machine-vision systems that only take pictures of moving objects, for example. Instead of taking many images at a steady, predetermined rate, these kinds of cameras monitor for changes in a scene and only record those. This strategy, called event-based sampling, saves a lot of energy and can also enable higher resolution.
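Event-based sampling can be sketched in a few lines: instead of storing every frame, keep a per-pixel reference and emit an event only when a pixel’s brightness moves past a threshold, updating the reference as you go. This toy version, on 1-D “frames” of pixel values (the threshold and data are made up), shows how a static scene produces no output at all:

```python
def events(frames, threshold):
    """Emit (frame_index, pixel_index, delta) only where a pixel's
    brightness changed by more than threshold since its last event."""
    last = list(frames[0])  # per-pixel reference values
    out = []
    for t, frame in enumerate(frames[1:], start=1):
        for i, value in enumerate(frame):
            delta = value - last[i]
            if abs(delta) > threshold:
                out.append((t, i, delta))
                last[i] = value  # reference updates only when an event fires
    return out

frames = [
    [10, 10, 10],
    [10, 10, 10],   # static scene: nothing recorded
    [10, 40, 10],   # one pixel brightens: a single event
]
print(events(frames, threshold=5))  # -> [(2, 1, 30)]
```

The energy and bandwidth savings fall out directly: a mostly static scene generates almost no data, while a fast-moving object is captured with effectively very fine time resolution.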
One example is a silicon retina made by Tobi Delbrück of the Institute of Neuroinformatics in Zurich; the design, from 2007, has a 3-millisecond reaction time and was used as the eyes of a robotic soccer goalie.
Last week, at the IEEE International Solid-State Circuits Conference in San Francisco, another group showed how this approach can also work for hearing. Shih-Chii Liu, co-leader of the Sensors Group at the Institute of Neuroinformatics, described a silicon cochlea that uses just 55 microwatts of power (three orders of magnitude less than previous versions of the system) to detect sound in a humanlike way.
The neuromorphic auditory system uses two “ears,” each of which can be moved independently of the other. The difference in timing between sound waves reaching the two ears makes it possible to locate the origin of a sound, says Liu. Each silicon ear has 64 channels, each of which responds to a different frequency band, from low pitches to high. These channels mimic the cells in the human cochlea, which also respond to different frequencies (about a thousand of them in the real thing).
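The geometry behind that localization trick is simple: a sound arriving from one side reaches the near ear slightly earlier, and for a far-away source the bearing follows from the arcsine of that interaural time difference scaled by the speed of sound and the ear spacing. A minimal sketch (the 20-centimeter spacing and the example delay are assumptions for illustration):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def azimuth_from_itd(itd_s, ear_spacing_m):
    """Estimate a distant source's bearing, in radians from straight
    ahead, from the interaural time difference between two 'ears'."""
    ratio = itd_s * SPEED_OF_SOUND / ear_spacing_m
    return math.asin(max(-1.0, min(1.0, ratio)))  # clamp against noise

# Ears 20 cm apart; one ear hears the wavefront ~292 microseconds early:
print(round(math.degrees(azimuth_from_itd(292e-6, 0.20))))  # -> 30
```

With ears 20 centimeters apart, the largest possible delay is under 600 microseconds, which is why the sub-millisecond timing of spiking, event-driven channels is such a natural fit for this task.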
Liu connects the silicon cochlea to her laptop and shows what it’s recording with a graph of frequency over time. When we’re quiet, there’s no activity. When one of us speaks into the microphone, spikes appear around the 100-to-200-hertz range. The other channels, which together cover 20 hertz to 20 kilohertz, register nothing.
Liu says her group and Delbrück’s are now working to integrate the silicon cochlea and retina. This could give a humanoid robot a lot more low-power smarts. Besides being more humanlike, multimodal sensing means machines will miss less of what’s going on. This is evident in the way human senses support each other. When you talk to someone in a noisy restaurant, for example, you can’t always hear their every word. But your brain fills in the missing auditory pieces with visual information gathered as you watch their lips.
The neuromorphic researchers want to integrate these smart, low-power sensors with processors running deep learning algorithms. This kind of artificial intelligence does a good job of recognizing what’s going on in an image; some versions can even generate a surprisingly accurate sentence describing a scene. Neural networks excel at understanding and generating speech, too. Combining neuromorphic engineering with deep learning could yield computers that mimic human sensory perception better than ever before.
Asked whether this advance would someday help humans who are deaf or hard of hearing, Liu said the current design wouldn’t work for cochlear implants, so that is not an application her group is pursuing. She notes that although it could work in theory, it would probably involve some fundamental changes in hearing aid design that might cost so much to implement that the ultimate payoff would not be worth the effort.
This post was corrected on 4 April to clarify Liu’s name and affiliation.
IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.