

SK Telecom Exec Talks 5G Tech, 2017 Trials

Around the globe, telecom companies are placing bets on which technologies will define 5G, the next generation of wireless and cellular networks. Many have already published results from early lab tests and field trials. Now, executives are eager to move into real-world pilots and, gradually, to commercial deployments.

SK Telecom in South Korea, which has aggressively invested in 5G research, is considered an industry leader among carriers. The company serves 29 million customers, many of whom already receive some of the fastest and most reliable 4G coverage in the world. Those customers are densely packed into cities within a small country, which makes it easier for the company to deploy new equipment or upgrade its existing network.

Competitive forces are also at work. One of SK Telecom’s fiercest competitors, KT Corporation (formerly Korea Telecom), has pledged to launch 5G service in time for the upcoming Winter Olympics in South Korea; it was named the official telecom sponsor of the games, which will kick off in February 2018.

Last week at Mobile World Congress, Jong Han Park, a manager at SK Telecom’s 5G Tech Lab, spoke with IEEE Spectrum about the company’s outlook for 5G in 2017 and beyond. Park says that by the end of the year, SK Telecom will launch a 5G trial that moves out of the lab and into the real world, so that customers can directly experience new 5G service.

Anyone who follows 5G knows that SK Telecom’s plan could mean a lot of different things. Engineers hope 5G will go well beyond the mobile service that 4G LTE networks primarily provide today.

SK Telecom’s first real-world trial could be to provide service for connected cars. It might bring fixed wireless access to people’s homes. Or it may deliver faster mobile service to smartphone customers. Right now, even SK Telecom doesn’t seem to know. Park says the company is still working out the details, and that these possibilities “are all candidates” for the trial.

During the interview, he also hinted that early 5G service in South Korea might look very different from the first deployments planned in the United States and elsewhere. Those U.S. deployments will primarily involve fixed wireless access, in which Internet service is beamed over the air from a base station to an antenna in someone’s home.

Instead of going that route, Park says SK Telecom will take cues from its customers, who may prove most interested in paying for applications such as livestreaming virtual reality or using telepresence features. It's not yet clear how many customers might have access to the company’s 2017 trial.

“We think fixed wireless access is one of the services, but we call 5G a mobile network service—just like 4G,” Park says. “Without a mobile network, it’s not 5G. So we are thinking of mobility as one of the keys, even in the very initial phase of developing 5G trial services.”

Since international standards for 5G are still being set, any technologies used in the company’s 2017 trial will be considered pre-standard. SK Telecom intends to feed the results into the standards-setting process for 5G New Radio, the term for the new radio-to-air interface that will someday appear on base stations and wireless devices.

Park says his company will model the trial on the latest developments around 5G New Radio, which is scheduled to be more narrowly defined this year by the 3rd Generation Partnership Project. Once standardization is complete, Park estimates SK Telecom will need one year to test any new technology before it can be deployed, and he hopes to be ready to commercialize at least some 5G services in 2019.   

Park says that one of the key 5G technologies that SK Telecom is developing is network slicing—a virtual technique that allows carriers to split up their network into more specialized pieces that can provide specific services.

For example, connected cars will most likely require a network with a lot more redundancy than tiny sensors that occasionally transmit data from a factory floor. With network slicing, a carrier would be able to reserve part of its network for cars, and another part for sensors—without risking interference between the two.

Park says this approach will enable SK Telecom to divide its network into “micro-functions” that it can offer to clients depending on their needs, instead of designing one massive 5G network to serve everyone. “The future network will be a lot more programmable than it is today, and a lot more dynamic,” he says.
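To make the idea concrete, here is a minimal Python sketch of how a sliced network might be described. The slice names and quality-of-service targets are hypothetical, invented only for illustration; they are not SK Telecom’s actual configuration.

```python
from dataclasses import dataclass

# Hypothetical slice definitions; names and targets are illustrative only.
@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: float    # end-to-end latency target
    reliability: float       # target fraction of packets delivered
    bandwidth_mbps: float    # guaranteed throughput

slices = [
    # Connected cars need fast, highly dependable links...
    NetworkSlice("connected-cars", max_latency_ms=10, reliability=0.99999, bandwidth_mbps=50),
    # ...while factory sensors tolerate delay but must be cheap to serve.
    NetworkSlice("factory-sensors", max_latency_ms=1000, reliability=0.99, bandwidth_mbps=0.1),
]

def pick_slice(service: str) -> NetworkSlice:
    """Route a client's traffic onto the slice reserved for its service class."""
    return next(s for s in slices if s.name == service)

print(pick_slice("connected-cars"))
```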

Last year, SK Telecom announced that it was teaming up with Ericsson and Qualcomm on a 2017 trial for the yet-to-be-defined 5G New Radio, and shared information on some of the technologies the partners intend to test. They will start with MIMO, or multiple-input multiple-output, which simply means adding more antennas that can work together to both transmit and receive signals.

In addition to MIMO, they will also use a modulation technique known as orthogonal frequency-division multiplexing (OFDM) that is already common in Wi-Fi. With it, a data stream can be diced up and transmitted over several channels on different frequencies at once.
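For readers who want to see the mechanics, here is a minimal NumPy sketch of the OFDM idea: data symbols are placed on parallel subcarriers, and an inverse FFT merges them into one time-domain waveform. The parameters (64 subcarriers, QPSK, a 16-sample cyclic prefix) are illustrative and do not describe any particular 5G trial.

```python
import numpy as np

n_subcarriers = 64
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * n_subcarriers)

# QPSK: two bits per symbol, mapped to four points on the unit circle
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# Each symbol modulates its own orthogonal subcarrier; the inverse FFT
# turns the parallel subcarriers into one time-domain block
time_signal = np.fft.ifft(symbols)

# A cyclic prefix guards against multipath: repeat the tail ahead of the block
cp_len = 16
tx_block = np.concatenate([time_signal[-cp_len:], time_signal])

# The receiver strips the prefix, and an FFT recovers the symbols
rx_symbols = np.fft.fft(tx_block[cp_len:])
assert np.allclose(rx_symbols, symbols)
```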

During Mobile World Congress, SK Telecom and Samsung also demonstrated five-band carrier aggregation on LTE-Advanced, a technique that combines several frequency channels, known as component carriers, into one wider swath of bandwidth, delivering higher data rates to customers than any single channel could support on its own. Park called this “a step towards 5G.”


IBM Expanding Cloud Quantum Computer Tenfold

IBM is building the first commercially available universal quantum computer, one it aims to have accessible via the cloud within the next five years to explore science and business applications, the company announced today.

Whereas classical computers switch transistors either on or off to represent data as ones and zeroes, quantum computers use quantum bits or "qubits" that, because of the surreal nature of quantum physics, essentially are simultaneously both 1 and 0. This means a qubit can perform multiple calculations at once. So quantum computers can solve certain problems much faster than classical computers. 

In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the universe. IBM aims to construct a roughly 50-qubit system based on superconducting circuits. The upcoming quantum computer will be accessible online via IBM’s Cloud platform.
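A toy state-vector calculation shows where that claim comes from: describing n qubits classically takes 2^n complex amplitudes, so roughly 300 qubits would outstrip the estimated number of atoms in the observable universe. The sketch below simulates the math on an ordinary computer; it illustrates the scaling, and is in no way IBM’s hardware.

```python
import numpy as np

def hadamard_all(n):
    """Put n qubits into an equal superposition of all 2**n basis states."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                        # start in |00...0>
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    full = np.array([[1.0]])
    for _ in range(n):
        full = np.kron(full, h)           # apply a Hadamard gate to every qubit
    return full @ state

state = hadamard_all(10)                  # 1,024 amplitudes for just 10 qubits
print(len(state), abs(state[0])**2)       # each outcome has probability 1/1024
```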


Mobile World Congress 2017: Startup Ubirch Sails the Blockchain Into a New Application—IoT

As Karim Attia steered his yacht, Serenity, from its slot in Barcelona’s historic Port Vell on Wednesday evening, two small onboard sensors dutifully recorded the temperature, humidity, and air pressure from around the vessel. Then, every two minutes, they logged that information in the Bitcoin blockchain, creating a permanent and unimpeachable record of the journey.

By the time the sun had set and Serenity slid back into her slot, the sensors had made 507 entries to the blockchain that would be virtually impossible for anyone else to change. If any of the passengers were to dispute the details of the trip, or if the coast guard were to investigate the vessel following an accident, the blockchain would reflect the true story.

Attia is a seed investor in a startup called ubirch, which developed the open-source software that captured the data from his yacht and entered it into the blockchain earlier this week. By doing so, the company created a record that, at least in theory, can never be edited or erased.

Increasingly, blockchains are being used in ways that have little to do with Bitcoin, and a lot more to do with embedding data for other purposes. One such application is the growing need to manage the deluge of data created by the connected sensors, gadgets, and appliances loosely defined as the Internet of Things.

In a nutshell, Ubirch wants to leverage the Bitcoin blockchain to secure all that data; to show how this would work, Attia lent his yacht and sailing skills to ubirch's founders for an onboard demo during Mobile World Congress.

There’s one important feature of the blockchain that makes this whole idea possible. Whenever someone makes a Bitcoin transaction, they can also attach a small amount—up to 40 bytes—of data to each entry, through a function known as OP_RETURN. Using that function, ubirch can store data on behalf of its clients.
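In concrete terms, the script for such an output is just the OP_RETURN opcode (0x6a) followed by a pushed payload. The sketch below builds one in Python; the payload contents are invented for illustration.

```python
# Minimal sketch of an OP_RETURN output script, assuming a payload of at
# most 40 bytes (the standardness limit described above). Payloads this
# small are pushed with a single length byte.

OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    if len(payload) > 40:
        raise ValueError("payload exceeds the 40-byte OP_RETURN limit")
    return bytes([OP_RETURN, len(payload)]) + payload

# Hypothetical sensor log entry; ubirch's real payload format is not public here
script = op_return_script(b"serenity:2017-03-01T18:42Z;23.5C")
print(script.hex())   # this scriptPubKey would be attached to a transaction output
```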

To immortalize information in this way, ubirch must first make a small payment in order to initiate a Bitcoin transaction for each event it wishes to log. In the company’s model, that payment becomes less about Bitcoin itself and more about compensating miners, who are the custodians of the blockchain, for the services they provide in adding new entries and verifying the information within the updates.

Obviously, even the small amounts of money paid to embed data in the blockchain can add up. To reduce the cost for clients, ubirch actually records data in two steps. First, it adds every single data point from its clients’ sensors to the company’s own private blockchain. Then, it collects these data points into bunches, and adds these bunches to the Bitcoin blockchain every 10 minutes or so. Whenever ubirch adds a bunch of new data to the blockchain, it receives a hash (a string of numbers) that it can use to look up the entry to see how many miners have confirmed the transaction. This code is also good for verifying that the data has not been changed.

In order to pull all of this off, ubirch developed a program that takes data points from sensors, translates that data into hashes for the ledger of its private blockchain, and combines several of those hashes into entries in the Bitcoin blockchain. The upshot, says the company, is that clients pay only fractions of a penny for each data point logged by their sensors, instead of the full cost of a Bitcoin transaction.
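A minimal sketch of that two-step pattern might look like the following; the SHA-256 hashing and the batch format are assumptions made for illustration, not ubirch’s published protocol.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

readings = [b"temp=21.3", b"humidity=64", b"pressure=1013"]

# Step 1: every sensor reading gets its own hash in the private ledger
private_ledger = [sha256(r) for r in readings]

# Step 2: every ~10 minutes, the batch is condensed into one anchor hash
# that becomes a single entry in the Bitcoin blockchain
anchor = sha256(b"".join(private_ledger))

# Later, anyone holding the readings can re-derive the anchor and check it
# against the blockchain to prove nothing was altered
assert sha256(b"".join(sha256(r) for r in readings)) == anchor
```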

Ubirch’s system can capture information from IoT sensors attached to pretty much anything a person may want to monitor. Aboard Serenity, two sensors recorded humidity, temperature, and air pressure, and reported the data once every two minutes.

One sensor sent data from the motor room just beneath the stairs that descend to the cabin; the other transmitted from on deck near the cockpit, in the open well where passengers can recline with a glass of wine.

As the yacht set sail, the temperature on deck dropped and the temperature in the motor room rose, as one might expect. Both sensors transmitted their data to ubirch over GSM, an early cellular standard with reliable coverage in much of the world.

In addition to logging data to the blockchain, ubirch’s software also uses public-key cryptography to verify that data has come from a specific sensor, and to protect the data as it travels to ubirch. Each sensor has its own private key, which it uses to sign and encrypt the data it sends; ubirch can then check that data using a public key the sensor has previously shared.

These security measures require the sensor itself to have an encryption layer. Ubirch provides this layer in its own sensors and can also apply it to most off-the-shelf sensors. (It prefers those models with an ARM Cortex-M0 processor or better.) Within that layer, the company uses a specific type of public-key cryptography known as elliptic curve cryptography, which relies on the obscure mathematics of elliptic curves—rather than more common factoring algorithms—to generate 256-bit keys.
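The resulting sign-and-verify flow looks roughly like the sketch below, which uses the widely deployed Python `cryptography` package and the NIST P-256 curve (256-bit keys). ubirch’s exact curve choice and wire format are not specified here, so treat this as an illustration of the technique rather than the company’s implementation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # stays on the sensor
public_key = private_key.public_key()                   # shared with ubirch ahead of time

# Hypothetical reading; real payload formats would differ
reading = b"temp=21.3;ts=2017-03-01T18:42Z"
signature = private_key.sign(reading, ec.ECDSA(hashes.SHA256()))

# Raises InvalidSignature if the reading was tampered with in transit
public_key.verify(signature, reading, ec.ECDSA(hashes.SHA256()))
```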

Michael Merz, co-founder and chief technology officer, says encryption is a critical step that many IoT manufacturers simply ignore, or sidestep with features such as firewalls that defend the network or device rather than keep the actual data secure. “It’s not the question to protect the device—the question is protecting the data against misuse and manipulation,” he says.

Stephan Noller, ubirch’s founder and CEO, says other startups are also using the ability to embed data on the blockchain as a business model, but they do not offer encryption of the data itself. “If the sensor itself is not secure, logging it into the blockchain is pointless because that information is not secure,” he says.

Though ubirch has worked with only a few clients thus far, its founders have no shortage of ideas for how their technology might be used. The sweet spot seems to be cases where financial decisions are made based on specific data, and both the payer and the recipient must agree that the data is valid.

Merz says an insurance company could base a customer’s car insurance payments on sensors that record driving habits. With ubirch’s system, both parties could be confident that the record is accurate. In fact, one of the startup’s first clients was an insurance company that wanted to place temperature sensors on electrical boxes, to record malfunctions that may lead to building fires.

For now, ubirch has 10 employees based in Cologne and Berlin, Germany. The company has raised between $400,000 and $500,000 in its first seed round, and is considering pursuing another seed round to boost it to a total of $1 million.

Attia, the seed investor and yacht owner, is enthusiastic about the company’s prospects. “IoT is happening everywhere, but the focus on security is the differentiator from everyone else,” he says.

Noller knows the importance of that security from personal experience. He had always enjoyed connecting items in his home to the Internet until, one day, a hacker infiltrated his system and turned off all the lights, closed the shades, and disabled the fire alarm. The hacker even shut off the heat to a warming unit for the family’s pet rabbit. “That puts the security risk very much in front of you,” he says.


Corporate Titans Unite to Build an Enterprise Version of the Ethereum Blockchain

Over the last two years, Ethereum, a blockchain-based platform for self-executing contracts and other applications, has attracted many admirers in the corporate world. The platform—a peer-to-peer network similar to the one that runs Bitcoin—is already supporting decentralized prediction markets, autonomously run lotteries, and a host of governance tools. And there is much speculation that the same building blocks could be used to replicate and streamline services offered by financial institutions, insurance agencies, and health-care providers, to name but a few. 

Most of the titans in these industries, however, have kept their distance, watching this radical new technology take shape while waiting to see whether it would embrace the needs of the companies that might want to build on it.

On Tuesday, some of the biggest names in finance and software, including Intel, Microsoft, J.P. Morgan, and Accenture, finally stepped into the fray, announcing their intention to work together on a private version of the Ethereum blockchain. Their efforts will be coordinated under the auspices of a new nonprofit organization called the Enterprise Ethereum Alliance, which will guide the engineering of a standard blockchain technology based on the Ethereum blockchain and customized for the needs of its enterprise members.


Mobile World Congress 2017: PureLiFi Debuts New Li-Fi Luminaire and Shares Progress on Commercial Pilots

Most startups dream of someday making their mark on a single industry. If all goes according to plan, PureLiFi will stir up two industries at once. As the world’s leading Li-Fi startup, the company has developed technology that delivers high-speed wireless Internet service through the LED lights found in homes and offices.

This week at Mobile World Congress, an annual gathering for the smartphone industry held this year in Barcelona, PureLiFi revealed a sleek new integrated Li-Fi luminaire and spoke to IEEE Spectrum about its plans to expand commercial pilots in the months ahead. The company’s leaders also opened up about their eagerness to ditch the Li-Fi–enabled USB dongle they debuted at the same trade show just last year.

PureLiFi, which is based in Edinburgh, has developed a technique that uses flashes of light to transmit information, instead of encoding data on a radio signal that is broadcast over the air, as done by today’s Wi-Fi routers. The flashes are so fast that they are not noticeable to the human eye. (For more on how Li-Fi works, read this story by IEEE’s The Institute.)

But so far, people can access PureLiFi’s technology only by plugging a dongle into their laptop, which many users find inconvenient. Last year at MWC, PureLiFi revealed the world’s first Li-Fi USB dongle, called LiFi-X. The credit-card–sized dongle contains a photoreceptor that receives a signal from a Li-Fi–enabled light, and an infrared transmitter to send data back.

Based on what the PureLiFi team shared, connecting to Li-Fi in 2017 should be less clunky and more convenient. The company says this year, it will shrink its dongle to half of its current size and 40 percent of its weight.

Of course, the dongle is only part of the equation. For this year’s show, PureLiFi unveiled a new luminaire that integrates Li-Fi modulation components into a black ring designed to encircle an LED light. The ring lies flush with the ceiling and makes Li-Fi–enabled LEDs virtually indistinguishable from any other lights in a home or office.

Previously, the company’s only Li-Fi access point was a white box, roughly the size of a smartphone, that had to be mounted to the ceiling next to an LED light.

In a demo on Monday, Nikola Serafimovski, director of business strategy for PureLiFi, plugged the LiFi-X dongle into a laptop and connected to the Li-Fi network by selecting it on his screen, just as a user might choose a Wi-Fi network. He then played a YouTube video while walking from below one light fixture equipped with the company’s original access point to one featuring the new integrated luminaire, demonstrating that the technology can hand off seamlessly as a user moves around a room or building. He placed his thumb over the dongle’s receiver to momentarily block the connection, proving it was live.

Speed is one of the reasons that so many companies are intrigued by Li-Fi. When Harald Haas, founder and chief science officer of PureLiFi and a professor at the University of Edinburgh, first began working on Li-Fi, he achieved a peak data rate of 10 megabits per second with some basic modulation. Now, with the help of new techniques, PureLiFi can achieve peak rates of 15 gigabits per second.

One of those techniques is wavelength-division multiplexing, which allows the company to separately encode data on the red, green, and blue channels within a white LED. With it, Haas says 100 Gbps rates are on the horizon.
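A toy sketch conveys the multiplexing idea: interleave a bitstream across three color channels, then re-interleave it at the receiver. The channel handling here is invented for illustration; PureLiFi’s actual encoding is far more sophisticated.

```python
# Toy wavelength-division multiplexing: split a bitstream across the red,
# green, and blue channels of a white LED, roughly tripling throughput.
data = b"hello li-fi"
channels = {"red": data[0::3], "green": data[1::3], "blue": data[2::3]}

# Receiver side: three color-filtered photodetectors, then re-interleave
received = bytearray(len(data))
for offset, name in enumerate(["red", "green", "blue"]):
    received[offset::3] = channels[name]

assert bytes(received) == data   # the original stream is recovered intact
```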

In real-world settings, the company’s products deliver data rates of about 45 Mbps for both uploads and downloads. For comparison, the average broadband Internet connection in the U.S. offers download speeds of 54 Mbps, while mobile Internet service provides about 20 Mbps. Upload speeds are significantly slower.

There are some special considerations that come with using Li-Fi, such as the fact that light doesn’t penetrate walls, so every room in a house would need a Li-Fi–enabled lighting fixture. However, PureLiFi likes to pitch this factor as a feature that also keeps users’ data secure, since communications are inaccessible to anyone who is not in the same room.

At least in the eyes of Alistair Banham, CEO of PureLiFi, the company’s early trial participants have so far responded positively to the technology. “The major feedback is that they love the security of information,” he says. “The feedback on the speed has been very, very positive.”

However, there is one area where those users have told Banham that there is clearly room for improvement. “Obviously, the goal is to go beyond the dongles, and they understand that,” he says. “Everyone would like to have it integrated into their devices.”

Banham and Haas know that Li-Fi will not be widely adopted until it is integrated into the chipsets of laptops, tablets, and smartphones. In the meantime, Banham points out that both Wi-Fi and Bluetooth also started out with dongles, and that it took nearly a decade for those technologies to move to chipsets and become widely adopted. “I think we'll be much faster,” he says.

It’s not clear whether PureLiFi would manufacture a Li-Fi chipset itself or work with a partner to do so, and Haas couldn’t comment on the company’s plans. The company is actively working with other stakeholders to develop standards for Li-Fi that ensure consistent performance across devices, just like the standards that exist for Wi-Fi.

Haas says the world needs to put Li-Fi in its chipsets because there is a scarcity of bandwidth available for wireless devices, and pesky interference when too many devices operate at once. “It’s inevitable that we have to go to light as a wireless means of communication,” Haas says.

Ultimately, Haas doesn’t view Li-Fi as a replacement for other wireless technologies, such as 5G cellular networks or Wi-Fi. Instead, he thinks Li-Fi will work hand in hand with those technologies to give users as much bandwidth as possible.

He does hope the recent commercial pilots will help the company dispel some common misconceptions about Li-Fi. Haas says many people still wonder whether Li-Fi works in the presence of sunlight (it does) or think it will work only if the transmitter and receiver have a clear line of sight to one another. In reality, PureLiFi has found that reflections off walls and floors deliver perfectly adequate data rates.

Haas has now spent 15 years working on Li-Fi, which began as a “curiosity-driven project” in his lab. He’s not at all surprised that he is still waiting to see it integrated into mainstream products. “For me, it was always clear, it will not be an easy road,” he says.


Browser Fingerprinting Tech Works Across Different Browsers for the First Time

Browsing the Web just got a little less anonymous. The software that lets websites identify you by certain characteristics of your computer and software was usually thwarted if you switched browsers. But now computer scientists have developed new browser fingerprinting software that identifies users across Web browsers with a degree of accuracy that beats the most sophisticated single-browser techniques used today.

The new method, created by Yinzhi Cao, a computer science professor at Lehigh University, in Pennsylvania, accurately identifies 99.24 percent of users across browsers, compared to 90.84 percent of users identified by AmIUnique, the most advanced single-browser technique.

Browser fingerprinting is an online tracking technique commonly used to authenticate users for retail and banking sites and to identify them for targeted advertising. By combing through information available from JavaScript and the Flash plugin, it’s possible for third parties to create a “fingerprint” for any online user.
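At its simplest, fingerprinting amounts to hashing a bundle of observable attributes into one stable identifier, as in the illustrative Python sketch below. Real systems, including Cao’s cross-browser method, draw on much richer signals, such as how a machine’s hardware renders test graphics; the attributes here are invented.

```python
import hashlib
import json

# Hypothetical attributes a fingerprinting script might read via JavaScript
attributes = {
    "screen": "2560x1440",
    "timezone": "UTC+9",
    "fonts": ["Arial", "Nanum Gothic", "Verdana"],
    "platform": "Win32",
}

# Serialize deterministically, then hash into one stable identifier
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode()
).hexdigest()

print(fingerprint[:16])   # the same machine yields the same ID on every visit
```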


A Low-Power Artificial Synapse Could One Day Interface With the Brain

A team based at Stanford University and at Sandia National Laboratories, in Livermore, Calif., has created a new form of “artificial synapse” that may one day be used to create flexible circuitry that could directly interface with the brain.

The new device, a form of organic transistor that the team calls an electrochemical neuromorphic organic device (ENODe), joins the ranks of technologies such as the memristor and phase-change memory: devices that may one day be used to create chips that can perform brain-like computations using far less power than even the most state-of-the-art silicon systems consume. 

In some ways, ENODe resembles a battery more than it does a transistor. Two thin polymer films are separated by an electrolyte that allows protons to pass through but not electrons. But like transistors, the device has three terminals. Current flows between two of them through the “postsynaptic” film. The third terminal is attached to the other (“presynaptic”) film. A voltage pulse at the presynaptic terminal causes charges to flow through the electrolyte, either into or out of the postsynaptic film. This changes the film’s oxidation level, which in turn determines how much the film resists the flow of current through it. The details were published this week in Nature Materials.

According to co-author Alec Talin of Sandia, the new device circumvents a limitation that other (two-terminal) artificial synapses suffer from: the fairly high level of energy needed to switch from one state to another. Typically, if the energy barrier to switch is too low in those devices, thermal fluctuations are enough to overcome it, meaning the device can switch states at random. “There’s no way to get around it,” Talin says, “because if you lower that voltage, it would simply switch back.”

The ENODe doesn’t have this problem, the authors write, because the barrier that maintains the state of the device is unrelated to the one that governs switching.  As a result, the device can be designed to need little voltage to switch, and yet it will retain the resulting state for a long time. “Just like battery, once you charge it stays charged,” says Talin, who co-led the research with Alberto Salleo at Stanford.

The new device can exhibit more than 500 different states of conductivity within about a 1-volt range, using a switching voltage of roughly 0.5 millivolts between adjacent states. That might sound exceedingly low to anyone familiar with modern silicon transistors and accustomed to thinking in volts. But the device still uses about 10,000 times as much energy as a biological synapse would, a Stanford press release reports. Miniaturization is expected to help with that.

The device has several electronic properties that make it especially attractive for brain-mimicking neuromorphic computing chips, Talin says. And he notes that the same material used to make these devices has already been used to interface with live brain cells. “That opens up a possibility of interfacing live biological cells [with circuits] that can do computing via artificial synapses,” Talin says. “We think that could have huge implications in the future for creating much better brain-machine interfaces.”

To demonstrate how these devices would work, the researchers built a simple circuit that demonstrated Pavlovian learning—learning to associate one signal with another, like the sight of food with the ringing of a bell. The team also simulated how the devices would perform when combined to create a neural network, testing their ability to recognize handwriting.
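The logic of that Pavlovian demo can be caricatured in a few lines of Python: a synaptic weight strengthens whenever the bell and food signals arrive together, until the bell alone clears the response threshold. This toy stands in for the researchers’ actual circuit, which stored the weight as an ENODe conductance.

```python
weight = 0.0        # synaptic strength of the bell input
threshold = 1.0     # the response fires when the weighted input crosses this
rate = 0.4          # learning increment per paired presentation

for trial in range(5):
    bell, food = 1.0, 1.0              # bell and food presented together
    if bell * weight < threshold and food:
        weight += rate * bell          # strengthen the association

print(weight >= threshold)             # True: the bell alone now evokes the response
```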

Among the researchers’ next steps, Talin says, will be to build an array of these devices, to demonstrate their capabilities as a real-world neural network. They also plan to work on improving the speed of the device, which in this first demonstration took around 10 milliseconds to switch.


Mildred Dresselhaus, IEEE Medal of Honor Recipient Known as the "Queen of Carbon," Dies at 86

This week, the nanoscale withdrew from the larger world just a little. Mildred Dresselhaus, emerita professor of physics and materials science at MIT and Presidential Medal of Freedom winner (as well as IEEE Medal of Honor recipient), died on Monday at the age of 86.

Called the “Queen of Carbon Science,” Dresselhaus pioneered the study of carbon nanostructures at a time when studying the physical and material properties of commonplace elements like carbon was out of favor. Her visionary perspectives on the sixth element in the periodic table—including exploring individual layers of carbon atoms (precursors to graphene), developing carbon fibers stronger than steel, and revealing new carbon structures that were ultimately developed into buckyballs and nanotubes—invigorated the field.

“Millie Dresselhaus began life as the child of poor Polish immigrants in the Bronx; by the end, she was Institute Professor Emerita, the highest distinction awarded by the MIT faculty. A physicist, materials scientist, and electrical engineer, she was known as the ‘Queen of Carbon’ because her work paved the way for much of today's carbon-based nanotechnology,” MIT president Rafael Reif said in a prepared statement. 

Friends and colleagues describe Dresselhaus as a gifted instructor as well as a tireless and inspired researcher. And her boundless generosity toward colleagues, students, and girls and women pursuing careers in science is legendary.  

“Everything I do, I’m aware that Millie has her imprint on me,” said H. Eugene Stanley, director of the Boston University Center for Polymer Studies, when interviewed in 2015 for an IEEE Spectrum profile of Dresselhaus. “She’s unbelievably kind. She’s always helping everybody. She runs seminar series, big and little. And she’s a woman of high principles.” 

Professor Gang Chen, now head of MIT’s mechanical engineering department, recalls first meeting her in 1996. “At the time, I did not know anything on thermoelectrics, and I was surprised to hear she knew my name, maybe from many of her activities including knowing my Ph.D. advisor,” he said. “It was my first time writing a big proposal, and her beautiful handwriting (although difficult to read for me at the beginning) and careful editing of the proposal truly impressed me, even if I had not met her.  In 1997, I met her in person at UCLA during our MURI program kickoff meeting, where I moved to take an associate professor position.  She was so approachable, engaging in detailed scientific discussions, and caring.  These impressions only get reinforced more as our collaboration continues.”

In 1963, Dresselhaus began her own career studying carbon by publishing a paper on graphite in the IBM Journal of Research and Development, a foundational work in the history of nanotechnology. To this day, her studies of the electronic structure of this material serve as a reference point for explorations of the electronic structure of fullerenes and carbon nanotubes. Coauthor, with her husband Gene Dresselhaus, of a leading book on carbon fibers, she began studying the laser vaporization of carbon and the “carbon clusters” that resulted. Researchers who followed her lead discovered a 60-carbon structure that was soon identified as the icosahedral “soccer ball” molecular configuration known as buckminsterfullerene, or buckyball. In 1991, Dresselhaus further suggested that fullerene could be elongated as a tube, and she outlined these imagined objects’ symmetries. Not long after, researchers announced the discovery of carbon nanotubes.

When she began her nearly half-century career at MIT, as a visiting professor, women made up just 4 percent of the undergraduate student population. So Dresselhaus began working to improve living conditions for women students at the university. Through her leadership, MIT adopted an equal and joint admission process for women and men. (Previously, MIT had imposed more stringent requirements on women, justified by limited dormitory space and perceived poorer performance, a self-fulfilling prophecy.) And so promoting women in STEM—before it was ever called STEM—became one of her passions. Serving as president of the American Physical Society, she spearheaded and launched initiatives like the Committee on the Status of Women in Physics and the society’s more informal committees of visiting women physicists on campuses around the United States, which have increased the female faculty and student populations on the campuses they visit.

Dresselhaus received many of the highest scientific and technical honors in her field, including the Kavli Prize, the Enrico Fermi Award, the IEEE Medal of Honor, the National Medal of Science, the Presidential Medal of Freedom, and membership in the National Academy of Sciences and the National Academy of Engineering.

As it happens, less than two weeks before her death, General Electric released a web video imagining a world in which Dresselhaus earned the celebrity and acclaim normally accorded to movie stars and musicians. “A Millie Dresselhaus doll!” a girl exclaims after she opens her birthday present, as a narrator asks, “What if we treated great female scientists like they were stars?” Dresselhaus, pausing to give strangers the opportunity to take a selfie with her, carries herself through the video with the warmth and gentle humor she showed throughout her remarkable life. 

“My first impression is like my last impression,” said Jean-Paul Issi, emeritus professor of physics at the Catholic University of Louvain, in Belgium, in a 2015 interview. “I was very impressed.” 


To Get AI in Everyday Gadgets, Engineers Go to Specialized Hardware

Thanks to an artificial intelligence technique called deep learning, computers can now beat humans at the game of Go, identify melanomas as accurately as dermatologists do, and help autonomous vehicles navigate the world. Now, circuit designers are working on hardware they hope will democratize deep learning, bringing the powerful method to the chips inside smartphones, wearables, and other consumer electronics.

Mobile phones, for example, will do a better job of understanding our individual accents and linguistic quirks. (This will save many of us from being constantly upset with a daft digital assistant. Right, Siri?) And home security systems will respond to the sound of a burglar breaking a window, but know not to alert the police when someone is clumsily emptying the dishwasher.

To that end, last week at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, academic and industry engineers showed how they have built on work presented at last year’s conference to produce specialized, energy-efficient deep-learning processors. This dedicated hardware will give electronic devices a new level of smarts because, unlike traditional software, it builds up layers of abstraction, loosely mimicking the way the human brain processes information. What’s more, it won’t drain the gadgets’ batteries. “We’re beginning to see that there is a need to develop more specialized hardware to get both performance and energy efficiency,” says Mahesh Mehendale, TI Fellow at Texas Instruments in Bangalore. He co-chaired the conference session with Takashi Hashimoto, chief engineer in the technology development laboratory at Panasonic.

The first step to widespread adoption of deep learning is for companies to start marketing dedicated, low-power chips. For that reason, says Mehendale, the session’s entry from STMicroelectronics is significant. Like many projects of this sort, the company’s chip uses an architecture that brings memory and processing closer together. Compared with other algorithms, neural networks require frequent fetching of data; shortening the distance this data has to travel saves energy. Giuseppe Desoli, a researcher at STM’s Cornaredo, Italy, outpost, presented a neural network processor that can perform 2.9 trillion operations per second (teraops) per watt.

STMicroelectronics’ processor is designed to run algorithms called convolutional neural networks, which are used for image recognition. During his presentation, Desoli said the company believes neural networks can be incorporated into the Internet of Things—if designers can get power use down. “A normal battery will only last a few hours” when powering a deep-learning processor that can perform only a few teraops per watt, he said.

Hoi-Jun Yoo’s group at the Korea Advanced Institute of Science and Technology (KAIST), in Daejeon, described a chip that pulls off a different feat. Not only is it more energy efficient than the one from STMicroelectronics (performing 8.1 teraops per watt), it can also run two kinds of neural networks. One, a convolutional neural network, is best for image recognition, because such networks excel at understanding information in photographs and other static images. The other, a recurrent neural network, can grapple with a sequence of information, because it remembers the previous input. Such networks are used for tasks like decoding speech; whether you’re listening or talking, you have to remember what was said a few seconds ago for a conversation to make sense.
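The two network styles differ in one essential operation apiece, as the illustrative NumPy sketch below shows: a convolution reuses one small filter across an image, while a recurrent cell threads a hidden state through a sequence. The filter and weights are arbitrary, chosen only to make the contrast visible.

```python
import numpy as np

# Convolutional step: slide one 3x3 filter across an 8x8 image. The same
# nine weights are reused at every position, which is what makes CNNs good
# at spotting a pattern anywhere in a static image.
image = np.random.rand(8, 8)
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]])          # Sobel-style edge detector
feature_map = np.array([
    [(image[i:i + 3, j:j + 3] * kernel).sum() for j in range(6)]
    for i in range(6)
])

# Recurrent step: the hidden state h carries context from frame to frame,
# which is what lets RNNs handle speech, where the past shapes the present.
w_h, w_x = 0.5, 1.0                       # arbitrary weights
h = 0.0
for x in [0.2, 0.7, 0.1]:                 # e.g., successive audio features
    h = np.tanh(w_h * h + w_x * x)        # new state mixes old state and input
```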

Yoo’s group demonstrated a second deep-learning processor paired with an image sensor. The resulting gadget: a low-power, wearable badge that recognizes faces. This device relies on a specialized architecture that runs a convolutional neural network at 620 microwatts. That trickle of power is small enough for a coin cell battery to keep it running for more than 10 hours. In one demo, the KAIST system labeled photos with “Julia Roberts” and “pizza.” It can also spot Matt Damon, should the wearer ever come across him in person.

Another issue engineers delved into at ISSCC was systems-level design. One way to save energy is to use low-power circuits to make initial decisions and then, when necessary, wake up more power-hungry neural networks to do the heavy lifting. Anantha Chandrakasan’s lab at MIT presented a chip that uses a simple circuit to distinguish speech from other sounds. This circuit gates a neural network that can then recognize words. The MIT chip can perform tasks requiring a vocabulary of up to 145,000 words. It makes about one-fourth as many word errors on a standardized test as the previous state-of-the-art system, while using about one-third of the power.
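The system-level pattern is easy to sketch: a cheap, always-on detector gates a power-hungry recognizer. Both functions below are placeholders standing in for real circuits and networks, not MIT’s implementation.

```python
def looks_like_speech(frame) -> bool:
    """Cheap first-stage check (placeholder heuristic, microwatt-class)."""
    return sum(abs(s) for s in frame) > 10.0

def recognize_words(frame) -> str:
    """Accurate but power-hungry neural network (placeholder)."""
    return "<transcription>"

def process(frame):
    if looks_like_speech(frame):        # always running, very low power
        return recognize_words(frame)   # woken only when speech is present
    return None                         # otherwise stay asleep, save the battery

print(process([0.5] * 40))    # loud frame: the recognizer runs
print(process([0.01] * 40))   # quiet frame: the recognizer stays off
```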

The new chips presented last week, says Mehendale, show that “customized hardware is more efficient” for running neural networks. Training neural networks is another matter. Today, it must still be done on powerful computers. In coming years—perhaps next year—researchers will develop dedicated hardware for the deep-learning training process, making that energy-intensive process more efficient, says Panasonic’s Hashimoto.


Factory Owners Are Reluctant to Embrace Wireless

If you think it’s hard to get a reliable Wi-Fi signal in your home, just imagine how tough it must be to grab one atop an oil rig in the Gulf of Mexico, or on the noisy floor of an auto factory in Detroit. Those places are full of heat, vibration, and metallic surfaces that can weaken, reflect, and block signals. As a result, factories and industrial facilities have been slow to adopt new wireless equipment and devices that would otherwise save both time and money.

Many wireless engineers and factory owners know this, but it has been difficult for anyone to improve the situation. The impact of industrial settings on wireless performance hasn’t been studied in any systematic way, so it’s often impossible to predict how a new piece of equipment will perform on, say, a manufacturing line until you actually put it there.

To make it easier for factories to integrate new wireless technologies, U.S. federal government employees took it upon themselves to measure the performance of radiofrequency signals in three factory settings: an auto transmission assembly facility, a steam generation plant, and a small machine shop. They recently published their results as part of an ongoing $5.75 million project aimed at improving industrial wireless led by the National Institute of Standards and Technology (NIST).

For factory owners, there are many potential advantages to switching to wireless. They can avoid the costs and hassle of installing wires, and can more easily reconfigure their facilities in the future. Wireless setups may also be safer, because employees won’t trip over bundles of cords. That’s why companies including GM, Ford, Chevron, Boeing, and Phoenix Contact (a company that specializes in industrial technologies) have all expressed interest in incorporating more wireless into their facilities.

“Right now I know that people are interested, but what they're worried about are the impacts to productivity or to the operation,” says Richard Candell, the project lead for the five-year NIST project, which is scheduled to conclude in late 2018. “They want to know that if they're going to use wireless, it's going to work just as well as the wired solution.”

Justin Shade, who focuses on wireless products for Phoenix Contact, says there’s no shortage of ways in which wireless could make factories and their workers more efficient. For example, manufacturers could use it to incorporate robotic arms into assembly lines. Today, robotic arms are often hooked up to control panels by flexible cables. Wind turbines rely on similar cables to maintain contact between the hub of the turbine and each individual blade. But these cables frequently break. In both cases, replacing them with wireless controls could save money and time.

Unfortunately, factories are also full of processes and materials that block or weaken wireless signals. For now, wireless technicians play it safe when installing new equipment by setting up redundancies, keeping wireless devices within close range with clear line of sight to their targets, and performing extensive testing prior to industrial installations.

Given the circumstances, Shade says it’s hard to fault factory owners and their technicians for being cautious. “If you're on the manufacturing line and a car door doesn't get made correctly, you're losing hundreds of thousands of dollars an hour, so the adoption has been a little slower in the industrial world,” he says.

Candell at NIST hopes their latest research can help industry operators predict how new systems will perform before they are installed. To take their measurements, the team visited an auto transmission assembly plant in Detroit, Mich., a steam generation plant at the NIST campus in Boulder, Colo., and a small machine shop that specializes in metalworking for NIST at their facilities in Gaithersburg, Md.

The group tested wireless signal propagation at two frequencies: 2.25 gigahertz and 5.4 GHz. These frequencies are reserved for the U.S. government, but fall close to the popular unlicensed 2.4-GHz and 5-GHz bands commonly used in wireless devices. Performance at these frequencies can therefore be considered comparable to what can be expected for wireless gadgets the rest of us use.

From their measurements, the researchers concluded that industrial settings have strong multipath characteristics, which means that signals tend to reflect many more times before they reach the receiver than they would under normal conditions. The practical impact of these reflections can be positive or negative, depending on the technology and how it is configured.

To dig deeper, the group used a wireless-performance metric called the K factor, which compares the power of the direct line-of-sight signal with the combined power of all the reflected signals. A higher K factor means there is less fading due to reflections. In an open outdoor area, the K factor would typically be between 6 decibels and 30 dB. In the group’s industrial measurements, the researchers found lower average K factors of -5 dB to 6 dB.
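In formula terms, the K factor is the ratio of line-of-sight power to reflected power, expressed in decibels. The power values in this short sketch are invented to reproduce the endpoints the group reports.

```python
import math

def k_factor_db(p_los: float, p_reflected: float) -> float:
    """Rician K factor in dB: line-of-sight power over total reflected power."""
    return 10 * math.log10(p_los / p_reflected)

print(k_factor_db(4.0, 1.0))    # ~6 dB: mild multipath, as in open outdoor areas
print(k_factor_db(1.0, 3.16))   # ~-5 dB: reflections dominate, as on factory floors
```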

Next, the NIST team used their measurements to estimate the average delay spread for the industrial facilities. Delay spread is the time it takes for all of a signal’s reflections to reach the receiver. They found an average delay spread below 500 nanoseconds. The group suggests this delay may not noticeably impact devices operating at 256 kilobits per second but could affect those that run at faster bit rates.
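One common formalization is the RMS delay spread, computed from a power delay profile: the arrival time of each echo, weighted by its power. The echo timings and powers below are invented for illustration.

```python
import numpy as np

delays_ns = np.array([0.0, 120.0, 300.0, 480.0])   # echo arrival times
powers = np.array([1.0, 0.5, 0.2, 0.05])           # relative echo powers

mean_delay = (powers * delays_ns).sum() / powers.sum()
rms_spread = np.sqrt((powers * (delays_ns - mean_delay) ** 2).sum() / powers.sum())
print(f"{rms_spread:.0f} ns")   # ~119 ns here, well under the ~500 ns NIST measured

# A bit at 256 kb/s lasts about 4 microseconds, dwarfing this spread, which
# is why slow links shrug off multipath that would scramble faster ones.
```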

Another part of their analysis examined wireless performance in “metal canyons,” which are common in factories. A metal canyon is an area with metal surfaces (such as walls or large pieces of equipment) on at least two sides and a concrete floor below. In these areas, the group measured path loss, which describes the attenuation of wireless signals, and found that it is at least 80 dB. For comparison, the path loss in an open area would be perhaps 40 dB after a signal at these frequencies traveled approximately one meter.
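For comparison, that open-area figure matches the standard free-space path-loss formula, as the sketch below verifies; losses in a metal canyon have to be measured, as the NIST team did, rather than computed this way.

```python
import math

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c), c the speed of light."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

print(fspl_db(1, 2.25e9))    # ~39.5 dB at 1 meter, matching the ~40 dB figure above
print(fspl_db(30, 2.25e9))   # ~69 dB at 30 meters, still short of a metal canyon's 80 dB
```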

Candell says that, in practical terms, this means a wireless signal could reliably travel about 200 or 300 meters outdoors, whereas, in a metal canyon, a user would probably start to notice problems with the signal just 30 meters away.

With the results of their measurement campaigns, the NIST staff also built a software simulation of a chemical reactor and a wireless test bed that can replicate other industrial settings at their campus in Boulder, Colo. Candell wants to use these tools to model how performance and cost would change if new wireless schemes were installed in factories or other facilities.

“Ultimately, at the end of our five-year project [which is scheduled to conclude in late 2018], I want to actually produce industry guidelines to help people select and deploy these wireless devices effectively in their factories,” says Candell.
