Tech Talk

Popular Internet of Things Forecast of 50 Billion Devices by 2020 Is Outdated

If you follow discussions about the Internet of Things, you’ve probably heard this stunning prediction at least once: The world will have 50 billion connected devices by 2020. Ericsson’s former CEO Hans Vestberg was among the first to state it in a 2010 presentation to shareholders. The following year, Dave Evans, who worked for Cisco at the time, published the same prediction in a white paper.

Today, that figure has arguably done more than any other statistic to set sky-high expectations for potential IoT growth and profits. Remarkably, those projections weren’t even close to the highest of the time—in 2012, IBM forecasted 1 trillion connected devices by 2015. “The numbers were getting kind of crazy,” recalls Bill Morelli, a market research director for IHS Markit.

A view of QUESS at the Shanghai Engineering Center for Microsatellites in May

China Launches World's First Quantum Communications Satellite

The first spacecraft designed to perform quantum communications was launched today from the Jiuquan Satellite Launch Center at 1:40 a.m. local time.

The Chinese mission, dubbed Quantum Experiments at Space Scale (QUESS), is a milestone for researchers building the technology needed to create large-scale quantum communications networks. Thanks to the fundamental nature of quantum mechanics, which is sensitive to observation and prohibits the copying of unknown states, quantum links should in principle be unhackable. Grégoire Ribordy of the quantum cryptography firm ID Quantique told the Wall Street Journal that a quantum transmission is like a message scribbled on a soap bubble: “If someone tries to intercept it when it’s being transmitted, by touching it, they make it burst.”

Free of turbulent air (except for what you hit between Earth and orbit) and the distortions of fiber, space is an attractive place to pursue quantum communications. QUESS, which boasts the ability to generate pairs of entangled photons, will perform experiments in quantum entanglement and teleportation, Nature reports. But the first order of business will be quantum key distribution, “to establish a quantum key between Beijing and Vienna, using the satellite as a relay,” lead scientist Pan Jian-Wei told Nature in a Q&A published early this year.

Last year, Thomas Scheidl, a member of the Austrian Academy of Sciences team that is collaborating with Pan and his colleagues, explained to IEEE Spectrum how the process would work: 

 The satellite flies over a ground station in Europe and establishes a quantum link to the ground station, and you generate a key between the satellite and the ground station in Europe. Then, some hours later, the satellite will pass a ground station in China and establish a second quantum link and secure key with a ground station in China.

The satellite then has both keys available, and you can combine both keys into one key...Then you send, via a classical channel, the key combination to both of the ground stations. This you can do publicly because no one can learn anything from this combined key. Because one ground station has an individual key, it can undo this combined key and learn about the key of the other ground station.
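In classical terms, the combination step Scheidl describes is just an XOR of the two keys: publishing the XOR reveals nothing by itself, but a station that already holds one of the keys can recover the other. Below is a minimal sketch of that step, with random placeholder bytes standing in for the quantum-generated keys; the names and key length are illustrative, not part of the QUESS design.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Placeholder keys the satellite would establish over its two quantum links.
key_europe = secrets.token_bytes(32)   # key shared with the European ground station
key_china = secrets.token_bytes(32)    # key shared with the Chinese ground station

# The satellite announces only the combination over a public classical channel.
combined = xor_bytes(key_europe, key_china)

# The European station, which already holds its own key, undoes the combination
# and learns the Chinese station's key; an eavesdropper who sees only `combined`
# learns nothing about either key on its own.
recovered = xor_bytes(combined, key_europe)
assert recovered == key_china
```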

With any luck, the two-year mission will be the first in a string of quantum communications spacecraft—and a progenitor of secure quantum communication for the masses. 

Intel’s Diane Bryant, executive vice president and general manager of the Data Center Group, with Nervana’s cofounder Naveen Rao

The Nervana Systems Chip That Will Let Intel Advance Its Deep Learning

Deep-learning artificial intelligence has mostly relied upon the general-purpose GPU hardware used in many other computing tasks. But Intel’s recent acquisition of the startup Nervana Systems will give the tech giant ownership of a specialized chip designed specifically for deep learning AI applications. That could give Intel a huge lead in the race to develop next-generation artificial intelligence capable of swiftly finding patterns in huge data sets and learning through imitation.


Stretchable Touch Pad Could Become Wearable Touch Screen

Video: Kim et al./Science (2016)

A new, stretchable transparent touch pad can be used to write words and play electronic games, and it may even one day be implanted inside the body, its inventors say.

Touch pads and touch screens are on nearly every smart device these days. But they can’t go on anything flexible, such as the human body. Scientists have explored stretchable touch panels based on carbon nanotubes, metal nanowires, and other advanced materials, but the performance of those panels fell off sharply when they were stretched, and they tended to fall apart after repeated stretching.

To overcome these problems, scientists at Seoul National University created a touch pad made of the same kind of soft and very stretchable hydrogel used to make soft contact lenses. The hydrogel involved contains lithium chloride salts, which are electrically conductive and help the hydrogel hold onto the water it needs to stay soft.

A photo of Transhumanist Party presidential candidate Zoltan Istvan standing in front of an American flag.

AI for President

Zoltan Istvan, who represents the Transhumanist Party and bills himself as “the science candidate” in the 2016 U.S. presidential election, has garnered more media coverage than many third party candidates, with recent mentions in Vocativ, The Verge, USA Today, and Pacific Standard. He also writes regularly for Motherboard and The Huffington Post.

Istvan’s popularity is likely due to a combination of his quirky campaign style (he drives around in a bus painted to resemble a coffin with “Science vs. The Coffin” written above the bumper) and an unconventional platform that pushes for gene editing, human life extension, and morphological freedom (the right to do anything to your body so long as it doesn’t harm others). As a broader movement, transhumanism focuses on leveraging science and technology toward the ultimate goal of overcoming death, largely through as-yet-unproven methods such as mind uploading, in which a person’s entire consciousness would be transferred to a digital system or machine.


Delta Airlines: On Second Thought, the Computer Crash Was Our Fault

After being called out by Georgia Power, the utility that delivers electric power to its Atlanta hub, Delta Airlines finally came clean: it admitted that the crash of its computer network at 2:30 a.m. EDT on Monday, 8 August, had nothing whatsoever to do with the power company. Georgia Power had come forward to confirm that none of its other customers in the area experienced a power outage.

Voltage pulses change the circular polarization of the light emitted by the BEC from left to right.  Four continuous laser beams compensate for natural losses of the BEC

A Bose-Einstein Condensate Device Could Make Optical Computing Work

It took 70 years before the existence of Bose-Einstein condensates (BECs), so-called superatoms, could be proven in the lab. BECs are collections of particles called bosons (such as the photon) that have integer spin and can occupy the same space in the same quantum state, forming a condensate that behaves like a single atom. At first, these condensates, predicted by Satyendra Nath Bose and Albert Einstein in the mid-1920s, could be obtained only by cooling collections of particles to temperatures close to absolute zero. Room-temperature BECs appeared in the lab only within the last two years. And more recently, researchers have been looking to turn these superatoms from a laboratory curiosity into lasers and other practical devices, with mixed results.

Among these attempts at a practical BEC are those composed of exciton polaritons. These quasiparticles are amalgams of photons and electron-hole pairs (excitons), and they can carry information both in the form of optical polarization, a property of the photon, and in the form of spin, associated with the exciton. However, what made polaritons impractical was that their spin could only be controlled by light or by strong magnetic fields.

In yesterday’s issue of Nature Materials, a team of researchers led by physicist Jeremy Baumberg at the University of Cambridge reports that it could use low-energy voltage pulses to read and write data in a BEC formed by polaritons trapped in several thin layers of semiconductor material.

People waiting in line at an airport

Delta Airlines Computer Failure Part of a Pattern

If you had plans to travel on Delta Airlines today or tomorrow, you had plans. At this point, you might want to consider another carrier, a rental car, a bus, or even a bicycle with a basket on the handlebars. That’s because, according to Delta, a power outage that wreaked havoc on its hub in Atlanta brought down the airline’s computer system. And one expert says it’s just the latest data point in a string of airline industry IT problems.

A cybersecurity illustration showing a keyboard with a graphic of a bug superimposed on top.

Autonomous Security Bots Seek and Destroy Software Bugs in DARPA Cyber Grand Challenge

The mission: to detect and patch as many software flaws as possible. The competitors: seven dueling supercomputers about the size of large vending machines, each emblazoned with a name like Jima or Crspy, and programmed by expert hacker teams to autonomously find and fix malicious bugs.

These seven “Cyber Reasoning Systems” took the stage on Thursday for DARPA’s Cyber Grand Challenge at the Paris Hotel and Conference Center in Las Vegas, Nev. They were competing for a $2 million grand prize in the world’s first fully autonomous “Capture the Flag” tournament. After eight hours of grueling bot-on-bot competition, DARPA declared a system named Mayhem, built by Pittsburgh, Pa.-based ForAllSecure, the unofficial winner; the Mayhem team was led by David Brumley. Xandra, produced by team TECHx from GrammaTech and the University of Virginia, placed second to earn a $1 million prize, and Mechanical Phish by Shellphish, a student-led team from Santa Barbara, Calif., took third place, worth $750,000.

DARPA is verifying the results and will announce the official positions on Friday. The triumphant bot will then compete against human hackers in a “Capture the Flag” tournament at the annual DEF CON security conference. Though no one expects one of these reasoning systems to win that challenge, it could solve some types of bugs more quickly than human teams.

DARPA hopes the competition will pay off by bringing researchers closer to developing software repair bots that could constantly scan systems for flaws or bugs and patch them much faster and more effectively than human teams can. DARPA says quickly fixing such flaws across billions of lines of code is critically important. It could help to harden infrastructure such as power lines and water treatment plants against cyberattacks, and to protect privacy as more personal devices come online.

But no such system has ever been available on the market. Instead, teams of security specialists constantly scan code for potential problems. On average, it takes specialists 312 days to discover a software vulnerability and often months or years to actually fix it, according to DARPA CGC host Hakeem Oluseyi.

“A final goal of all this is scalability,” says Michael Stevenson, mission manager for the Deep Red team from Raytheon. “If [the bots] discover something in one part of the network, these are the technologies that can quickly reach out and patch that vulnerability throughout that network.” The 2005 DARPA Grand Challenge similarly jumpstarted corporate and academic interest in autonomous cars.

The teams were not told what types of defects their systems would encounter in the finale, so their bots had to reverse engineer DARPA’s challenge software, identify potential bugs, run tests to verify those bugs, and then apply patches that wouldn’t cause the software to run slowly or shut down altogether.

To test the limits of these Cyber Reasoning Systems, DARPA planted software bugs that were simplified versions of famous real-world flaws, such as the vulnerability exploited by the Morris worm and the Heartbleed bug. Scores were based on how quickly and effectively the bots deployed patches and verified competitors’ patches, and bots lost points if their patches slowed down the software. “If you fix the bug but it takes 10 hours to run something that should have taken 5 minutes, that’s not really useful,” explains Corbin Souffrant, a Raytheon cyber engineer.

Members of the Deep Red team described how their system accomplished this in five basic steps: First, their machine (named Rubeus) used a technique called fuzzing to overload the program with data and cause it to crash. Then, it scanned the crash results to identify potential flaws in the program’s code. Next, it verified those flaws and looked for potential patches in a database of known bugs and appropriate fixes. It chose a patch from this repository and applied it, and then analyzed the results to see whether the fix helped. For each patch, the system used artificial intelligence to compare its solution with the results and determine how it should patch similar flaws in the future.
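As a rough illustration of how those five steps fit together, here is a toy, self-contained sketch: a deliberately buggy stand-in program, a tiny database of known fixes, and a loop that fuzzes, triages the crash, applies a candidate patch, verifies it, and records what worked. None of the names or data structures here are Deep Red’s; they are invented purely to show the shape of the loop.

```python
import random

def buggy_program(data: bytes) -> int:
    """Stand-in target with a planted flaw: it crashes on a zero lead byte."""
    if data[0] == 0:
        raise ValueError("parser crash")
    return len(data)

# Hypothetical "database" mapping a crash type to a wrapper that guards against it.
PATCH_DB = {
    "ValueError": lambda prog: (lambda d: 0 if d[0] == 0 else prog(d)),
}

def repair(program, patch_db, rounds=2000):
    model = {}                                            # step 5: remember which fixes worked
    for _ in range(rounds):
        data = bytes(random.getrandbits(8) for _ in range(8))
        try:
            program(data)                                 # step 1: fuzz with random input
        except Exception as exc:
            flaw = type(exc).__name__                     # step 2: triage the crash
            patch = patch_db.get(flaw)                    # step 3: look up a known fix
            if patch is None:
                continue
            candidate = patch(program)                    # step 4: apply the patch...
            try:
                candidate(data)                           # ...and verify it no longer crashes
            except Exception:
                continue
            program, model[flaw] = candidate, patch
    return program, model

patched_program, learned = repair(buggy_program, PATCH_DB)
print(list(learned))  # ['ValueError'] once the planted flaw has been hit and fixed
```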

During the live competition, some bugs proved more difficult for the machines to handle than others. Several machines found and patched an SQL Slammer-like vulnerability within 5 minutes, garnering applause. But only two teams managed to repair an imitation of the crackaddr bug in Sendmail. And one bot, Xandra by the TECHx team, found a bug that the organizers hadn’t even intended to create.

Whether the competitors are humans or machines, it’s always nice to see the vanquished exhibit good sportsmanship in the face of a loss. As the night wound down, Mechanical Phish politely congratulated Mayhem on its first-place finish over the bots’ Twitter accounts.

MIT's lidar chip is smaller than a dime

MIT and DARPA Pack Lidar Sensor Onto Single Chip

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Light detection and ranging, or lidar, is a sensing technology based on laser light. It’s similar to radar, but it can have a higher resolution, since the wavelength of light is about 100,000 times smaller than radio wavelengths. For robots, this is very important: Since radar cannot accurately image small features, a robot equipped with only a radar module would have a hard time grasping a complex object. At the moment, the primary applications of lidar are in autonomous vehicles and robotics, but it is also used for terrain and ocean mapping and aboard UAVs. Lidar systems are integral to almost all autonomous vehicles and many other robots that operate autonomously in commercial or industrial environments.

Lidar systems measure how far away each pixel in a 3D space is from the emitting device, as well as the direction to that pixel, which allows for the creation of a full 3D model of the world around the sensor. The basic method of operation of a lidar system is to transmit a beam of light, and then measure the returning signal when the light reflects off of an object. The time that the reflected signal takes to come back to the lidar module provides a direct measurement of the distance to the object. Additional information about the object, like its velocity or material composition, can also be determined by measuring certain properties of the reflected signal, such as the induced Doppler shift. Finally, by steering this transmitted light, many different points of an environment can be measured to create a full 3D model.
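The distance part of that measurement is simple arithmetic: the reflected pulse travels out and back, so the range is the round-trip time multiplied by the speed of light and divided by two. A minimal sketch of that calculation (the numbers are illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the pulse covers the path twice."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that comes back about 66.7 nanoseconds after it was emitted
# reflected off something roughly 10 meters away.
print(range_from_round_trip(66.7e-9))  # ~10.0 m
```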

Most lidar systems—like the ones commonly seen on autonomous vehicles—use discrete free-space optical components like lasers, lenses, and external receivers. In order to have a useful field of view, this laser/receiver module is mechanically spun around, often while being oscillated up and down. This mechanical apparatus limits the scan rate of the lidar system while increasing both size and complexity, leading to concerns about long-term reliability, especially in harsh environments. Today, commercially available high-end lidar systems can range from $1,000 to upwards of $70,000, which limits their use in applications where cost must be minimized.

Applications such as autonomous vehicles and robotics heavily depend on lidar, and an expensive lidar module is a major obstacle to their use in commercial products. Our work at MIT’s Photonic Microsystems Group is trying to take these large, expensive, mechanical lidar systems and integrate them on a microchip that can be mass produced in commercial CMOS foundries.
