Tech Talk

People waiting in line at an airport

Delta Airlines Computer Failure Part of a Pattern

If you had plans to travel on Delta Airlines today or tomorrow, you had plans. At this point, you might want to consider another carrier, a rental car, a bus, or even a bicycle with a basket on the handlebars. That’s because, according to Delta, a power outage that wreaked havoc on its hub in Atlanta brought down the airline’s computer system. And one expert says it’s just the latest data point in a string of airline industry IT problems.

A cybersecurity illustration showing a keyboard with a graphic of a bug superimposed on top.

Autonomous Security Bots Seek and Destroy Software Bugs in DARPA Cyber Grand Challenge

The mission: to detect and patch as many software flaws as possible. The competitors: seven dueling supercomputers about the size of large vending machines, each emblazoned with a name like Jima or Crspy, and programmed by expert hacker teams to autonomously find and fix malicious bugs.

These seven “Cyber Reasoning Systems” took the stage on Thursday for DARPA’s Cyber Grand Challenge at the Paris Hotel and Conference Center in Las Vegas, Nev. They were competing for a $2 million grand prize in the world’s first fully autonomous “Capture the Flag” tournament. After eight hours of grueling bot-on-bot competition, DARPA declared a system named Mayhem, built by Pittsburgh, Pa.-based ForAllSecure, the unofficial winner. The Mayhem team was led by David Brumley. Xandra, produced by team TECHx from GrammaTech and the University of Virginia, placed second to earn a $1 million prize; and Mechanical Phish, by Shellphish, a student-led team from Santa Barbara, Calif., took third place, worth $750,000.

DARPA is verifying the results and will announce the official standings on Friday. The triumphant bot will then compete against human hackers in a “Capture the Flag” tournament at the annual DEF CON security conference. Though no one expects one of these reasoning systems to win that challenge, it could find and patch some types of bugs more quickly than human teams can.

DARPA hopes the competition will pay off by bringing researchers closer to developing software repair bots that could constantly scan systems for flaws or bugs and patch them much faster and more effectively than human teams can. DARPA says quickly fixing such flaws across billions of lines of code is critically important. It could help to harden infrastructure such as power lines and water treatment plants against cyberattacks, and to protect privacy as more personal devices come online.

But no such system has ever been available on the market. Instead, teams of security specialists constantly scan code for potential problems. On average, it takes specialists 312 days to discover a software vulnerability, and often months or years to actually fix it, according to DARPA CGC host Hakeem Oluseyi.

“A final goal of all this is scalability,” says Michael Stevenson, Mission Manager for the Deep Red team from Raytheon. “If [the bots] discover something in one part of the network, these are the technologies that can quickly reach out and patch that vulnerability throughout that network.” The original 2005 DARPA Grand Challenge jumpstarted corporate and academic interest in autonomous cars.

The teams were not told what types of defects their systems would encounter in the finale, so their bots had to reverse engineer DARPA’s challenge software, identify potential bugs, run tests to verify those bugs, and then apply patches that wouldn’t cause the software to run slowly or shut down altogether.

To test the limits of these Cyber Reasoning Systems, DARPA planted software bugs that were simplified versions of famous malware such as the Morris worm and the Heartbleed bug. Scores were based on how quickly and effectively the bots deployed patches and verified competitors’ patches, and bots lost points if their patches slowed down the software. “If you fix the bug but it takes 10 hours to run something that should have taken 5 minutes, that's not really useful,” explains Corbin Souffrant, a Raytheon cyber engineer.
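To make that tradeoff concrete, here is a toy scoring sketch in Python. It is not DARPA’s actual Cyber Grand Challenge scoring formula; the weights, function name, and penalty curve are assumptions chosen only to illustrate how a patch that fixes a bug but cripples performance can still lose points.

```python
# Hypothetical scoring sketch -- NOT DARPA's real CGC formula.
# It only illustrates the tradeoff described above: patches earn more
# credit when they land quickly and fix the bug, and lose credit when
# they make the patched program slower.

def toy_patch_score(fixed: bool, minutes_to_patch: float, slowdown_factor: float) -> float:
    """Return a score in [0, 1] for a single patched bug (illustrative only)."""
    if not fixed:
        return 0.0
    speed_credit = 1.0 / (1.0 + minutes_to_patch / 60.0)   # faster patches score higher
    performance_credit = 1.0 / max(slowdown_factor, 1.0)   # a 2x slowdown halves the credit
    return speed_credit * performance_credit

# A quick fix with negligible overhead beats a fix that makes the program
# 120x slower (e.g., 10 hours for a job that should take 5 minutes).
print(toy_patch_score(True, minutes_to_patch=5, slowdown_factor=1.05))   # ~0.88
print(toy_patch_score(True, minutes_to_patch=5, slowdown_factor=120.0))  # ~0.008
```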

Members of the Deep Red team described how their system accomplished this in five basic steps: First, their machine (named Rubeus) used a technique called fuzzing to overload the program with data and cause it to crash. Then, it scanned the crash results to identify potential flaws in the program’s code. Next, it verified these flaws and looked for potential patches in a database of known bugs and appropriate fixes. It chose a patch from this repository and applied it, and then analyzed the results to see if it helped. For each patch, the system used artificial intelligence to compare its solution with the results and determine how it should fix similar bugs in the future.
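In rough pseudocode terms, that loop might look like the Python sketch below. All of the function names and the toy “patch database” are illustrative stand-ins, not Raytheon’s actual Rubeus implementation.

```python
# A schematic sketch of the five-step repair loop described above.
# Everything here is a hypothetical stand-in for illustration.

def fuzz(binary):
    """Step 1: bombard the program with malformed inputs and collect crashes."""
    return [f"crash-{i}" for i in range(3)]              # placeholder crash reports

def triage(crashes):
    """Step 2: scan the crash results for likely flaws in the code."""
    return [c.replace("crash", "flaw") for c in crashes]

def verify(binary, candidates):
    """Step 3: confirm each candidate flaw is real and reproducible."""
    return candidates                                     # assume they all check out in this toy example

def repair_cycle(binary, patch_db, history):
    for flaw in verify(binary, triage(fuzz(binary))):
        patch = patch_db.get(flaw, "generic-hardening-patch")   # Step 4: pick a fix from known bug classes
        outcome = {"fixed": True, "slowdown": 1.02}              # re-test the patched program
        history.append((flaw, patch, outcome))                   # Step 5: learn what worked for next time
    return history

print(repair_cycle("challenge_binary", patch_db={}, history=[]))
```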

During the live competition, some bugs proved more difficult for the machines to handle than others. Several machines found and patched an SQL Slammer-like vulnerability within 5 minutes, garnering applause. But only two teams managed to repair an imitation crackaddr bug in SendMail. And one bot, Xandra by the TECHx team, found a bug that the organizers hadn’t even intended to create.

Whether humans or machines, it’s always nice to see vanquished competitors exhibit good sportsmanship in the face of a loss. As the night wound down, Mechanical Phish politely congratulated Mayhem on its first place finish over the bots’ Twitter accounts.

MIT's lidar chip is smaller than a dime

MIT and DARPA Pack Lidar Sensor Onto Single Chip

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Light detection and ranging, or lidar, is a sensing technology based on laser light. It’s similar to radar, but it can have a higher resolution, since the wavelength of light is about 100,000 times smaller than radio wavelengths. For robots, this is very important: since radar cannot accurately image small features, a robot equipped with only a radar module would have a hard time grasping a complex object. At the moment, the primary applications of lidar are autonomous vehicles and robotics, but it is also used for terrain and ocean mapping and on UAVs. Lidar systems are integral to almost all autonomous vehicles and to many other robots that operate autonomously in commercial or industrial environments.

Lidar systems measure how far away each pixel in a 3D space is from the emitting device, as well as the direction to that pixel, which allows for the creation of a full 3D model of the world around the sensor. The basic method of operation of a lidar system is to transmit a beam of light, and then measure the returning signal when the light reflects off of an object. The time that the reflected signal takes to come back to the lidar module provides a direct measurement of the distance to the object. Additional information about the object, like its velocity or material composition, can also be determined by measuring certain properties of the reflected signal, such as the induced Doppler shift. Finally, by steering this transmitted light, many different points of an environment can be measured to create a full 3D model.
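As a concrete illustration of the time-of-flight and Doppler relationships described above, here is a minimal Python sketch. The numeric examples are arbitrary; real lidar signal processing also involves pulse detection, noise filtering, and calibration.

```python
# Minimal time-of-flight and Doppler sketch (illustrative only).
C = 299_792_458.0  # speed of light, in m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """The pulse travels out and back, so distance = c * t / 2."""
    return C * t_seconds / 2.0

def radial_velocity_from_doppler(freq_shift_hz: float, wavelength_m: float) -> float:
    """For a reflected beam, the Doppler shift is f_d = 2v / wavelength, so v = f_d * wavelength / 2."""
    return freq_shift_hz * wavelength_m / 2.0

# A return detected about 66.7 nanoseconds after emission is roughly 10 m away.
print(distance_from_round_trip(66.7e-9))              # ~10.0 (meters)
# A 1.29 MHz shift on a 1550 nm beam corresponds to roughly 1 m/s along the beam.
print(radial_velocity_from_doppler(1.29e6, 1550e-9))  # ~1.0 (m/s)
```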

Most lidar systems—like the ones commonly seen on autonomous vehicles—use discrete free-space optical components like lasers, lenses, and external receivers. In order to have a useful field of view, this laser/receiver module is mechanically spun around, often while being oscillated up and down. This mechanical apparatus limits the scan rate of the lidar system while increasing both size and complexity, leading to concerns about long-term reliability, especially in harsh environments. Today, commercially available high-end lidar systems range from $1,000 to upwards of $70,000, which limits their use in applications where cost must be minimized.

Applications such as autonomous vehicles and robotics heavily depend on lidar, and an expensive lidar module is a major obstacle to their use in commercial products. Our work at MIT’s Photonic Microsystems Group is trying to take these large, expensive, mechanical lidar systems and integrate them on a microchip that can be mass produced in commercial CMOS foundries.


Beyond Pokémon GO: The Secret to a Better Augmented Reality Experience

Whether or not you understand the recent drive to fill the world around you with obnoxious animated characters that you can only see as long as you hold your phone up in front of your face at all times, augmented reality does have the potential to enhance our world in ways that are occasionally useful. However, the AR experience is currently a sterile one, with augmentations overlaid on top of, but not really a part of, the underlying reality.

Abe Davis is a graduate student at MIT whom we've written about before in the context of using a candy wrapper and a camera as a microphone. You should absolutely click here for a more intimate introduction to Abe Davis, but if you're not a fan of incredibly nerdy rap music, we'll just move on to his dissertation, which describes interactive dynamic video (IDV). Rather than using 3D graphics to model the motion characteristics of objects, IDV extracts motion information from a small amount of 2D video and then generates simulations of the objects in that video. This allows the augmented part of AR to interact directly with the reality part, turning static objects into things that you (or your virtual characters) can play with.


Nigerian Scammers Infect Themselves With Own Malware, Revealing New "Wire-Wire" Fraud Scheme

A pair of security researchers recently uncovered a Nigerian scammer ring that they say operates a new kind of attack called “wire-wire” after a few of its members accidentally infected themselves with their own malware. Over the past several months, they’ve watched from a virtual front row seat as members used this technique to steal hundreds of thousands of dollars from small and medium-size businesses worldwide.  

“We've gotten unprecedented insight into the very nitty-gritty mechanics of their entire operation,” says James Bettke, a researcher at SecureWorks, a subsidiary of Dell focused on cybersecurity. Bettke and Joe Stewart, who directs malware research for SecureWorks, are presenting the details of their findings this week at the annual Black Hat security conference in Las Vegas.

A cartoon thief carrying a bag with the Bitcoin symbol on it

Hong Kong Bitcoin Exchange BitFinex Loses Nearly 120,000 Bitcoins in Hack

Yesterday afternoon, BitFinex, a Bitcoin exchange in Hong Kong, disabled its customer deposit and withdrawal functions and replaced the trading engine on its website with a notification of a major security breach. Later in the day, Zane Tackett, the company’s “Director of Community and Product Development,” took to Reddit (under the username “zanetackett”) to confirm that an attack had occurred and that nearly 120,000 bitcoins had been stolen from individual customer accounts.

This latest hack, which amounts to a loss of around US $72 million, is the biggest plundering of a Bitcoin exchange since 2014, when 850,000 bitcoins disappeared from the books during Mark Karpeles’s tenure as CEO of Mt. Gox. As was the case in 2014, the value of the currency is now crashing. The market price of bitcoin, which had begun to steadily increase at the beginning of the summer, fell 15 percent on news of the BitFinex hack.

stylized drawing of people using laptop, tablet, phone to communicate via Wi-Fi and cellular networks

How Cognitive Radio Can Help LTE-U and Wi-Fi Users Get Along

With cellular networks slammed by increasing demand, mobile operators are hunting for extra radio waves to carry their customers’ wireless signals. One place they’ve set their sights on is an unlicensed patch of spectrum in the 5 GHz band that is primarily used for Wi-Fi. But mobile and Wi-Fi companies have quarreled for years about whether or not adding LTE users to these frequencies would interfere with Wi-Fi networks.

To remedy the situation, new research from Portugal’s Instituto de Telecomunicações describes a strategy that could help LTE and Wi-Fi users to get along: empower crowded Wi-Fi users to hop over to licensed LTE bands when space gets tight on the unlicensed channels. If this approach succeeds in real-life trials, it could help LTE-U overcome some of the political and technical hurdles it has faced since Qualcomm first proposed it in 2012.

The whole kerfuffle began when Qualcomm and cellular service providers including Verizon and T-Mobile argued that shifting some of their LTE traffic over to unlicensed spectrum during busy periods would add extra capacity to their growing networks, and help them provide better, faster coverage for customers. They called the extended network enabled by this strategy LTE-U, for its use of unlicensed spectrum.

But Wi-Fi and cable providers were not pleased, because this meant LTE users would directly compete for batches of unlicensed spectrum (particularly those at 5 GHz) that have traditionally delivered Wi-Fi. Opponents such as Google and the nonprofit Wi-Fi Alliance worried that adding LTE devices to the same frequencies would interfere with Wi-Fi users. 

One reason for their concern is that Wi-Fi-enabled devices are programmed to share bandwidth with other devices that use the same router. These devices are “polite,” meaning they will throttle their owner’s consumption in order to fit another user onto the network, which is why Wi-Fi networks slow down as more people join.

Such chivalry would work against Wi-Fi users, though, if LTE customers suddenly began switching over to LTE-U and using the same unlicensed spectrum. The fear was that since LTE devices do not have any polite sharing mechanism in place, they would greedily claim bandwidth for themselves and leave courteous Wi-Fi users in the dust. The debate about exactly what this would mean for Wi-Fi users has been ongoing, but opponents generally believe it would cause long waiting times and lower throughput.
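For readers unfamiliar with how that “politeness” works in practice, here is a toy Python illustration of the general listen-before-talk idea behind Wi-Fi channel sharing. The timing values and probabilities are arbitrary assumptions, not the actual 802.11 CSMA/CA parameters.

```python
# Toy illustration of "polite" listen-before-talk sharing (not real 802.11 code).
import random

def try_to_send(channel_busy_probability: float, max_attempts: int = 6) -> int:
    """Return the attempt on which the frame finally goes out (illustrative only)."""
    backoff_window = 2
    for attempt in range(1, max_attempts + 1):
        if random.random() >= channel_busy_probability:
            return attempt                               # channel sensed idle: transmit now
        backoff_window = min(backoff_window * 2, 64)     # channel busy: widen the random backoff window
        _slots_to_wait = random.randint(1, backoff_window)
    return max_attempts

# The more stations contend for the channel, the more often each one defers,
# which is why Wi-Fi slows down as users join.
print(try_to_send(channel_busy_probability=0.7))
```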

Faced with this dilemma, Shahid Mumtaz, a wireless researcher at the Instituto de Telecomunicações, developed an approach to help these two groups live in harmony. His technique allows Wi-Fi users to hop onto a licensed LTE band to send data when unlicensed channels become too busy. Mumtaz and his colleagues called it WiFi-Lic, for its use of licensed spectrum.

The group employed cognitive radio technology, which scans for any available spectrum that is not currently occupied by a user, to enable Wi-Fi users to search licensed LTE bands for free space every millisecond. Once a Wi-Fi user identifies a free space in the licensed band, they hop on and start using it. When they’re finished transferring data, or if they detect that a licensed LTE user needs the space, they hop back off.
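A heavily simplified sketch of that sense-and-hop loop appears below. The 1-millisecond sensing interval comes from the description above; everything else (the channel model, function names, and congestion test) is an assumption made for illustration and is not drawn from the WiFi-Lic implementation.

```python
# Highly simplified sense-and-hop sketch (illustrative; not the WiFi-Lic code).
import random

SENSING_INTERVAL_MS = 1   # scan the licensed LTE band roughly once per millisecond

def licensed_channel_is_free() -> bool:
    """Stand-in for spectrum sensing; a real system would measure energy or SNR."""
    return random.random() < 0.3

def transmit(data: bytes, unlicensed_is_congested: bool) -> str:
    if not unlicensed_is_congested:
        return "sent on unlicensed 5 GHz Wi-Fi channel"   # normal Wi-Fi path
    if licensed_channel_is_free():
        return "sent on idle licensed LTE spectrum"       # hop over while the band is free
    # Otherwise back off and re-sense on the next interval; licensed (primary)
    # LTE users keep priority, so the Wi-Fi user must vacate as soon as one appears.
    return "deferred; will re-sense in 1 ms"

print(transmit(b"0.5 MB file chunk", unlicensed_is_congested=True))
```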

The group carried out a simulation on the 5 GHz band to demonstrate how such a strategy would work, featuring 50 users who shared 10 Wi-Fi routers and a single 4G LTE base station. Each user was instructed to send a 0.5 MB data file multiple times in experiments that ran for several days.

In the end, the tests with WiFi-Lic achieved 160 percent more throughput than those without it, and the waiting period for Wi-Fi users was essentially cut to zero. Based on these results, Mumtaz said he would expect any real-world network to roughly double its capacity by implementing WiFi-Lic.

However, Ravi Balakrishnan, a research scientist at Intel Labs who has worked with cognitive radio in the past but was not involved in this project, is skeptical. He says 1 millisecond (which also happens to be the duration of an LTE subframe, or scheduling period) seems like a very short amount of time for a Wi-Fi user to detect free space, communicate its plan to occupy that space, switch over to it, and transmit data.

“If you can do all of this within 1 millisecond, then you can do this, but I find it very challenging for the amount of hardware complexity that's required,” he says, adding that Wi-Fi users would, at the very least, need additional equipment—which would cost more.

Stefan Schwarz, who heads a wireless research lab at Vienna University of Technology in Austria, also points out that WiFi-Lic is only meant for Wi-Fi networks that are set up by an LTE operator, rather than home networks. And right now, it only works for data traffic—not for live streaming.

Mumtaz says he believes more Wi-Fi networks will be set up by LTE operators in the future, and he plans to further validate WiFi-Lic in a test bed this fall. It’s certainly a good time to be asking these questions: Earlier this year, Qualcomm received approval from the U.S. Federal Communications Commission to test its first pieces of LTE-U equipment. And LTE networks are not going to get any less crowded any time soon.

A judge's gavel on a laptop computer

EFF Sues U.S. Government Over DMCA

Last Thursday lawyers from the Electronic Frontier Foundation filed a lawsuit against the U.S. government on behalf of hardware guru Andrew “bunnie” Huang and computer-security researcher Matthew Green. In a nutshell, the pair allege that parts of the Digital Millennium Copyright Act are unconstitutional. Their objections center on Section 1201 of the DMCA, which makes it illegal to circumvent technical copy-protection schemes or to broadcast to others methods for doing so.

Blockchain split up

Ethereum Blockchain Forks to Return Stolen Funds

Remember that $60 million blockchain heist that made headlines last month? You know, the one that threatened to throw the Ethereum project (the most successful cryptocurrency after Bitcoin) off the tracks? Well, as of today, it has all been magically fixed. On Wednesday, the users, miners, developers, exchanges, and everyone else who matters on the network embraced a fork of the Ethereum software. That effectively confiscated all of the stolen funds and placed them into a new account programmed to automatically reimburse victims of the theft.

The maneuver, which was the focus of much philosophical and technical debate, seems to have worked well enough to call it a success. However, not everyone in the network went along with the fork. There are now two versions of the Ethereum blockchain growing in tandem—one (ETH) with the update that confiscates the stolen funds and one (ETHC) that keeps everything as it was. You can see both chains growing here. About 15 percent of miners have continued to mine new blocks on the original Ethereum blockchain. However, none of the major online exchanges are listing or trading the coins generated on the un-forked chain. And so, it could be argued that, at this point, those coins have no real value.

The forked chain, on the other hand, is performing well on exchanges. The price of Ethereum’s native currency, Ether, crashed right after The DAO—a smart contract-enabled investment fund—was hacked and drained of 3.6 million Ether. It now seems to be making a slow recovery.

A 13-percent increase in the market value of Ether does indeed signal renewed faith in the overall viability of the Ethereum project. But even as it seems to have been a success, the bailout of The DAO is likely to generate a lot of much-needed discussion. The hard fork Ethereum pulled off this week marks the first time a protocol change has been written and adopted with the explicit goal of confiscating funds. Next week, we will take a look at what kinds of precedents this could set, both in terms of how the Ethereum community makes controversial decisions and what kinds of social intervention users will accept.

A plot of the physical gate length of transistors, which could stop getting smaller as early as 2021.

Transistors Will Stop Shrinking in 2021, Moore’s Law Roadmap Predicts

After more than 50 years of miniaturization, the transistor could stop shrinking in just five years. That is the prediction of the 2015 International Technology Roadmap for Semiconductors, which was officially released earlier this month. 

After 2021, the report forecasts, it will no longer be economically desirable for companies to continue to shrink the dimensions of transistors in microprocessors. Instead, chip manufacturers will turn to other means of boosting density, namely turning the transistor from a horizontal to a vertical geometry and building multiple layers of circuitry, one on top of another. 

For some, this change will likely be interpreted as another death knell for Moore’s Law, the repeated doubling of transistor densities that has given us the extraordinarily capable computers we have today. Compounding the drama is the fact that this is the last ITRS roadmap, the end to a more-than-20-year-old coordinated planning effort that began in the United States and was then expanded to include the rest of the world.

Citing waning industry participation and an interest in pursuing other initiatives, the Semiconductor Industry Association—a U.S. trade group that represents the interests of IBM, Intel, and other companies in Washington, and a key ITRS sponsor—will do its own work, in collaboration with another industry group, the Semiconductor Research Corporation, to identify research priorities for government- and industry-sponsored programs. Other ITRS participants are expected to continue with a new roadmapping effort under a new name, which will be conducted as part of an IEEE initiative called Rebooting Computing.

These roadmapping shifts may seem like trivial administrative changes. But “this is a major disruption, or earthquake, in the industry,” says analyst Dan Hutcheson, of the firm VLSI Research. U.S. semiconductor companies had reason to cooperate and identify common needs in the early 1990s, at the outset of the roadmapping effort that eventually led to the ITRS’s creation in 1998. Suppliers had a hard time identifying what the semiconductor companies needed, he says, and it made sense for chip companies to collectively set priorities to make the most of limited R&D funding.

But the difficulty and expense associated with maintaining the leading edge of Moore’s Law has since resulted in significant consolidation. By Hutcheson’s count, 19 companies were developing and manufacturing logic chips with leading-edge transistors in 2001. Today, there are just four: Intel, TSMC, Samsung, and GlobalFoundries. (Until recently, IBM was also part of that cohort, but its chip fabrication plants were sold to GlobalFoundries.)

These companies have their own roadmaps and can communicate directly to their equipment and materials suppliers, Hutcheson says. What’s more, they’re fiercely competitive. “They don’t want to sit in a room and talk about what their needs are,” Hutcheson says. “It’s sort of like everything’s fun and games when you start off at the beginning of the football season, but by the time you get down to the playoffs it’s pretty rough.”

“The industry has changed,” agrees Paolo Gargini, chair of the ITRS, but he highlights other shifts. Semiconductor companies that no longer make leading-edge chips in house rely on the foundries that make their chips to provide advanced technologies. What’s more, he says, chip buyers and designers—companies such as Apple, Google, and Qualcomm—are increasingly dictating the requirements for future chip generations. “Once upon a time,” Gargini says, “the semiconductor companies decided what the semiconductor features were supposed to be. This is no longer the case.”

This final ITRS report is titled ITRS 2.0. The name reflects the idea that improvements in computing are no longer driven from the bottom up, by tinier switches and denser or faster memories. Instead, the report takes a more top-down approach, focusing on the applications that now drive chip design, such as data centers, the Internet of Things, and mobile gadgets.

The new IEEE roadmap—the International Roadmap for Devices and Systems—will also take this approach, but it will add computer architecture to the mix, allowing for “a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software,” according to a recent press release.

Transistor miniaturization was still a part of the long-term forecast as recently as 2014, when the penultimate ITRS report was released. That report predicted that the physical gate length of transistors—an indicator of how far current must travel in the device—and other key logic chip dimensions would continue to shrink until at least 2028. But since then, 3D concepts have gained momentum. The memory industry has already turned to 3D architectures to ease miniaturization pressure and boost the capacity of NAND Flash. Monolithic 3D integration, which would build layers of devices one on top of another, connecting them with a dense forest of wires, has also been an increasingly popular subject of discussion.

The new report embraces these trends, predicting an end to traditional scaling—the shrinking of chip features—by the early 2020s. But the idea that we’re now facing an end to Moore’s Law “is completely wrong,” Gargini says. “The press has invented multiple ways of defining Moore’s Law but there is only one way: The number of transistors doubles every two years.”
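For readers who prefer the quoted definition as a formula, one way to write it (my notation, not the ITRS’s) is:

```latex
% Moore's Law as Gargini states it: transistor count doubles every two years.
% N_0 is the transistor count in a reference year t_0; t is measured in years.
N(t) = N_0 \cdot 2^{(t - t_0)/2}
```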

Moore’s Law, he emphasizes, is simply a prediction about how many transistors can fit in a given area of an IC—whether it’s done, as it has been for decades, in a single layer or by stacking multiple layers. If a company really wanted to, Gargini says, it could continue to make transistors smaller well into the 2020s, “but it’s more economic to go 3-D. That’s the message we wanted to send.”

There are other changes on the horizon. In the coming years, before 3-D integration is adopted, the ITRS predicts that leading-edge chip companies will move away from the transistor structure used now in high-performance chips: the FinFET. This device has a gate draped around three sides of a horizontal, fin-shaped channel to control the flow of current. According to the roadmap, chipmakers will leave that in favor of a lateral, gate-all-around device that has a horizontal channel like the FinFET but is surrounded by a gate that extends underneath as well. After that, transistors will become vertical, with their channels taking the form of pillars or nanowires standing up on end. The traditional silicon channel will also be replaced by channels made with alternate materials, namely silicon germanium, germanium, and compounds drawn from columns III and V of the old periodic table.

These changes will allow companies to pack more transistors in a given area and so adhere to the letter of Moore’s Law. But keeping to the spirit of Moore’s Law—the steady improvement in computing performance—is another matter. 

The doubling of transistor densities hasn’t been linked to improvements in computing performance for some time, notes Tom Conte, the 2015 president of the IEEE Computer Society and a co-leader of the IEEE Rebooting Computing Initiative.

For a long time, shrinking transistors meant faster speeds. But in the mid-1990s, Conte says, the extra metal layers that were added to wire up increasing numbers of transistors were adding significant delays, and engineers redesigned chip microarchitectures to improve performance. A decade later, transistor densities were so high that their heat limited clock speeds. Companies began packing multiple cores on chips to keep things moving.

“We’ve been living in this bubble where the computing industry could rely on the device side to do their job, and so the computer industry and the device industry really had this very nice wall between them,” says Conte. “That wall really started to crumble in 2005, and since that time we’ve been getting more transistors but they’re really not all that much better.” 

This crumbling wall was a strong motivation for the IEEE Rebooting Computing Initiative to begin collaborating with the ITRS last year, before the launch of the IRDS. “I like to say we could see the light at the end of the tunnel, and we knew it was an oncoming train,” says Conte.

The initiative held a summit last December that covered a gamut of potential future computing technologies, including new kinds of transistors and memory devices, neuromorphic computing, superconducting circuitry, and processors that use approximate instead of exact answers.

The first international Rebooting Computing conference will be held in October this year; IRDS meetings will coincide with such events, Conte says. The IRDS will still track “Moore’s Law to the bitter end,” Conte explains. But the roadmapping focus has changed: “This isn’t saying this is the end of Moore’s Law,” he says. “It’s stepping back and saying what really matters here—and what really matters here is computing.”


