Tech Talk

A bright red illustration of a computer chip with the outline of a hacker wearing a hat inscribed on the front.

Wanted: Smart Public Policy for Internet of Things Security

Without us even knowing it, the connected devices in our homes and businesses can carry out nefarious tasks. Increasingly, the Internet of Things has become a weapon in hackers’ schemes. This is possible in large part due to manufacturers’ failure to program basic security measures into these devices.

Now, experts in the U.S. are asking regulators to step in. Calls for public policy to improve device security have reached a fever pitch following a series of high-profile denial-of-service attacks launched in part through unsuspecting DVRs, routers, and webcams. In October, hackers flooded the Internet service company Dyn with traffic after assembling millions of IoT devices into a botnet using a malicious program called Mirai.

NYU students Yunchou Xing and George MacCartney, pictured outside of a van in a rural Virginia field on a clear summer day, adjust the horn antenna of their receiver to find the strongest signal during a millimeter wave measurement campaign in August.

Millimeter Waves Travel More Than 10 Kilometers in Rural Virginia 5G Experiment

A key 5G technology got an important test over the summer in an unlikely place. In August, a group of students from New York University packed up a van full of radio equipment and drove for ten hours to the rural town of Riner in southwest Virginia. Once there, they erected a transmitter on the front porch of the mountain home of their professor, Ted Rappaport, and pointed it out over patches of forest toward a blue-green horizon.

Then, the students spent two long days driving their van up and down local roads to find 36 suitable test locations in the surrounding hills. An ideal pull-off would have ample parking space on a public lot, something not always easily available on rural backroads. At each location, they set up their receiver and searched the mountain air for millimeter waves emanating from the equipment stacked on the front porch.   

To their delight, the group found that the waves could travel more than 10 kilometers in this rural setting, even when a hill or knot of trees was blocking their most direct route to the receiver. The team detected millimeter waves at distances up to 10.8 kilometers at 14 spots that were within line of sight of the transmitter, and recorded them up to 10.6 kilometers away at 17 places where their receiver was shielded behind a hill or leafy grove. They achieved all this while broadcasting at 73 gigahertz (GHz) with minimal power—less than 1 watt.

“I was surprised we exceeded 10 kilometers with a few tens of milliwatts,” Rappaport says. “I expected we’d be able to go a few kilometers in non-line-of-sight, but we were able to go beyond ten.”

The 73 GHz frequency band is much higher than the sub-6 GHz frequencies that have traditionally been used for cellular signals. In June, the Federal Communications Commission opened 11 GHz of spectrum in the millimeter wave range (which spans 30 to 300 GHz) to carriers developing 5G technologies that will provide more bandwidth for more customers.

Rappaport says their results show that millimeter waves could potentially be used in rural macrocells, the wide coverage zones served by large cellular base stations. Until now, millimeter waves have delivered broadband Internet through fixed wireless, in which information travels between two stationary points, but they have never been used for cellular service.

Robert Heath, a wireless expert at the University of Texas at Austin, says the NYU group’s work adds another dimension to 5G development. “I think it's valuable in the sense that a lot of people in 5G are not thinking about the extended ranges in rural areas, they're thinking that range is, incorrectly, limited at high carrier frequencies,” Heath says.

In the past, Rappaport’s group has shown that a receiver positioned at street level can reliably pick up millimeter waves broadcast at 28 GHz and 73 GHz at a distance of up to 200 meters in New York City using less than 1 watt of transmitter power—even if the path to the transmitter is blocked by a towering row of buildings.

Before those results, many had thought it wasn’t possible to use millimeter waves for cellular networks in cities or in rural regions because the waves were too easily absorbed by molecules in the air and couldn’t penetrate windows or buildings. But Rappaport’s work showed that the tendency of these signals to reflect off of urban surfaces including streets and building facades was reliable enough to provide consistent network coverage at street level—outside, at least.

Whether or not their newest study will mean the same for millimeter waves in rural areas remains to be seen. Rappaport says the NYU team is one of the first to explore this potential for rural cellular, and he feels strongly that it could soon be incorporated into commercial systems for a variety of purposes including wide-band backhaul and as a replacement for fiber.

“The community has always been mistaken, thinking that millimeter waves don’t go as far in clear weather and free space—they travel just as far as today’s lower frequencies if antennas have the same physical size,” Rappaport says. “I think it’s definitely viable for mobile.”

Others aren’t convinced. Gabriel Rebeiz, a professor of electrical and computer engineering who leads wireless research at the University of California, San Diego, points out that the NYU group ran their tests on two clear days. Rain can degrade 73-GHz signals at a rate of 20 decibels per kilometer, which is equivalent to reducing signal strength 100-fold for every kilometer traveled.
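The decibel arithmetic behind that claim is easy to check: an attenuation in dB corresponds to a linear power ratio of 10^(dB/10). A quick sketch:

```python
def db_to_power_ratio(db: float) -> float:
    """Convert an attenuation in decibels to a linear power ratio."""
    return 10 ** (db / 10)

# 20 dB of rain attenuation per kilometer means the signal power
# shrinks 100-fold for every kilometer traveled.
print(db_to_power_ratio(20))       # → 100.0

# Over a 10 km path that compounds to 200 dB, a factor of 10**20.
print(db_to_power_ratio(20 * 10))  # → 1e+20
```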

“Rain at 73 GHz has significant, significant, unbelievable attenuation properties,” he says. “At these distances, the second it starts raining—I mean, misting, if it just mists—you lose your signal.”

Rebeiz says signals would hold up better at 28 GHz, degrading only 6- to 10-fold over a range of 10 kilometers. Millimeter waves will ultimately be more useful in cities, he says, but he doubts they will ever make sense for rural cellular networks: “It’s not going to happen. Period.”

George R. MacCartney Jr., a fourth-year Ph.D. student in wireless engineering at NYU, thinks millimeter waves could perhaps be used to serve rural cellular networks in five or 10 years, once the technology has matured. One challenge is that future antennas must aim a signal with some precision to make sure it arrives at the user, because millimeter waves reflect off of objects and can take multiple paths from transmitter to receiver. But as for millimeter waves making their rural cellular debut in the next few years: “I’d say I’m a little skeptical just because you’d have to have a lot of small antenna elements and you’d have to do a lot of beamforming and beam steering,” he says.

By collecting rural measurements of millimeter waves, the NYU experiment was designed to evaluate a propagation model that the standards group known as the 3rd Generation Partnership Project (3GPP) has put forth for simulating millimeter waves in rural areas. That model, known as 3GPP TR 38.900 Release 14, estimates the strength of a millimeter wave signal emitted from a rural base station according to factors such as the height of the cell tower, the height of the average user, the height of any buildings in the area, the street width, and the broadcast frequency.

The NYU group suggests that because this model was “hastily adopted” from an earlier one used for lower frequencies, it’s ill-suited to accurately predict how higher frequencies behave. Therefore, according to Rappaport’s team, the model will likely predict greater losses at longer distances than actually occur. Rappaport prefers what’s called a close-in (CI) free-space reference distance model, which better fits his measurements. A representative of 3GPP was not available for comment.
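The CI model has a simple closed form: path loss equals the free-space loss at a 1-meter reference distance plus 10·n·log10(d), where the exponent n is fitted to measurements. A minimal sketch (the exponent used below is an illustrative free-space value, not one fitted from the NYU campaign):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ci_path_loss_db(freq_hz: float, dist_m: float, n: float) -> float:
    """Close-in (CI) free-space reference distance path-loss model.

    PL(f, d) = FSPL(f, 1 m) + 10 * n * log10(d / 1 m),
    where n is the path-loss exponent fitted to measurement data.
    """
    fspl_1m = 20 * math.log10(4 * math.pi * freq_hz / C)  # free-space loss at 1 m, in dB
    return fspl_1m + 10 * n * math.log10(dist_m)

# Illustrative only: pure free-space propagation (n = 2) at 73 GHz over 10 km.
print(round(ci_path_loss_db(73e9, 10_000, 2.0), 1))  # → 149.7
```

With n = 2 the model reduces to the textbook free-space (Friis) loss; fitted exponents from real terrain are what distinguish one measurement campaign from another.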

In October, Rappaport presented the group’s work at the Association for Computing Machinery’s MobiCom conference, and the latest study will be published in the proceedings. In the meantime, it is posted to arXiv.

Planetary Resources’ President & CEO Chris Lewicki and Luxembourg’s Deputy Prime Minister Etienne Schneider celebrate the partnership.

Luxembourg Invests €25 million in Asteroid Mining

Luxembourg has agreed to invest €25 million in asteroid mining company Planetary Resources.

“It’s really big news,” says angel investor Chad Anderson, the managing director of Space Angels Network—one of Planetary Resources’ early investors.

Several companies, among them Planetary Resources and Deep Space Industries, plan on mining space for its riches. Planetary Resources aims to launch the first commercial asteroid prospecting mission by 2020. Near-Earth Asteroids are an untapped reserve of rocket fuel, materials, and minerals, the company explains on its website—although there is some disagreement over whether suitable asteroids are actually available.

Luxembourg was the first European Union country to set up a legal framework for space mining, following the United States Commercial Space Launch Competitiveness Act in 2015. In June, the Luxembourg government announced a €200 million fund for enticing asteroid mining companies, as Fortune reports.

In a press release yesterday, Planetary Resources announced that a deal with the Government of the Grand Duchy of Luxembourg and Société Nationale de Crédit et d’Investissement (SNCI), a banking institution, had been finalized.

€12 million will be a direct capital investment and €13 million will come in the form of grants. Planetary Resources will set up a Luxembourg office, SNCI will take a public equity position in Planetary Resources, and an advisory board member of the initiative will join the company’s Board of Directors, according to the release.

“Asteroid mining is an expensive endeavor,” Anderson says. “Getting this funding is a real benefit to their efforts.”

Planetary Resources and Paul Zenners of Luxembourg’s Ministère de l’Économie did not respond to requests for further comment.

Amara Graps, a planetary scientist, asteroid mining advocate, and independent consultant for the Luxembourg Ministry of Economy who lives in Riga, Latvia, says “They can put more attention into the asteroid mining business” instead of getting “detoured” by monetary constraints.

Graps says the money is a runway for Planetary Resources to focus on characterizing and identifying proper asteroids as well as refining sensor and propulsion technologies, for example.

She did wonder if this announcement could potentially influence the decision of venture capitalists to invest because of a perceived risk of government involvement.

“Quite the opposite,” says Anderson.

He says the government is only an equity holder with the same stakes as other investors and it would not dictate how the business operates. Because the investment increases credibility, “I think this will encourage other investors to come on.”

“Luxembourg has stepped up,” he says.

Do the perceived benefits of online voting outweigh the risks?

The Security Challenges of Online Voting Have Not Gone Away

Online voting is sometimes heralded as a solution to all our election headaches. Proponents claim it eliminates hassle, provides better verification for voters and auditors, and may even increase voter turnout. In reality, it’s not a panacea, and certainly not ready for use in U.S. elections.

Recent events have illustrated the complex problem of voting in the presence of a state-level attacker, and online voting will make U.S. elections more vulnerable to foreign interference. In just the past year, we have seen Russian hackers exfiltrate information from the Democratic National Committee and probe voter databases for vulnerabilities, prompting the U.S. government to formally accuse Russia of hacking.

In light of those events, the U.S. Department of Homeland Security may soon classify voting systems as critical infrastructure, underscoring the significant cybersecurity risks facing American elections. Internet voting would paint an even more attractive target on the ballot box for Russian adversaries with a record of attempting to disrupt elections through online attacks.

In the face of such an adversary, the few online voting trials that have been carried out in the U.S. do not inspire confidence. In 2010, Washington, D.C. ran a pilot of an online voting system and invited security experts to try to breach it. Hackers changed all of the votes in less than 48 hours. The 2016 Utah GOP Caucus included an online voting option that was rife with procedural mistakes, which prevented an estimated 10,000 Utahns from using the system.

Online voting has also been conducted during live elections in places like Estonia, Norway, and Australia. It is hard to know the degree of security attained in these elections, because vendors and officials have no incentive to disclose suspected breaches. However, independent researchers discovered vulnerabilities in both the 2015 New South Wales online election and in Estonia’s system in a 2013 study. Among the problems that were discovered: exploitable vulnerabilities in the connections between voters’ computers and election servers, as well as procedural and architectural weaknesses that could allow state-level attackers like Russia to manipulate entire elections.

Voting is an unusually difficult security problem, because officials must guarantee a correct result while simultaneously ensuring that voters’ choices remain private—and all without being able to trust any individual participants to act impartially. Furthermore, the election has to produce a result on election day, and we cannot delay voting or rerun the election if the system comes under attack. These requirements mean that traditional online security techniques, like those used to protect banking and commerce, are insufficient for elections.

Today, the vast majority of secure Internet communication takes place using Transport Layer Security (TLS), a cryptographic protocol in which vulnerabilities continue to be found. Three times in the past two years, researchers uncovered TLS flaws that could compromise up to one-third of popular sites. If an online voting system were among the susceptible sites, attackers might be able to intercept votes, discover how individuals voted, prevent votes from being cast, or even change votes.
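For context, invoking TLS correctly on the client side is straightforward; the danger lies in flaws discovered later in the protocol and its implementations. A minimal strict client configuration using Python’s standard ssl module:

```python
import ssl

# A client-side TLS context with strict settings: certificate checking and
# hostname verification are enabled by default, and older, weaker protocol
# versions are refused explicitly.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # → True True
```

Even a context like this offers no protection once a flaw is found in the protocol itself, which is the scenario the paragraph above describes.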

For another sobering example of what might go wrong with online voting, look no further than the Mirai botnet attack which just last month interrupted access to many of the Web’s most popular sites. Had the target been an online election, large portions of the country would have been unable to vote.

Even if the election servers and communication channels are secure, online elections rely critically on the security of the devices voters use to vote. That’s a problem, because up to 30 percent of computers in the U.S. are already infected with malicious software, and malware could prevent ballots from being transmitted or replace them with entirely different votes.

Beyond these obstacles, an online voting system needs to securely authenticate voters’ identities. In Estonia—a country less populous than 41 U.S. states—this is accomplished using cryptographic chips embedded in every citizen’s national ID card, which voters scan using a card reader attached to their laptops. We have no similar infrastructure in the United States, and a significant number of eligible voters lack any form of government-issued identification.

Overcoming these security challenges remains an area of active research. Computer scientists have proposed promising techniques for securing online elections based on advanced cryptography. These would let voters confirm that their votes were properly counted, without revealing to anyone else exactly how they voted. However, no technique has yet been demonstrated to be both practical enough for use by real voters and sufficient to protect against a well-resourced nation-state. There even remains considerable controversy among security and privacy researchers about what it means for an online election to be secure.
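To get a feel for the kind of cryptography involved, here is a toy sketch in the style of the Paillier cryptosystem, one well-known additively homomorphic scheme: encrypted ballots can be multiplied together, and only the final tally is ever decrypted. The primes are tiny and hard-coded, so this is illustration only and is in no way secure.

```python
import math
import random

# Toy Paillier keypair with tiny primes -- for illustration only, NOT secure.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)    # private key
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # precomputed decryption helper

def encrypt(vote: int) -> int:
    """Encrypt a 0/1 ballot. Randomness r makes identical votes look different."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, vote, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts adds the underlying votes: no individual ballot is
# ever decrypted, only the running total.
ballots = [1, 0, 1, 1, 0, 1]
tally = 1
for b in ballots:
    tally = (tally * encrypt(b)) % n2
print(decrypt(tally))  # → 4
```

Real end-to-end verifiable systems layer much more on top (zero-knowledge proofs that each ballot is valid, threshold decryption so no single official holds the key), which is part of why none has yet been shown practical at nation-state threat levels.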

Even ignoring the security risks, the benefits of Internet voting are less certain than was once believed. Evidence from Estonia—including a rise in overall voter turnout of just 1.5 percent attributable to online voting—suggests that most voters would have cast ballots even without Internet voting. Internet voting seems to primarily make voting easier for those who vote already. What is certain is that online voting would make it easier for external players to tamper with elections.

In light of the uncertain benefits of voting online, it is crucial that we in the United States not rush to entrust our democracy to it. Some of the most difficult unsolved problems in computer security stand in the way: authenticating remote users, protecting home computers from malware, safeguarding online communication, preventing denial-of-service attacks, and protecting critical infrastructure from nation-state attackers. These challenges are among the most exciting and important in computer science and engineering—and many are striving to address them—but it may be decades, if ever, before they are solved to the level that we can vote online with confidence.

Robert Cunningham is chair of the IEEE Cybersecurity Initiative. Matthew Bernhard is a second-year computer science Ph.D. student focused on security issues at the University of Michigan and tweets from @umbernhard. J. Alex Halderman is a professor of computer science and engineering at the University of Michigan and director of Michigan's Center for Computer Security and Society.

Interior view of smart mailbox with hardware components

RFID + Camera + Lock = Smart Mailbox

A self-locking mailbox could someday flag down delivery drones and intelligently screen your driveway for intruders.

Columbus State University computer scientist Lydia Ray presented the technology, called the ADDSMART project, during a 20 October session at the annual IEEE Ubiquitous Computing, Electronics, and Mobile Communication Conference in New York City.

The project aims to achieve two goals: clearly marking addresses for autonomous vehicles, and reducing the energy and data storage costs of home surveillance systems. An early prototype mailbox attachment suggests that the trick, in both cases, may be radio-frequency identification.

Powered by an Arduino Yún board, one component of the ADDSMART device controls a high-frequency 13.56-MHz RFID reader, a USB camera, a passive-infrared motion sensor, a solenoid lock, and an onboard Wi-Fi module. The second component is an RFID tag.

Ray came up with the idea when she saw an Amazon ad for drones delivering packages. She wondered how that would be possible, as some of her regular mail still arrives at the wrong address.

In the United States, Amazon, Google, and startups such as the Reno, Nev.–based Flirtey are trying delivery via drones. One of a drone’s challenges is to home in on its destination. But accurately identifying addresses with standard GPS alone is difficult, Ray says, because GPS provides only latitude and longitude. A GPS sensor is good for identifying a general location, but an additional system is needed for pinpointing a precise address.

Some approaches to the location problem rely on computer vision techniques and cameras. But Ray points out that even identifying addresses with human vision can be hard. At her house, the address is written on the pavement and “is not easily identifiable.” Google Street View, which updates infrequently at best, doesn’t show that her neighbor’s house recently changed colors, and it wouldn’t work well for finding an address at night.

With an RFID tag on a home’s mailbox and an RFID reader on a drone or car, Ray believes that the delivery process could become relatively easy. The drone would use GPS to navigate to an address and then confirm that the address is correct by checking the RFID tag.

Once Ray decided to attach an RFID tag to a mailbox, she realized that RFID can do more than flag down drones: it offers security, too. An RFID-reader-equipped system could store a list of “safe” RFID tags whose possessors would be able to pass by a home or open the mailbox unimpeded. 

Instead of a home surveillance system continuously checking for intruders, a video camera could save energy by starting to record only when an unrecognized vehicle or person passes the mailbox. The mailbox could also unlock when authorized users—such as a homeowner or mail carrier—arrive.

After soldering and wiring the necessary hardware for the smart mailbox and writing computer scripts for running commands, Ray and her student, Jonathan Ross Tew, tested the sensors indoors and outdoors. 

When the motion sensor detected a change in passive infrared radiation—a type of electromagnetic radiation given off by anything warmer than absolute zero, about −273 °C—the USB camera took a picture. Computer scripts sent the picture via email to a recipient and uploaded it to Dropbox.

Also, when an RFID tag was in the RFID reader’s limited detection range, the system checked whether the tag was marked with the homeowner ID or postman ID. In either case, it would open the solenoid lock, but the postman tag also triggered an email alert.
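The access-control logic described above fits in a few lines; the tag IDs below are hypothetical placeholders for values a real RFID reader would report:

```python
# Hypothetical tag IDs; a real system would read these from the RFID reader.
HOMEOWNER_TAGS = {"A1B2C3"}
POSTMAN_TAGS = {"D4E5F6"}

def handle_tag(tag_id: str) -> dict:
    """Decide what the smart mailbox does when a tag enters reader range."""
    known = tag_id in HOMEOWNER_TAGS or tag_id in POSTMAN_TAGS
    return {
        "unlock": known,                        # either authorized tag opens the lock
        "email_alert": tag_id in POSTMAN_TAGS,  # mail carrier also triggers an email
        "record_video": not known,              # unrecognized visitor starts the camera
    }

print(handle_tag("D4E5F6"))  # postman: unlock, send an email alert, no recording
```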

Ray said the passive-infrared motion sensor the team used pretty much failed outdoors—there were 931 false positives out of 937 tries. Using a more expensive sensor could help, she said.

Future work includes testing the system under various conditions and investigating the total area of surveillance coverage, technical interference with nearby smart mailboxes (such as the Kickstarter project Mr. Postman), security, and privacy.

So how will smart mailboxes flag down drones in apartment and condo complexes with cluster mailboxes? Ray told IEEE Spectrum that each individual mailbox could have its own RFID tag with an apartment number or post box number for flagging down drones. As for the surveillance function, Ray says a monitoring system similar to the one used in the prototype could monitor the door of each apartment.

ZCash Will Be a Truly Anonymous Blockchain-Based Currency

Last Friday, I was in a van in Denver, Colorado, with Zooko Wilcox, the CEO of ZCash, a company that on 28 October will launch a new blockchain-based digital currency of the same name. On the floor next to me was a bunch of newly purchased computer equipment. I knew we were going to a hotel, but didn’t know where. I only knew that I’d be there for the next two days straight and that it would be my job to watch, ask questions, stave off sleep, and document as much as I possibly could.

That day began a cryptographic ceremony of sorts, one that will make or break a new digital currency. ZCash is identical to Bitcoin in a lot of ways. It’s founded on a digital ledger of transactions called a blockchain that exists on an army of computers that can be anywhere in the world. But it differs from Bitcoin in one critical way: It will be completely anonymous. Although privacy was a motivating factor for Bitcoin’s flock of early adopters, Bitcoin doesn’t deliver the goods. For those who want to digitally replicate the experience of slipping on a ski mask and handing over an envelope of unmarked bills, ZCash is the new way to go.

Illustration of a padlock

Which Path to IoT Security? Government Regulation, Third-Party Verification, or Market Forces

On Friday, a series of distributed denial-of-service attacks hit Dyn, a company that provides a form of traffic control for popular websites, and interrupted some users’ access to sites including Github, Twitter, and Netflix. Since then, it has become clear that these attacks were made possible by security vulnerabilities in millions of devices within the Internet of Things.

On Monday at the National Cyber Security Alliance’s Cybersecurity Summit in New York City, industry leaders from security firms, Internet service providers, and device manufacturers fretted over the implications. Panelists spoke about the existential dangers that companies in the fast-growing IoT sector face if they continue to fail to secure these devices and debated ways in which the industry can improve security within this ecosystem.

“Friday showed us that the genie is well out of the bottle at this point,” said Andrew Lee, CEO at security company ESET North America. “This should probably be the wake-up call to manufacturers to start taking this seriously.”

While it’s still not clear who executed Friday’s attacks, Dyn has announced that hackers orchestrated them across “tens of millions” of IP addresses gathered through Mirai, malware that scans the Internet for connected devices with weak security. The malware then enlists these devices into a massive global network called a botnet. Increasingly, hackers have used these networks to launch distributed denial-of-service attacks, in which they instruct many devices to send traffic to a target at once in order to overload its capacity and prevent real users from accessing a website or service.

A close-up image of a finger pushing a red key titled "DDoS," which stands for distributed denial-of-service attacks, on a white keyboard.

What Is a Distributed Denial-of-Service Attack and How Did It Break Twitter?

On Friday, multiple distributed denial-of-service (DDoS) attacks hit the Internet services company Dyn. The cyberattack prevented many users on the U.S. East Coast from navigating to the most popular websites of Dyn customers, which include Twitter, Reddit, and Netflix.

Dyn detected the first attack at 7:10 a.m. Eastern time on Friday and restored normal service about two hours later. Then at 11:52 a.m. ET, Dyn began investigating a second attack. By 2:00 p.m., the company said it was still working to resolve “several attacks” at once.

The interruptions inconvenienced many Internet users and disrupted the daily operation of Internet giants in entertainment, e-commerce, and social media. There still aren’t many details available about Dyn’s predicament, and the company did not immediately respond to an interview request. But we do know from Dyn’s posts that the first two assaults on its network were DDoS attacks. Its customers’ outages again show that major Internet companies remain vulnerable to this common hacker scheme—one that has plagued networks since 2000.

A denial-of-service attack aims to slow or stop users from accessing content or services by impeding the ability of a network or server to respond to their requests. The word “distributed” means that hackers executed the Dyn attacks by infecting and controlling a large network of computers called a botnet, rather than running it from a single machine that they own.

Hackers can assemble a botnet by spreading malware, which is often done by prompting unsuspecting users to click a link or download a file. That malware can be programmed to periodically check with a host computer owned by hackers for further instructions. To launch an attack, the hackers, or bot-herders, send a message through this “command and control” channel, prompting infected computers to send many requests for a particular website, server, or service all at once. Some of the biggest botnets in history have boasted 2 million computers, capable of sending up to 74 billion spam emails a day.

The sudden onslaught of requests quickly gobbles up all the network's bandwidth, disk space, or processing power. That means real users can’t get their requests through because the system is too busy trying to respond to all the bots. In the worst cases, a DDoS can crash a system, taking it completely offline.

Both of Friday’s attacks targeted Dyn’s Managed Domain Name System. Through this system, Dyn provides a routing service that translates the human-readable Web addresses users type into a browser into the numerical IP addresses of the servers that host the content. Users who type in a Web address are first sent through a Dyn server that looks up the IP address of a server hosting the content the user is trying to reach. The Dyn server passes this information on to the user’s browser.

To disrupt this process, says Sanjay Goel, a professor of information technology at the State University of New York (SUNY) at Albany, the bot-herders probably sent tons of translation requests directly to Dyn’s servers by looking up the servers’ IP addresses. They could also have simply instructed the bots to request the Web addresses of Dyn’s customers to cause similar issues. Attacking a DNS or content delivery provider such as Dyn or Akamai in this manner gives hackers the ability to interrupt many more companies than they could by directly attacking corporate servers, because several companies share Dyn’s network.

Dyn has built its Managed DNS on an architecture called Anycast, in which any particular IP address for a server in its system can actually be routed through servers in more than a dozen data centers. So, if the IP address of one server is targeted, 10 others may still be able to handle the normal traffic while it’s besieged with bot requests. Art Manion, a technical manager at Carnegie Mellon University’s Software Engineering Institute, says this system should make Dyn more resilient to DDoS attacks, and the company has touted it as highly secure.

Dyn said on Friday in an update to its website that the first attack mainly impacted services in the “US East.” The Anycast network includes data centers in Washington, D.C., Miami, and Newark, N.J., as well as in Dallas and Chicago, though it’s not clear whether these locations were specifically targeted.    

Even in the affected region, only certain users experienced issues. One reason could be that other users’ browsers had previously used Dyn to locate the specific server they needed to reach, say, the one hosting Twitter’s website. Because that information is now cached in their browsers, those users can bypass Dyn to fetch the desired content, so long as the servers that store Twitter’s website are still functioning.
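A toy model of this lookup-and-cache behavior (the zone table and IP address below are made up):

```python
# A made-up zone table standing in for Dyn's Managed DNS.
AUTHORITATIVE = {"": ""}

cache = {}  # the browser/OS resolver cache

def resolve(hostname: str) -> str:
    """Return an IP for hostname, consulting the cache before the DNS provider."""
    if hostname in cache:           # cached entries survive a DNS outage...
        return cache[hostname]
    ip = AUTHORITATIVE[hostname]    # ...otherwise we depend on the provider being up
    cache[hostname] = ip
    return ip

resolve("")    # first lookup hits the provider and fills the cache
AUTHORITATIVE.clear()             # simulate the DNS provider going down
print(resolve(""))  # still resolves, straight from the cache
```

Real resolver caches expire entries after a time-to-live set by the DNS provider, which is why a prolonged outage eventually affects everyone.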

Another reason for the inconsistent impacts could be that a common mechanism for handling DDoS attacks is to simply drop every fifth request from the queue in order to relieve the network of traffic. The result: Some requests from legitimate users wind up being dropped along with those from bots.
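That shedding strategy can be sketched directly; note that the filter cannot distinguish bots from legitimate users:

```python
def shed_every_fifth(requests):
    """Yield requests, dropping every fifth one to relieve an overloaded server.

    The server cannot tell bots from real users here, so some legitimate
    requests are dropped along with the malicious ones.
    """
    for i, req in enumerate(requests, start=1):
        if i % 5 != 0:
            yield req

served = list(shed_every_fifth(range(100)))
print(len(served))  # → 80
```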

Once an attack begins, companies can bring backup servers online to manage the blizzard of requests. Victims can also work with Internet service providers to block the IP addresses of devices generating the most traffic, which means that they're likely part of the botnet. "You start blocking the different addresses where it's coming from, so depending on how massive the botnet is, it may take some time," says SUNY Albany's Goel.

Increasingly, bot-herders have recruited Internet of Things devices, which often have poor security, to their ranks. This allows them to launch ever more powerful attacks because of the sheer number of such devices. Two of the largest DDoS attacks on record have occurred within the past two months: first, a 620-gigabit-per-second attack targeting the website of independent security reporter Brian Krebs, and then a 1,100-Gb/s siege on the French hosting company OVH.

Even with state-of-the-art protections and mitigation strategies, companies are limited by the amount of bandwidth they have to handle such sudden onslaughts. “Ultimately, Akamai has total x amount of bandwidth, and if the attacker is sending x-plus-10 traffic, the attacker still wins,” says Carnegie Mellon's Manion. “It mathematically favors whoever has more bandwidth or more traffic, and the attackers today can have more traffic.”

Dyn’s global network manages over 500 billion queries a month, so the culprits would have had to send many millions or even billions of requests simultaneously in order to stall it. Manion says that to prevent DDoS attacks, companies must address root causes such as poor IoT security, rather than scrambling to stop them once they’ve begun.

Stanford University Ising Machine

New Computer Combines Electronic Circuits with Light Pulses

Modern computers still cannot efficiently find the best solution to the classic “traveling salesman” problem; even finding approximate solutions is challenging. But finding the shortest traveling-salesman route among many different cities is more than just an academic exercise. This class of problems lies at the heart of many real-world business challenges, such as scheduling delivery-truck routes or discovering new drugs.
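
To see why exact solutions are out of reach for large instances, here is a minimal brute-force solver; the number of tours it must check grows factorially with the number of cities, so even a few dozen cities are hopeless:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively check every round trip through all cities.

    `dist` is an n x n matrix of pairwise distances. The search
    examines (n-1)! tours, which is why exact solutions do not
    scale beyond small n.
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# A classic 4-city example; the optimal round trip has length 80.
distances = [[0, 10, 15, 20],
             [10, 0, 35, 25],
             [15, 35, 0, 30],
             [20, 25, 30, 0]]
tour, length = tsp_brute_force(distances)
```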

Read More
Stanford computer science professor calls for greater ethnic and gender diversity in artificial intelligence

Computer Vision Leader Fei-Fei Li on Why AI Needs Diversity

As Fei-Fei Li sees it, this is a historic moment for civilization, fueled by an artificial intelligence revolution. “I call everything leading up to the second decade of the twenty-first century AI in vitro,” the Stanford computer science professor told the audience at last week’s White House Frontiers Conference. Until now, the technology has been fundamentally understood, formulated, and tested in labs. “At this point we’re going AI in vivo,” she said. “AI is going to be deployed in society on every aspect of industrial and personal needs.”

It’s already around us in the form of Google searches, voice recognition, and autonomous vehicles. Which makes this a critical time to talk about diversity.

The lack of diversity in AI is representative of the state of computer science and the tech industry in general. In the United States, for example, women and ethnic minorities such as African-Americans and Latinos are especially underrepresented. Just 18 percent of computer science grads today are women, down from a peak of 37 percent in 1984, according to the American Association of University Women. The problem is worse in AI. At the Recode conference this summer, Margaret Mitchell, the only female researcher in Microsoft’s cognition group, called it “a sea of dudes.”

But the need for diversity in AI is more than just a moral issue. There are three reasons why we should think deeply about increasing diversity in AI, Stanford’s Li says.

The first is simply practical economics. The current technical labor force is not large enough to handle the work that needs to be done in the fields of computing and AI. There aren’t many specific numbers on diversity in AI, but anecdotal evidence suggests they would be dismal. Take, for instance, Stanford’s computer science department. AI has the smallest percentage of women undergrads, at least as compared to tracks like graphics or human-computer interaction, Li points out. Worldwide, the contribution of automation and machine learning to GDP is expected to rise. So it’s really important that more people study AI, and that they come from diverse backgrounds. “No matter what data we look at today, whether it’s from universities or companies, we lack diversity,” she says.

Another reason diversity should be emphasized is its impact on innovation and creativity. Research repeatedly shows that when people work in diverse groups, they come up with more ingenious solutions. AI will impact many of our most critical problems, from urban sustainability and energy to healthcare and the needs of aging populations. “We need a diverse group of people to think about this,” she says.

Last, but certainly not least, is justice and fairness. To teach computers how to identify images or recognize voices, you need massive data sets. Those data sets are made by computer scientists. And if you only have seas of (mostly white) dudes making those data sets, biases and unfairness inadvertently creep in. “Just type the word grandma in your favorite search engine and you’ll see the bias in pictures returned,” Li says. “You’ll see the race bias. If we’re not aware of the bias of data, we’re going to start creating really problematic issues.”

What can we do about this? Bring a humanistic mission statement to the field of AI, Li says. “AI is fundamentally an applied technology that’s going to serve our society,” she says. “Humanistic AI not only raises the awareness of the importance of the technology, it’s a really important way to attract diverse students, technologists and innovators to participate.”


