
Microsoft’s Brad Smith on How to Responsibly Deploy AI

Microsoft’s president talks about the promise and perils of artificial intelligence

Brad Smith of Microsoft speaking on stage at Columbia University.
Photo: Tim Lee

AI can reveal how many cigarettes a person has smoked based on the DNA contained in a single drop of their blood, or scrutinize Islamic State propaganda to discover whether violent videos are radicalizing potential recruits.

Because AI is such a powerful tool, Microsoft president Brad Smith told the crowd at Columbia University’s recent Data Science Day that tech companies and universities performing AI research must also help ensure the ethical use of such technologies.

AI is now an invisible but inextricable part of life for hundreds of millions of people. The rise of machine learning algorithms, combined with cloud computing services, has put massive computing power at the fingertips of companies and customers worldwide.

These trends have also enabled the rise of data science, which applies AI methods to continuously analyze information from online services and Internet-connected devices. In his talk, Smith emphasized the need for policies and laws that hold these systems and machines accountable to humans.

“We are the first generation of people on this planet, in the history of this planet, to give machines this kind of power,” Smith said. “It is up to us to make sure machines remain accountable to people. If we fail, we’re going to do a huge disservice to every generation that follows us.”

Invoking George Orwell’s dystopian novel 1984, Smith said it was important to protect democratic freedoms such as the rights of assembly and free speech when considering how and whether to deploy facial recognition for law enforcement and security purposes.

“We are entering a world where ubiquitous cameras, the cloud, and facial recognition… can fundamentally create mass surveillance on an unprecedented scale,” Smith said. “A scale that is unprecedented but not unimaginable.”

To prevent abuse of mass surveillance, Smith had previously proposed laws that only allow law enforcement to use facial recognition for ongoing surveillance of specific individuals when police get a court order or during emergencies involving imminent risk of death or serious injury. In his talk at Columbia, he also reiterated his suggestion that companies be required to give “conspicuous notice” to individuals when companies use facial recognition. In March 2019, Washington state senators passed a bill regulating the use of facial recognition that incorporated many of Microsoft’s recommendations.

Microsoft has also defined its corporate position on “killer robots” and the military deployment of AI that could extend to autonomous weapons, Smith said. Despite some resistance from AI researchers and Silicon Valley engineers, he argued that the company’s commitment to supporting U.S. military projects allows it to shape policies that ensure responsible and ethical use of AI on the battlefield. By comparison, Google has attracted the ire of U.S. military leaders and President Donald Trump by bowing to internal employee pressure and withdrawing from some Pentagon projects.

At Columbia, Smith also emphasized the need for experts from many disciplines to weigh in on the creation of AI technologies. “I think we’re rapidly entering a world where every single computer and data scientist is going to need to learn more ethics and philosophy and social science as part of your degree,” he said. “And every liberal arts major will need [to know] something about computer and data science.”

Columbia’s Data Science Institute, which began as part of the university’s engineering school in 2012, expanded five years later into an interdisciplinary research institute spanning 11 schools. The recent Data Science Day featured researchers with expertise in business, law, international and public affairs, environmental health sciences, biomedical informatics, and journalism.

The projects on display provided a peek into the power of AI technologies—and the variety of potentially thorny issues of responsible and ethical use.

In one project, machine learning algorithms analyze the epigenome, the chemical record of how certain factors switch genes on and off, to predict a person’s history of cigarette smoking and estimate lifetime exposure to toxic lead, according to Andrea Baccarelli, an endocrinologist and director of the Laboratory of Precision Environmental Biosciences at Columbia.
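To give a sense of the general shape of such a model, and not of Baccarelli’s actual pipeline, here is a minimal sketch of a penalized regression that maps DNA-methylation levels to smoking exposure. The dataset, feature layout, and model choice are illustrative assumptions.

```python
# Illustrative sketch only: predicting smoking exposure (pack-years) from
# DNA-methylation levels with a penalized linear model. The dataset,
# feature layout, and coefficients are hypothetical, not from the study.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for an epigenomic dataset: rows are blood samples,
# columns are methylation fractions (0..1) at CpG sites.
n_samples, n_sites = 500, 1000
X = rng.uniform(0.0, 1.0, size=(n_samples, n_sites))
# Pretend two CpG sites carry most of the smoking signal.
pack_years = 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 2, n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, pack_years, test_size=0.2, random_state=0)

# Elastic-net regression copes with many correlated features and keeps
# a sparse subset of predictive sites.
model = ElasticNetCV(cv=5, random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```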

Elsewhere, George Hripcsak, director of medical informatics services for New York-Presbyterian Hospital’s Columbia campus, discussed a global research project called Observational Health Data Sciences and Informatics (OHDSI) that aims to analyze the effectiveness of different drug treatments across half a billion patient records.
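For flavor, a comparative-effectiveness question of the kind OHDSI asks can be boiled down to comparing outcome rates between treatment cohorts. The toy example below does only that crude comparison; the column names and numbers are invented, and real OHDSI studies run on the OMOP common data model and adjust for confounding.

```python
# Toy comparative-effectiveness sketch: compare a crude outcome rate between
# two treatment cohorts. Column names and values are invented; real OHDSI
# studies run on the OMOP common data model and adjust for confounders.
import pandas as pd

records = pd.DataFrame({
    "patient_id":    range(8),
    "treatment":     ["drug_a"] * 4 + ["drug_b"] * 4,
    "adverse_event": [0, 1, 0, 0, 1, 1, 0, 1],  # 1 = outcome occurred
})

# Unadjusted outcome rate per treatment arm.
rates = records.groupby("treatment")["adverse_event"].mean()
print(rates)
print("crude risk difference (a - b):", rates["drug_a"] - rates["drug_b"])
```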

Applying AI to better manage the healthcare industry can also create complex ethical scenarios. Carri Chan, an associate professor of business at Columbia, used machine learning and predictive modeling to help hospitals in Northern California decide when to proactively transfer certain patients from general wards to the intensive care unit (ICU), a decision with implications for both the patients’ health and the facility’s bottom line. Her work showed that hospitals could reduce both the chance of death and the average length of hospitalization by proactively admitting patients whose risk scores reached the two highest levels of severity.
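The decision rule described above can be caricatured as thresholding a predicted risk score. The sketch below shows that general shape: a score binned into severity levels, with proactive ICU transfer triggered at the two highest levels. The cut points, scores, and patient records are made up for illustration and are not Chan’s model.

```python
# Minimal sketch of a threshold policy for proactive ICU transfer.
# The rule "admit at the two highest severity levels" mirrors the article;
# the cut points, scores, and patients are invented.
from dataclasses import dataclass

SEVERITY_BINS = [0.2, 0.4, 0.6, 0.8]  # hypothetical cut points -> levels 1..5

@dataclass
class Patient:
    patient_id: str
    risk_score: float  # output of a predictive model, 0..1

def severity_level(score: float) -> int:
    """Map a continuous risk score to a discrete severity level (1 = lowest)."""
    return 1 + sum(score >= cut for cut in SEVERITY_BINS)

def proactive_icu_transfer(p: Patient) -> bool:
    """Flag patients in the top two severity levels for proactive ICU admission."""
    return severity_level(p.risk_score) >= 4

ward = [Patient("A", 0.35), Patient("B", 0.72), Patient("C", 0.91)]
for p in ward:
    print(p.patient_id, severity_level(p.risk_score), proactive_icu_transfer(p))
```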

Another group used AI and big data to try to better understand extremist behavior. Tamar Mitts, an assistant professor of international and public affairs at Columbia University, used deep learning to scan for violent content in Islamic State propaganda videos shared on social media and analyzed the impact of those videos on Twitter users.

Her study showed that executions and other violent content actually represented a turnoff for most followers of Twitter accounts associated with Islamic State, although not for the most fanatical followers. Meanwhile, social media giants still struggle to deploy AI tools to police violent content, and critics point to algorithm-driven recommendations that can radicalize people on YouTube and other platforms.

Even the simplest algorithms, if deployed irresponsibly or maliciously, can exacerbate inequalities or cause harm on a large scale. Smith called for companies and governments alike to implement policies and regulations where the ethical decisions are already clear—and to continue advancing debate in those cases where people have yet to agree on a course of action.

“After millennia of debate about ethics, there is no universal consensus and we’re not going to arrive at one overnight because computers are needing to make ethical decisions as well,” Smith said. “But it makes it really urgent to have a conversation about the ethical principles that will guide artificial intelligence.”

This story was updated on 15 April 2019.
