Stop Calling Everything AI, Machine-Learning Pioneer Says

Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent

Michael I. Jordan
Photo: Peg Skorpinski

THE INSTITUTE Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.

He notes that the imitation of human thinking is not the sole goal of machine learning—the engineering field that underlies recent progress in AI—or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

“People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don't have that, but people are talking as if we do.”

Jordan should know the difference, after all. The IEEE Fellow is one of the world's leading authorities on machine learning. In 2016 he was ranked as the most influential computer scientist by a program that analyzed research publications, Science reported. Jordan helped transform unsupervised machine learning, which can find structure in data without preexisting labels, from a collection of unrelated algorithms to an intellectually coherent field, the Engineering and Technology History Wiki explains. Unsupervised learning plays an important role in scientific applications where there is an absence of established theory that can provide labeled training data.

Jordan's contributions have earned him many awards including this year's Ulf Grenander Prize in Stochastic Theory and Modeling from the American Mathematical Society. Last year he received the IEEE John von Neumann Medal for his contributions to machine learning and data science.

In recent years, he has been on a mission to help scientists, engineers, and others understand the full scope of machine learning. He believes that developments in machine learning reflect the emergence of a new field of engineering. He draws parallels to the emergence of chemical engineering in the early 1900s from foundations in chemistry and fluid mechanics, noting that machine learning builds on decades of progress in computer science, statistics, and control theory. Moreover, he says, it is the first engineering field that is human-centric, focused on the interface between people and technology.

“While the science-fiction discussions about AI and superintelligence are fun, they are a distraction,” he says. “There's not been enough focus on the real problem, which is building planetary-scale machine learning–based systems that actually work, deliver value to humans, and do not amplify inequities.”


As a child of the '60s, Jordan has been interested in philosophical and cultural perspectives on how the mind works. He was inspired to study psychology and statistics after reading British logician Bertrand Russell's autobiography. Russell explored thought as a logical mathematical process.

“Thinking about thought as a logical process and realizing that computers had arisen from software and hardware implementations of logic, I saw a parallel to the mind and the brain,” Jordan says. “It felt like philosophy could transition from vague discussions about the mind and brain to something more concrete, algorithmic, and logical. That attracted me.”

Jordan studied psychology at Louisiana State University, in Baton Rouge, earning a bachelor's degree in the subject in 1978. He received a master's degree in mathematics from Arizona State University, in Tempe, in 1980, and a doctorate in cognitive science from the University of California, San Diego, in 1985.

When he entered college, the field of machine learning didn't exist. It had just begun to emerge when he graduated.

“While I was intrigued by machine learning,” he says, “I already felt at the time that the deeper principles needed to understand learning were to be found in statistics, information theory, and control theory, so I didn't label myself as a machine-learning researcher. But I ended up embracing machine learning because there were interesting people in it, and creative work was being done.”

In 2003 he and his students developed latent Dirichlet allocation, a probabilistic framework for learning about the topical structure of documents and other data collections in an unsupervised manner, according to the Wiki. The technique lets the computer, not the user, discover patterns and information on its own from documents. The framework is one of the most popular topic modeling methods used to discover hidden themes and classify documents into categories.
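The unsupervised topic discovery the paragraph describes can be sketched with scikit-learn's implementation of latent Dirichlet allocation. The tiny corpus, the two-topic setting, and all parameter choices below are illustrative assumptions, not details from the article:

```python
# A minimal sketch of unsupervised topic discovery with latent Dirichlet
# allocation (LDA), using scikit-learn. No labels are supplied; the model
# discovers topical structure from word counts alone.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the stock market rallied as investors bought shares",
    "the team won the game with a late goal",
    "shares fell after the market opened lower",
    "fans cheered as the team scored another goal",
]

# Convert raw text to a document-term count matrix.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Fit a two-topic model; LDA treats each document as a mixture of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each row is a document's inferred topic mixture, and each row sums to 1.
print(doc_topics.shape)  # (4, 2)
```

On a corpus this small the topics are noisy; the point is only that the per-document topic mixtures are inferred without any preexisting labels, which is what distinguishes the unsupervised setting.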

Jordan's current projects incorporate ideas from economics into his earlier blending of computer science and statistics. He argues that the goal of learning systems is to make decisions, or to support human decision-making, and decision-makers rarely operate in isolation. They interact with other decision-makers, each of whom might have different needs and values, and the overall interaction needs to be informed by economic principles. Jordan is developing “a research agenda in which agents learn about their preferences from real-world experimentation, where they blend exploration and exploitation as they collect data to learn from, and where market mechanisms can structure the learning process—providing incentives for learners to gather certain kinds of data and make certain kinds of coordinated decisions. The beneficiary of such research will be real-world systems that bring producers and consumers together in learning-based markets that are attentive to social welfare.”
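The "blend exploration and exploitation" phrase refers to a classic trade-off in learning from real-world experimentation, often illustrated with a multi-armed bandit. Below is a minimal epsilon-greedy sketch; the arms, payoff probabilities, and epsilon value are illustrative assumptions, not from the article:

```python
# A minimal epsilon-greedy multi-armed bandit: an agent repeatedly chooses
# among "arms" (e.g., product offers), mostly exploiting its current best
# estimate while occasionally exploring to keep gathering data.
import random

random.seed(0)

true_means = [0.3, 0.5, 0.7]          # hidden payoff probability of each arm
counts = [0] * len(true_means)        # pulls per arm
estimates = [0.0] * len(true_means)   # running mean reward per arm
epsilon = 0.1                         # fraction of pulls spent exploring

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_means))                         # explore
    else:
        arm = max(range(len(true_means)), key=lambda a: estimates[a])   # exploit
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]           # update mean

# Over many rounds the best arm accumulates the bulk of the pulls.
print(counts.index(max(counts)))
```

The exploration fraction is the lever: too little and the agent can lock onto a mediocre arm; too much and it wastes decisions it already knows are suboptimal. Jordan's agenda layers market incentives on top of this kind of data-gathering behavior.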


In 2019 Jordan wrote “Artificial Intelligence—The Revolution Hasn't Happened Yet,” published in the Harvard Data Science Review. He explains in the article that the term AI is misunderstood not only by the public but also by technologists. Back in the 1950s, when the term was coined, he writes, people aspired to build computing machines that possessed human-level intelligence. That aspiration still exists, he says, but what has happened in the intervening decades is something different. Computers have not become intelligent per se, but they have provided capabilities that augment human intelligence, he writes. Moreover, they have excelled at low-level pattern-recognition capabilities that could be performed in principle by humans but at great cost. Machine learning–based systems are able to detect fraud in financial transactions at massive scale, for example, thereby catalyzing electronic commerce. They are essential in the modeling and control of supply chains in manufacturing and health care. They also help insurance agents, doctors, educators, and filmmakers.

Despite such developments being referred to as “AI technology,” he writes, the underlying systems do not involve high-level reasoning or thought. The systems do not form the kinds of semantic representations and inferences that humans are capable of. They do not formulate and pursue long-term goals.

“For the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations,” he writes. “We will need well-thought-out interactions of humans and computers to solve our most pressing problems. We need to understand that the intelligent behavior of large-scale systems arises as much from the interactions among agents as from the intelligence of individual agents.”

Moreover, he emphasizes, human happiness should not be an afterthought when developing technology. “We have a real opportunity to conceive of something historically new: a human-centric engineering discipline,” he writes.

Jordan's perspective includes a revitalized discussion of engineering's role in public policy and academic research. He points out that when people talk about social science, it sounds appealing, but the term social engineering sounds unappealing. The same holds true for genome science versus genome engineering.

“I think that we've allowed the term engineering to become diminished in the intellectual sphere,” he says. The term science is used instead of engineering when people wish to refer to visionary research. Phrases such as “just engineering” don't help.

“I think that it's important to recall that for all of the wonderful things science has done for the human species, it really is engineering—civil, electrical, chemical, and other engineering fields—that has most directly and profoundly increased human happiness.”


Jordan says he values IEEE particularly for its investment in building mechanisms whereby communities can connect with each other through conferences and other forums.

He also appreciates IEEE's thoughtful publishing policies. Many of his papers are available in the IEEE Xplore Digital Library.

“I think commercial publishing companies have built a business model that is now ineffectual and is actually blocking the flow of information,” he says. Through the open-access journal IEEE Access, he says, the organization is “allowing—and helping with—the flow of information.”

The Conversation (12)
Jeff Hecht · 28 Dec, 2021

Jordan's observations about AI are very perceptive, and his comments on public perceptions of engineering are also important. Perhaps the public is not as concerned about "science" because it's intended to gain knowledge, but they worry about "engineering" because it means doing something that they may not want done.

William Adams · 26 Oct, 2021

Calling stuff AI is a marketing ploy that will never stop as long as it can fool people into believing what they are trying to sell.

Artificial Intelligence is Genuine Stupidity, we need IA - intelligence amplification to help humans.

And ML has no learning involved at all. Again it is all marketing hype to fool people into thinking it is better than it is. It should properly be called training.

The problem is that it can only do what it is trained to do including all the implicit errors built into the dirty data used as well as the missing data that was not used.

As Heinlein said: Training is for Seals.

ASSuming that correlation is causation ensures so called AI will always fail BIG and worse than the many very small 'successes' it may have which reinforce the false belief in AI.

Garrett Apple · 26 Oct, 2021

A half century ago, I was in (some) AI at Purdue. At that time, it was mostly about pattern recognition and adaptive systems. Today, it seems to still be about pattern recognition and adaptive systems.
