
2021's Top Stories About AI

Spoiler: A lot of them talked about what's wrong with machine learning today

[Illustration: part of an artificial neural network, spherical nodes connected by silvery lines. Credit: Science Source]

2021 was the year in which the wonders of artificial intelligence stopped being a story. Which is not to say that IEEE Spectrum didn't cover AI—we covered the heck out of it. But we all know that deep learning can do wondrous things and that it's being rapidly incorporated into many industries; that's yesterday's news. Many of this year's top articles grappled with the limits of deep learning (today's dominant strand of AI) and spotlighted researchers seeking new paths.

Here are the 10 most popular AI articles that Spectrum published in 2021, ranked by the amount of time people spent reading them. Several came from Spectrum's October 2021 special issue on AI, The Great AI Reckoning.

1. Deep Learning's Diminishing Returns: MIT's Neil Thompson and several of his collaborators captured the top spot with a thoughtful feature article about the computational and energy costs of training deep-learning systems. They analyzed the improvements of image classifiers and found that "to halve the error rate, you can expect to need more than 500 times the computational resources." They wrote: "Faced with skyrocketing costs, researchers will either have to come up with more efficient ways to solve these problems, or they will abandon working on these problems and progress will languish." Their article isn't a total downer, though. They ended with some promising ideas for the way forward.
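One back-of-the-envelope way to read that "500 times" figure is as a power law relating training compute to error rate. This sketch (my own illustration, not from Thompson's paper) works out the implied exponent, assuming compute scales as a power of the inverse error rate:

```python
import math

# Assume a power law: compute ∝ error ** (-alpha).
# "More than 500x the compute to halve the error" then implies
#   2 ** alpha = 500,  so  alpha = log2(500).
alpha = math.log2(500)
print(f"implied exponent: {alpha:.2f}")  # roughly 9

# Under that same assumed power law, halving the error twice
# (cutting it to a quarter) would cost 500 * 500 = 250,000x
# the original compute budget.
compute_factor = 500 ** 2
print(f"compute to quarter the error: {compute_factor:,}x")
```

The exponent near 9 is what makes the trend so alarming: each successive halving of the error rate multiplies the compute bill by another factor of 500, which is the "skyrocketing costs" the authors warn about.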

2. 15 Graphs You Need to See to Understand AI in 2021: Every year, The AI Index drops a massive load of data into the conversation about AI. In 2021, the Index's diligent curators presented a global perspective on academia and industry, taking care to highlight issues with diversity in the AI workforce and ethical challenges of AI applications. I, your humble AI editor, then curated that massive amount of curated data, boiling 222 pages of report down into 15 graphs covering jobs, investments, and more. You're welcome.

3. How DeepMind Is Reinventing the Robot: DeepMind, the London-based Alphabet subsidiary, has been behind some of the most impressive feats of AI in recent years, including breakthrough work on protein folding and the AlphaGo system that beat a grandmaster at the ancient game of Go. So when DeepMind's head of robotics Raia Hadsell says she's tackling the long-standing AI problem of catastrophic forgetting in an attempt to build multitalented and adaptable robots, people pay attention.

4. The Turbulent Past and Uncertain Future of Artificial Intelligence: This feature article served as the introduction to Spectrum's special report on AI, telling the story of the field from 1956 to present day while also cueing up the other articles in the special issue. If you want to understand how we got here, this is the article for you. It pays special attention to past feuds between the symbolists who bet on expert systems and the connectionists who invented neural networks, and looks forward to the possibilities of hybrid neuro-symbolic systems.

5. Andrew Ng X-Rays the AI Hype: This short article relayed an anecdote from a Zoom Q&A session with AI pioneer Andrew Ng, who was deeply involved in early AI efforts at Google Brain and Baidu and now leads a company called Landing AI. Ng spoke about an AI system developed at Stanford University that could spot pneumonia in chest X-rays, even outperforming radiologists. But there was a twist to the story.

6. OpenAI's GPT-3 Speaks! (Kindly Disregard Toxic Language): When the San Francisco–based AI lab OpenAI unveiled the language-generating system GPT-3 in 2020, the first reaction of the AI community was awe. GPT-3 could generate fluid and coherent text on any topic and in any style when given the smallest of prompts. But it has a dark side. Trained on text from the internet, it learned the human biases that are all too prevalent in certain portions of the online world, and therefore has an awful habit of unexpectedly spewing out toxic language. Your humble AI editor (again, that's me) got very interested in the companies that are rushing to integrate GPT-3 into their products, hoping to use it for such applications as customer support, online tutoring, mental health counseling, and more. I wanted to know: If you're going to employ an AI troll, how do you prevent it from insulting and alienating your customers?

7. Fast, Efficient Neural Networks Copy Dragonfly Brains: What do dragonfly brains have to do with missile defense? Ask Frances Chance of Sandia National Laboratories, who studies how dragonflies efficiently use their roughly 1 million neurons to hunt and capture aerial prey with extraordinary precision. Her work is an interesting contrast to research labs building neural networks of ever-increasing size and complexity (recall #1 on this list). She writes: "By harnessing the speed, simplicity, and efficiency of the dragonfly nervous system, we aim to design computers that perform these functions faster and at a fraction of the power that conventional systems consume."

8. Deep Learning Isn't Deep Enough Unless It Copies From the Brain: In a former life, Jeff Hawkins invented the PalmPilot and ushered in the smartphone era. These days, at the machine intelligence company Numenta, he's investigating the basis of intelligence in the human brain and hoping to usher in a new era of artificial general intelligence. This Q&A with Hawkins covers some of his most controversial ideas, including his conviction that superintelligent AI doesn't pose an existential threat to humanity and his contention that consciousness isn't really such a hard problem.

9. The Algorithms That Make Instacart Roll: It's always fun for Spectrum readers to get an insider's look at the tech companies that enable our lives. Engineers Sharath Rao and Lily Zhang of Instacart, the grocery shopping and delivery company, explain that the company's AI infrastructure has to predict the availability of "the products in nearly 40,000 grocery stores—billions of different data points," while also suggesting replacements, predicting how many shoppers will be available to work, and efficiently grouping orders and delivery routes.

10. 7 Revealing Ways AIs Fail: Everyone loves a list, right? After all, here we are together at item #10 on this list. Spectrum contributor Charles Choi pulled together this entertaining list of failures and explained what they reveal about the weaknesses of today's AI. The cartoons of robots getting themselves into trouble are a nice bonus.

So there you have it. Keep reading IEEE Spectrum to see what happens next. Will 2022 be the year in which researchers figure out solutions to some of the knotty problems we covered in the year that's now ending? Will they solve algorithmic bias, put an end to catastrophic forgetting, and find ways to improve performance without busting the planet's energy budget? Probably not all at once...but let's find out together.

The Conversation (1)
Mickey Cee · 07 Jan, 2022

The common weakness of AI as it stands today is that it requires commercial investment… and those investors want a positive return.

If we could accept Altruistic AI, or SI as I call it, we would have functioning self-aware intelligent systems within a decade or so.

Of course, these can be abused for commercial or political ends, and therein lies the problem.

Ethical engineering can’t be achieved until we have an ethical world to operate in.

We can only hope that comes sooner rather than later.

The First Million-Transistor Chip: the Engineers’ Story

Intel’s i860 RISC chip was a graphics powerhouse

[Photo: Intel's million-transistor chip development team—twenty people crowded into a cubicle, the man seated in the center holding a silicon wafer full of chips]

In San Francisco on Feb. 27, 1989, Intel Corp., Santa Clara, Calif., startled the world of high technology by presenting the first-ever 1-million-transistor microprocessor, which was also the company's first such chip to use a reduced instruction set.

The number of transistors alone marks a huge leap upward: Intel’s previous microprocessor, the 80386, has only 275,000 of them. But this long-deferred move into the booming market in reduced-instruction-set computing (RISC) was more of a shock, in part because it broke with Intel’s tradition of compatibility with earlier processors—and not least because after three well-guarded years in development the chip came as a complete surprise. Now designated the i860, it entered development in 1986 about the same time as the 80486, the yet-to-be-introduced successor to Intel’s highly regarded 80286 and 80386. The two chips have about the same area and use the same 1-micrometer CMOS technology then under development at the company’s systems production and manufacturing plant in Hillsboro, Ore. But with the i860, then code-named the N10, the company planned a revolution.
