Mobile World Congress 2018: You Can’t Teach an AI to Run a Telecom Network—Yet

At Mobile World Congress, telecom companies are only scratching the surface of machine learning


Illustration of AI for phone networks. Illustration: iStockphoto

In a stifling room at Mobile World Congress in Barcelona on Tuesday, Chris Reece discussed what artificial intelligence could do for the telecommunications industry. Reece, a technologist for Award Solutions, explained that AI, which telcos have already leveraged in some situations, could help solve some of the most complicated problems facing communications service providers (CSPs).

CSPs have been slow to adopt artificial intelligence, Reece explained, in part because the initial problems AI was developed to address didn’t really affect them. When he asked the crowd for examples of problems they’d heard of AI solving, one person suggested chess, and another mentioned image recognition. Reece agreed, saying, “I don’t know a lot of telco operators who really need a computer to tell the difference between a cat and a dog.”

“There’s a lot of opportunity to use AI in the telecom space, and we’re just starting to scratch the surface,” Reece added. He says eventually, AI could handle some of the industry’s biggest problems, like fraud management and network planning.

Reece emphasized that no one’s talking about a general AI, saying, “As much as I want Isaac Asimov’s positronic brain, we’re not there yet.” These AIs would be for narrow applications, training themselves to solve specific problems.

Through machine learning, an AI can train itself to get better at the problem it was designed for. Reece explained with a hypothetical AI tasked with distinguishing images of cats from images of dogs. It’s possible to write an algorithm that, in a rudimentary way, can tell them apart. But suppose you want a more capable AI? That’s the goal of machine learning.

In machine learning, the AI is provided with the outputs alongside the inputs. This hypothetical AI would be given images of cats and dogs as inputs—or, more accurately, the pixel-by-pixel information of each image—and told, as an output, whether each image is “cat” or “dog.” The AI can then work out what details cat images tend to have in common, and likewise for images of dogs.
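To make that input-output pairing concrete, here is a minimal sketch of supervised learning in Python. The “cat” and “dog” examples are invented three-number feature vectors, not real pixel data, and the perceptron here is a deliberately tiny stand-in for the far larger models real systems use.

```python
# Toy sketch of supervised learning as Reece describes it: inputs are
# paired with known outputs, and the model adjusts itself until its
# own outputs match. The feature vectors below are made up for
# illustration only.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights that separate two labeled classes."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):  # y is +1 ("dog") or -1 ("cat")
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # wrong answer -> nudge weights toward the label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y

    def classify(x):
        return "dog" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "cat"
    return classify

# Hypothetical training set: [ear_pointiness, snout_length, size]
cats = [[0.9, 0.2, 0.3], [0.8, 0.3, 0.2]]
dogs = [[0.3, 0.8, 0.7], [0.2, 0.9, 0.8]]
classify = train_perceptron(cats + dogs, [-1, -1, 1, 1])

print(classify([0.85, 0.25, 0.25]))  # resembles the cat examples -> "cat"
```

The point is the shape of the process, not the model: labeled inputs go in, the errors drive the adjustments, and the learned function generalizes to examples it was never shown.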

One real-life example: the U.S. Postal Service trained an AI to read handwritten ZIP codes [PDF] for more efficient mail sorting. The USPS collected 70,000 handwritten numbers, converted them to 28 x 28 pixel grayscale images, and fed them to an AI, one by one.

Essentially, Reece says, the USPS was telling the AI, “Here’s this image, it’s the number one, now you figure out why it’s a one.” Over time, the AI determined not only which portions of the image absolutely had to be shaded in for each number, but also which portions absolutely could not be. A zero, the AI learned, had to have its markings arrayed in a rough circle, but, just as important, it could not have any markings in the middle.
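That “must be shaded / must not be shaded” intuition can be sketched in a few lines. The 5 x 5 “digits” below are invented for illustration (real USPS images were 28 x 28 grayscale), and differencing class averages is a crude stand-in for what a trained network learns.

```python
# Toy illustration of per-pixel evidence: subtracting the "one"
# template from the "zero" template yields a weight map. Positive
# pixels are evidence FOR a zero; negative pixels are evidence
# AGAINST it. These 5x5 digits are made up, not USPS data.

ZERO = ["01110",
        "01010",
        "01010",
        "01010",
        "01110"]

ONE  = ["00100",
        "00100",
        "00100",
        "00100",
        "00100"]

def pixels(img):
    return [int(c) for row in img for c in row]

# Weight map: +1 where zeros have ink, -1 where ones have ink.
weights = [z - o for z, o in zip(pixels(ZERO), pixels(ONE))]

def looks_like_zero(img):
    score = sum(w * p for w, p in zip(weights, pixels(img)))
    return score > 0

# The centre pixel (index 12) counts *against* a zero, exactly the
# "must not have any markings in the middle" rule Reece describes.
print(weights[12])            # -1
print(looks_like_zero(ZERO))  # True
print(looks_like_zero(ONE))   # False
```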


Currently, the telecom industry is tackling what Reece calls the low-hanging-fruit problems, like automating customer billing. Even these are not simple, since they still require complex algorithms with countless parameters. Reece said many of the companies at MWC demonstrating machine learning AIs are focused on finding new ways to monetize subscriber data—things like analyzing CSPs’ data to find customers who might be in danger of switching to a new provider.
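A hypothetical sketch of that churn-prediction idea: score each subscriber from a few usage features and flag the ones most at risk of leaving. The subscriber names, features, weights, and threshold below are all made up; in a real machine learning system the weights would be learned from historical churn data rather than set by hand.

```python
# Made-up churn-risk scoring for illustration. More dropped calls and
# support tickets raise the risk score; more months left on contract
# lower it. A trained model would learn these weights from data.

subscribers = {
    "alice": {"dropped_calls": 1, "support_tickets": 0, "months_left": 10},
    "bob":   {"dropped_calls": 9, "support_tickets": 4, "months_left": 1},
}

WEIGHTS = {"dropped_calls": 0.5, "support_tickets": 1.0, "months_left": -0.4}
THRESHOLD = 2.0

def churn_risk(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

at_risk = [name for name, f in subscribers.items() if churn_risk(f) > THRESHOLD]
print(at_risk)  # bob: 9*0.5 + 4*1.0 - 1*0.4 = 8.1, well over the threshold
```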

There are a few companies exhibiting machine-learning AI at MWC. Peter-Service, a St. Petersburg-based company, has developed an AI capable of targeting different demographics of consumers in the real world using digital billboards. A basic example: if a large number of soccer fans gather in one location—say, at a stadium for a big game—nearby billboards display sports-related advertisements.

Arm, a device architecture company, is exhibiting Project Trillium, a mobile AI project. Arm has developed its own software library, Arm NN, for running trained AIs. It has also designed programmable processors for neural network workloads in mobile phones, which give devices machine learning capabilities without requiring an Internet connection for additional computational power. One demonstration showed a phone’s built-in camera “separating” an object of interest, like a person’s face, from the background and keeping it in focus in real time as it moved through a busy scene.

But Reece sees bigger things ahead, like tackling those aforementioned thorny problems of fraud management and network planning. For example, a company that could train an AI to accurately identify and autonomously manage instances of fraud would have a serious advantage. The same goes for a company that could train an AI to address router or database failures, or to efficiently plan its network deployments, particularly as the looming transition to 5G will require radical changes to network infrastructure.

No one at MWC is exhibiting an AI of that capability. But with DARPA already tackling complicated problems like spectrum management with machine learning, it’s only a matter of time before companies start to develop their own machine learning AIs. Perhaps at next year’s MWC, someone will be demonstrating one.
