Bringing Eyes to the Internet of Things

Seeing isn’t just about taking pictures. The real revolution will come when our digital devices understand what’s in front of their eyes

2 min read
An abstract image of computer vision on the Internet of Things
Illustration: iStockphoto

The ability to create powerful images is in the hands of everybody on the planet. That was the word from Jem Davies, VP of technology for ARM’s imaging and vision group, to an audience of 4,500 engineers and executives who work with embedded technologies. They had gathered last week at ARM TechCon in Santa Clara, Calif.

“The technology we have helped create has changed the behavior of the people of the world,” Davies said.

But, he believes, the true revolution—when all the digital devices that surround us can understand what they see—is still to come.

“We can capture and display great images,” Davies said. “The next leap in computing will be in how we interpret images. That will be revolutionary.”

Bringing image understanding into digital devices will solve one of the huge problems engineers have been wrestling with for some time now: the data deluge—that is, how to transmit, store, and analyze all the photos and videos being recorded by people and things. (These days, some 60 hours of video are uploaded to YouTube every minute.)

“Humans can extract information from pictures quite easily,” Davies pointed out. “It’s not so easy for computers.” But if devices can interpret images and extract meaning from them automatically, they won’t have to send everything they capture to the cloud, and the data deluge will not be such a concern.

“The ultimate applications are going to be huge,” Davies said. “Pokémon Go was really simple, but imagine that done properly. Consider an AR system with virtual presence around a conference table, or security and surveillance with video analytics. That’s not all Big Brother—it can find spaces in parking lots, detect overcrowding in the Metro, detect people falling on the floor without streaming a video of grandma on the Internet.”
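The idea behind that last example can be sketched in a few lines. This is a hedged illustration, not any actual ARM vision pipeline: the "camera" is a list of synthetic frames and the "detector" is simple frame differencing, but it shows the principle of interpreting images on the device so that only a tiny event record, rather than raw video, ever leaves it.

```python
# Sketch: on-device interpretation, so only compact events leave the device.
# The frames and the detector here are illustrative stand-ins.

def frame_delta(prev, curr):
    """Mean absolute pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def events_only(frames, threshold=10.0):
    """Return compact event records instead of streaming raw frames."""
    events = []
    prev = frames[0]
    for i, curr in enumerate(frames[1:], start=1):
        if frame_delta(prev, curr) > threshold:
            # A few bytes of metadata, not megabytes of video.
            events.append({"frame": i, "event": "motion"})
        prev = curr
    return events

# Three tiny 4-pixel "frames"; only the large change produces an event.
frames = [[0, 0, 0, 0], [0, 0, 0, 1], [200, 200, 200, 200]]
print(events_only(frames))  # → [{'frame': 2, 'event': 'motion'}]
```

A real deployment would replace the differencing with a learned model, but the privacy property is the same: grandma's video stays on the device.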

The economics of using cameras as the sensor of choice for digital devices are overwhelming, Davies pointed out, given that some four to five billion digital cameras are already being sold each year as part of mobile phones and other systems.

“Capturing and displaying and interpreting images,” Davies told the developers at ARM TechCon, “will be at the heart of the devices you build, whether they are personal computing devices or intelligent autonomous machines.”


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

11 min read
A plate of spaghetti made from code
Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed: time-to-market pressures all but guaranteed that their software would contain more bugs than it otherwise would.
