Machine Learning for Easier Dieting

People could stick to diets better if they could just tell an app what they ate using natural language. We’re getting there

3 min read
Illustration: Jose-Luis Olivares/MIT

I had a half-cup of oatmeal, with two tablespoons of maple syrup and a cup of coffee. Oh, I put a handful of blueberries in the oatmeal, and there was milk in the coffee. It was skim milk.

Ask someone what they had for breakfast, and this is the kind of description you might get. And that’s one of the reasons keeping track of food intake is such a problem for tech that’s meant to help a person lose weight or stick to a diet for other reasons.

Logging food for nutrition and calories is important to sticking to a diet, according to Susan Roberts, director of the Boston-based Energy Metabolism Lab at Tufts University. “It makes people more self-aware about the junk they are eating and how little they actually enjoy it, and the shock of huge portions, et cetera. But currently, it is really tedious to log your food,” she said in a press release. “A spoken-language system that you can use with your phone would allow people to log food wherever they are eating it, with less work.”

Roberts approached the Spoken Language Systems Group at MIT for a fix, and a team of engineers presented their prototype solution last week at the IEEE International Conference on Acoustics, Speech, and Signal Processing, in Shanghai. The system can take a person’s natural speech about a meal, understand it, match it to items in a U.S. Department of Agriculture database, and retrieve the right nutrition information.

There were two main challenges, according to James Glass, the MIT senior research scientist whose student, Mandy Korpusik, presented the results in Shanghai. The first is to understand what a person is saying. That means a machine needs to know that in “bowl of oatmeal,” bowl is a quantity referring to the food oatmeal. But in “oatmeal cookie,” oatmeal is describing the food, which is the cookie.

Glass’s team used a type of machine learning called conditional random fields. It’s a form of pattern recognition that’s particularly well suited to sequential data, such as genes and spoken language, because it takes the context of each sample into account.
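To make the context idea concrete, here is a minimal sketch of the kind of token-level features a CRF-style tagger consumes. This is not the MIT group's code, and the tag names (QUANTITY, FOOD, DESCRIPTION) are hypothetical; the point is only that the same word can demand different labels depending on its neighbors.

```python
# Toy illustration of sequence labeling for meal descriptions.
# A CRF assigns one tag per token (e.g., QUANTITY, FOOD, DESCRIPTION),
# and its features look at neighboring words, not just the word itself.

def features(tokens, i):
    """Context features a CRF-style tagger might use for token i."""
    return {
        "word": tokens[i].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# The same word, "oatmeal", plays two different roles:
print(features("bowl of oatmeal".split(), 2))
# follows "of" after a container word, so a trained model would lean FOOD

print(features("oatmeal cookie".split(), 0))
# precedes another food word, so the model would lean DESCRIPTION
```

Training an actual CRF on such features (for instance with a library like CRFsuite) requires labeled sequences, which is exactly why the team collected annotated meal descriptions from Mechanical Turk workers.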

But the system still has to be taught what kinds of things to look for. And that meant human involvement—specifically a bunch of workers they found through Amazon’s Mechanical Turk. The workers described their meals and then labeled the parts of the description.

This data was used to train the system to understand meal-logging speech. Glass’s group then went back to a few of the Mechanical Turk workers for the task’s second big challenge: matching the meal labels to the USDA nutrition database.

It’s harder than it sounds. “Something like oatmeal might not even be in it,” says Glass, but there’s an entry for oats. So the system had to be trained to match labels from spoken words to where they best fit in the database’s language.
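A crude stand-in shows why naive matching fails. The snippet below uses Python's standard-library difflib for string similarity against a few invented USDA-style entry names (the real database and the team's trained matcher are not shown here); surface similarity picks the wrong item, which is why the system had to be trained on human-labeled matches instead.

```python
import difflib

# Hypothetical USDA-style entry names (illustrative only).
entries = ["Oats, raw", "Cookies, oatmeal", "Milk, nonfat (skim)", "Syrups, maple"]

# "oatmeal" has no exact entry in this list...
print("oatmeal" in [e.lower() for e in entries])  # False

# ...and plain string similarity retrieves the closest *spelling*,
# not the closest *food*: it picks the cookie entry, not "Oats, raw".
print(difflib.get_close_matches("oatmeal", entries, n=1))
# -> ['Cookies, oatmeal'], the wrong item for a bowl of oatmeal
```

Closing that gap between how people name foods and how the database names them is the matching problem the Mechanical Turk annotations were used to solve.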

If it all seems very labor intensive, that’s because it was. But that’s the current state of the art. “Speech recognition doesn’t work the way humans work,” says Glass. “That’s the direction it needs to go into, but it’s not there yet.”

Glass stresses that the system is just a research prototype that would need improvement and real-world testing before it could be turned into a usable app.

Logging by voice might not be the only way to go in the future. Engineers at State University of New York at Buffalo and at Northeastern University in China are working on a gadget that tells what you’re eating by the sounds it picks up from your neck when you bite, chew, and swallow.
