Is totally unobtrusive control of my devices really too much to ask? Apparently, it is, because every device that I own insists that I either poke it, yammer at it, or wave it around to get it to do even the simplest of things. This is mildly annoying when I’m doing just about anything that isn’t lying on the couch, and majorly annoying when I’m doing some specific tasks like washing dishes or riding my bike.
Personally, I think that the ideal control system would be something that I can use when my hands are full, or when there’s a lot of ambient noise, or when I simply want to be unobtrusive about telling my phone what I want it to do, whether that’s muting an incoming call during dinner or in any number of other situations that might be far more serious. Options at this point are limited. Toe control? Tongue control? Let’s go even simpler, and make teeth control of devices a thing.
When we talk about controlling stuff with our teeth, the specific method does not involve replacing teeth with buttons or tiny little joysticks or anything, however cool that might be. Rather, you can think of your teeth as a system for generating gestures that produce distinctive noises at the same time. All you have to do is gently bite in a specific area, and you’ve produced a repeatable sound and motion that can be detected by a combination of microphones and IMUs:
In this work by the Smart Computer Interfaces for Future Interactions (SciFi) Lab at Cornell, researchers developed a prototype of a wearable system called TeethTap that was able to detect and distinguish 13 different teeth-tapping gestures with a real-time classification accuracy of over 90% in a controlled environment. The system uses IMUs just behind the bottom of the ear where the jawline begins, along with contact microphones up against the temporal bone behind the ear. Obviously, the prototype is not what anybody wants to be wearing, but that’s because it’s just a proof of concept, and the general idea is that the electronics ought to be small enough to integrate into a set of headphones, an earpiece, or possibly even the frame of a pair of glasses.
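To get a feel for the basic idea, here’s a minimal sketch of how a classifier might distinguish tap gestures by fusing IMU motion features with contact-microphone audio features. The feature choices, the nearest-centroid classifier, and the gesture names are all illustrative assumptions on my part, not the SciFi Lab’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(imu_window, mic_window):
    """Summarize one tap event: motion energy and peak from the IMU,
    plus loudness and a crude spectral centroid from the microphone."""
    imu_energy = float(np.mean(imu_window ** 2))
    imu_peak = float(np.max(np.abs(imu_window)))
    mic_rms = float(np.sqrt(np.mean(mic_window ** 2)))
    spectrum = np.abs(np.fft.rfft(mic_window))
    freqs = np.fft.rfftfreq(len(mic_window))
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    return np.array([imu_energy, imu_peak, mic_rms, centroid])

class NearestCentroid:
    """Tiny stand-in classifier: label a tap by its closest class mean."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        y = np.array(y)
        self.centroids_ = np.array(
            [X[y == label].mean(axis=0) for label in self.labels_])
        return self

    def predict(self, x):
        dists = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Synthetic training data: two hypothetical gesture classes whose taps
# differ in overall signal strength (real gestures would differ in
# richer ways, but the fitting/prediction mechanics are the same).
def synth_event(scale):
    imu = rng.normal(0, scale, 128)
    mic = rng.normal(0, scale * 2, 256)
    return extract_features(imu, mic)

X = np.array([synth_event(0.5) for _ in range(20)] +
             [synth_event(2.0) for _ in range(20)])
y = ["left-molar"] * 20 + ["right-molar"] * 20

clf = NearestCentroid().fit(X, y)
print(clf.predict(synth_event(2.0)))  # classifies a new "strong" tap
```

A real system would of course work on streaming sensor windows and many more gesture classes, but the core loop is the same: segment an event, extract features from both sensor channels, and classify.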
Photos: Cornell SciFi Lab
During extended testing, TeethTap managed to work (more or less) while study participants were in the middle of talking with one of the researchers, writing on paper while talking, walking or running around the lab, and even while they were eating or drinking, which is pretty remarkable. The system is tuned so that you’re much more likely to get a false negative than a false positive, and the researchers are already working on optimization strategies to improve accuracy, especially if you’re using the system while moving.
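The false-negative bias comes down to where you set the detection threshold. Here’s a toy sketch of that tradeoff; the threshold value and confidence scores are made up for illustration, not TeethTap’s actual parameters:

```python
def detect_tap(confidence, threshold=0.9):
    """Fire a gesture only when the classifier is very confident.

    A high threshold means a genuine tap is occasionally missed (a
    false negative), but everyday jaw activity like chewing or talking
    rarely triggers a phantom command (a false positive). For a device
    controller, a missed tap you can simply repeat is far less annoying
    than a call muted by accident.
    """
    return confidence >= threshold

print(detect_tap(0.7))   # marginal event (e.g., a tap while running): rejected
print(detect_tap(0.95))  # clean, confident tap: accepted
```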
While it’s tempting to look at early-stage work like this and not take it seriously, I honestly hope that something comes of TeethTap. If it could be as well integrated as the researchers are suggesting that it could be, I would be an enthusiastic early adopter.
TeethTap will be presented next week at CHI 2021, and you can read the paper on arXiv.
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.