New Haptic Tech Promises Feeling and Texture for Touchscreens and Gestures

Whether it's on touchscreen devices or in the air itself, the next generation of computer interaction will be all about touch

Disney Research

Over the past few years, many of our gadgets have abandoned physical buttons in favor of touchscreens, where controls can be customized for the task at hand. In exchange, we've had to give up the tactile feedback that physical controls provide. Now, engineers working on new haptic systems are trying to give us the best of both worlds.

Most devices today are still in the stone age of haptics: they can vibrate in different patterns to signal different things to the user, but that's about it. While effective, this basic feedback is one-dimensional, in that the entire phone vibrates instead of just the key that you're pressing. The next generation of haptics promises to make the tactile experience much more nuanced and useful, both on our devices and in the air above them.

Earlier this week, Disney Research presented a new algorithm that can translate the 3D information in an image or video directly into tactile sensations on a special haptic display. The display itself stays perfectly smooth (unlike, say, a Tactus keyboard); instead, it modulates the friction at your fingertips to trick you into feeling like there's texture under them.
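To get a feel for the kind of mapping such an algorithm performs, here is a minimal sketch (our own illustration, not Disney's published method): a virtual bump is rendered on a flat panel by modulating the lateral friction force in proportion to the local slope of the virtual surface, which approximates what a finger feels crossing a real bump. The depth profile and normal force below are made-up values.

```python
# Sketch: render a 1D virtual bump as lateral friction forces.
# Not Disney's actual algorithm; the numbers are illustrative only.

def slope_to_friction(depth_profile, dx, normal_force=0.5):
    """Map a 1D depth profile (meters, sampled every dx meters) to the
    lateral force (newtons) to render at each sample point."""
    forces = []
    for i in range(1, len(depth_profile)):
        slope = (depth_profile[i] - depth_profile[i - 1]) / dx
        # Resist motion going "uphill" over the bump; ease it downhill.
        forces.append(normal_force * slope)
    return forces

# A small bump: surface height rising, then falling, under the finger.
bump = [0.0, 0.001, 0.002, 0.001, 0.0]
forces = slope_to_friction(bump, dx=0.001)
# Uphill samples get a positive (resisting) force, downhill negative.
```

In a real system these forces would be produced indirectly, by raising or lowering the panel's friction as the finger slides, rather than by pushing the finger sideways.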

The display creates this illusion of texture using another Disney Research technology from several years ago called TeslaTouch, which uses oscillating electric charges to dynamically adjust the friction between your finger and the touch panel.
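The electrovibration effect behind TeslaTouch can be sketched with a simple parallel-plate model: the oscillating voltage induces an electrostatic attraction between panel and fingertip, which adds to the finger's normal force and therefore raises sliding friction. All constants here are illustrative assumptions, not values from Disney's hardware.

```python
# Sketch of the electrovibration principle: higher drive voltage ->
# stronger electrostatic attraction -> higher effective friction.
# Constants are assumed for illustration, not taken from TeslaTouch.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_force(voltage, area=1e-4, gap=1e-5, eps_r=3.0):
    """Parallel-plate estimate of the attraction (N) between fingertip
    (contact area `area`, m^2) and panel, across an insulating layer of
    thickness `gap` (m) with relative permittivity `eps_r`."""
    return eps_r * EPS0 * area * voltage**2 / (2 * gap**2)

def friction_force(normal_force, voltage, mu=0.5):
    """Lateral friction felt by the sliding finger: Coulomb friction on
    the finger's own pressure plus the electrostatic attraction."""
    return mu * (normal_force + electrostatic_force(voltage))

base = friction_force(0.5, 0)      # no drive voltage
boosted = friction_force(0.5, 120) # with a 120 V drive signal
```

Because the electrostatic term scales with the square of the voltage, shaping the drive waveform lets the panel modulate friction quickly and smoothly as the finger moves.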

(Simulating friction by changing voltage seems a bit more practical than Microsoft's approach of mounting the entire display on a robotic arm.)

The combination of the TeslaTouch hardware and this new algorithm "will make it possible to render rich tactile information over visual content," leading to "new applications for tactile displays," according to Ivan Poupyrev, director of Disney Research Pittsburgh's Interaction Group. That's exciting, although we're still waiting for tactile displays with any applications at all to become part of our standard tech arsenal.

But what about haptic feedback that goes beyond a screen? Also announced this week was a new type of haptic system that does away with tactile displays entirely and brings touch interaction into the air. It's called UltraHaptics, from the University of Bristol's Interaction and Graphics research group, and it's like nothing you've ever felt before.

To make this work, a transducer array projects carefully calculated waves of ultrasonic sound into the air, which you can't see, hear, or feel. At certain points, however, the waves come into focus and intensify substantially, displacing the air at those points and creating a pressure difference that you can feel. The system can create multiple pressure points in different locations at the same time, and can even endow individual points with distinct tactile properties. A similar technique has been used by AIST to create true 3D displays, using focused lasers to plasmify the air itself.
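The focusing step can be sketched in a few lines: each transducer is driven with a phase delay that compensates for its travel time to the focal point, so all the waves arrive there in phase and reinforce one another. The array geometry and the 40 kHz drive frequency below are our own assumptions (40 kHz is a common choice for airborne ultrasound), not parameters from the Bristol system.

```python
import math

# Sketch of phased-array focusing: pick drive phases so every
# transducer's emission arrives at the focal point in phase.
# Geometry and frequency are illustrative assumptions.

SPEED_OF_SOUND = 343.0  # m/s in air
FREQ = 40_000.0         # Hz, typical for airborne ultrasound

def focus_phases(transducers, focus):
    """Return the drive phase (radians) for each transducer position
    (x, y, z) so that all emissions arrive at `focus` in phase."""
    wavelength = SPEED_OF_SOUND / FREQ
    phases = []
    for pos in transducers:
        dist = math.dist(pos, focus)
        # Advance the drive phase by the travel phase, modulo one cycle.
        phases.append((2 * math.pi * dist / wavelength) % (2 * math.pi))
    return phases

# A 4x4 grid of transducers 1 cm apart, focusing 20 cm above its center.
grid = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
phases = focus_phases(grid, (0.015, 0.015, 0.20))
```

Creating multiple pressure points at once is the same idea with more bookkeeping: the drive signals are superpositions chosen so the waves reinforce at several focal points simultaneously.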

UltraHaptics is particularly relevant right now, considering how many mid-air gesture-based interaction technologies are becoming available to consumers. The most obvious one might be Microsoft's Kinect sensor, but we've also got things like Edge3 and Leap Motion showing up in peripherals and laptops from HP and Asus, among others. And soon, all you'll need is a webcam. Apple is even rumored to be working on 3D gesture control for iPads, and the approach makes a lot of sense for cellphones as well, where touchscreen real estate is limited.

If the UltraHaptics system can shrink its transducer array into a form factor that fits on a desk or (in our fantasy world) inside a phone, it could lead to all sorts of fantastic new applications: we're picturing a full-size, visible, tactile laser-plasma keyboard that projects itself out of your phone into mid-air whenever you need it. It sounds crazy, but all of the technology exists right now; it just needs to get small enough (and cheap enough) to make it into our hands.


Deep Learning Could Bring the Concert Experience Home

The century-old quest for truly realistic sound production is finally paying off

Stuart Bradford

Now that recorded sound has become ubiquitous, we hardly think about it. From our smartphones, smart speakers, TVs, radios, disc players, and car sound systems, it’s an enduring and enjoyable presence in our lives. In 2017, a survey by the polling firm Nielsen suggested that some 90 percent of the U.S. population listens to music regularly and that, on average, they do so 32 hours per week.

Behind this free-flowing pleasure are enormous industries applying technology to the long-standing goal of reproducing sound with the greatest possible realism. From Edison’s phonograph and the horn speakers of the 1880s, successive generations of engineers in pursuit of this ideal invented and exploited countless technologies: triode vacuum tubes, dynamic loudspeakers, magnetic phonograph cartridges, solid-state amplifier circuits in scores of different topologies, electrostatic speakers, optical discs, stereo, and surround sound. And over the past five decades, digital technologies, like audio compression and streaming, have transformed the music industry.
