Your Tablet’s Light Sensor Can Spy On You

A tricky hack demonstrates that even seemingly innocuous components can pose risks

The researchers tested the ability to read different hand gestures with the light sensor by placing a mannequin head and hand in front of a Samsung Galaxy View2 tablet.

Yang Liu, MIT

Most people are probably aware of the privacy risks posed by the front-facing camera on their mobile device or laptop, and so can take steps to reduce those risks. But now research shows a determined hacker can also capture images of a user in an unexpected way, via the ambient light sensor that helps automatically adjust screen brightness on many devices.

It has become standard practice for apps and websites to request permission when they want to access data from cameras, microphones, or geolocation systems. Yet devices like smartphones, tablets, laptops, and even smart TVs are packed with other sensors whose potential security risks are often overlooked, says Yang Liu, a Ph.D. student at MIT.

Among these, ambient light sensors are of particular concern, he says. These components measure the illumination in the surrounding environment, and operating systems use their readings to adjust screen brightness or toggle dark mode. That may sound like meager information, but researchers have shown that the light-intensity changes these sensors register can be used to infer what kind of TV programs someone is watching, what websites they are browsing, or even keypad entries on a touchscreen.

Now Liu and colleagues have shown in a paper in Science Advances that by cross-referencing data from the ambient light sensor on a tablet with specially tailored videos displayed on the tablet’s screen, it’s possible to generate images of a user’s hands as they interact with the tablet. While the images are low resolution and currently take an impractical amount of time to capture, this kind of approach could allow a determined attacker to infer how someone is using the touchscreen on their device, Liu says.

“People are aware of selfie cameras on laptops and tablets and sometimes use physical blockers to cover them,” says Liu. “But for the ambient light sensor, people don’t even know that an app is using that data at all. And this sensor is always on.”

Capturing images with a light sensor is no easy feat, though, because the sensor is effectively a single pixel, registering only the overall light intensity at any given moment. Liu says attackers get around this by essentially trading temporal resolution for spatial resolution: they display a rapid sequence of patterns on the tablet’s screen that illuminate different portions of the scene, producing a different light-intensity reading at the sensor for each pattern. A reconstruction algorithm then matches this sequence of readings against the known illumination patterns to piece together an image.
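To make that trade concrete, here is a minimal single-pixel-imaging sketch in Python. It is not the researchers’ code: the random-pattern design, the noise model, and every name in it are illustrative assumptions. A matrix of known patterns maps the unknown scene to a sequence of scalar readings, and a least-squares solve inverts that map.

```python
import numpy as np

# Minimal single-pixel imaging sketch (illustrative; not the paper's method).
# The unknown scene is an N-pixel image x. Each displayed pattern p_k yields
# one scalar sensor reading y_k ~ p_k . x, so y = P @ x for pattern matrix P.

rng = np.random.default_rng(0)
n_side = 32                    # reconstruct a 32x32 image, as in the paper
n_pixels = n_side * n_side

# Ground-truth "scene" for the simulation: a bright rectangle as a stand-in hand.
scene = np.zeros((n_side, n_side))
scene[10:22, 12:20] = 1.0
x_true = scene.ravel()

# One random binary pattern per reading (an assumption; the real attack uses
# carefully designed patterns woven into what the screen is showing).
n_patterns = 2 * n_pixels      # oversample to tame sensor noise
P = rng.integers(0, 2, size=(n_patterns, n_pixels)).astype(float)

# Simulated sensor log: one light-intensity value per displayed pattern.
y = P @ x_true + rng.normal(0.0, 0.5, size=n_patterns)

# Least-squares inversion recovers the scene from the scalar readings.
x_hat, *_ = np.linalg.lstsq(P, y, rcond=None)
image = x_hat.reshape(n_side, n_side)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The cost of the trade shows up in the pattern count: every pattern consumes one sensor reading, so finer images demand proportionally longer capture times.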

Because the setup has no lens, it cannot focus on anything even a short distance away. And little of interest happens right in front of the light sensor itself; the key challenge for the researchers was imaging the region directly above the display so they could track a user’s hand movements. To do that, the team used an approach called “dual photography,” which relies on some counterintuitive physics and clever computation to turn the tablet’s display into a camera.

Underpinning the technique is a physical principle known as Helmholtz reciprocity. Imagine a ray of light traveling through an optical system, undergoing a series of reflections, refractions, and absorptions. The principle states that if you send a fresh ray of light backward along the same path, it undergoes exactly the same transformations. In conventional photography, light travels from a source of illumination to a camera. Dual photography exploits Helmholtz reciprocity to computationally reverse the flow of light, essentially swapping the camera and the light source, which makes it possible to take a photograph from the perspective of the light source rather than the camera.
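In the matrix notation common in the dual-photography literature (a sketch of the standard formulation, not necessarily the exact one in the new paper), a displayed pattern p produces sensor readings c through the scene’s light-transport matrix T, and reciprocity says the reversed light paths are governed by the transpose of that same matrix:

```latex
\begin{aligned}
  c  &= T\,p          && \text{(primal: the screen illuminates, the sensor records)} \\
  c' &= T^{\top}\,p'  && \text{(dual: the roles of light source and sensor are swapped)}
\end{aligned}
```

Recovering an image from the light source’s point of view thus amounts to estimating enough of T to apply its transpose.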

In the scenario tested by Liu and his colleagues, light from the display is partially blocked by the user’s hand and reflects off their face. It is then picked up by the light sensor. From the dual photography perspective, though, the light can be imagined as traveling in the opposite direction from the sensor, with the hand casting a shadow on the display.

The team designed an inversion algorithm able to convert the readings from the light sensor into a 32-by-32-pixel image that captured the region just above the display. To test the approach, they took an off-the-shelf Samsung Galaxy View2 tablet with a 17.3-inch screen and placed a mannequin head and hand in front of it to simulate a person. They demonstrated that they were able to capture images of a variety of touch gestures, such as two-finger scrolling and three-finger pinches. They also showed that they could capture a rough image of the user’s hand using a modified video of the cartoon characters Tom and Jerry, suggesting that illumination patterns could be concealed in videos.
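As a rough illustration of how a probing pattern might be hidden in ordinary-looking video (the paper’s actual embedding scheme is not described here, so this sketch and all its parameters are assumptions), a coarse plus-or-minus pattern can be added to each frame at an amplitude low enough to be unobtrusive while still shifting the total light the sensor integrates:

```python
import numpy as np

def embed_pattern(frame: np.ndarray, pattern: np.ndarray,
                  amplitude: float = 8.0) -> np.ndarray:
    """Add a small +/- brightness offset per pattern cell to one video frame."""
    h, w = frame.shape[:2]
    # Upsample the coarse +/-1 pattern (e.g., 32x32) to the frame's resolution.
    cell = np.kron(pattern, np.ones((h // pattern.shape[0],
                                     w // pattern.shape[1])))
    modulated = frame.astype(float) + amplitude * cell[..., None]
    return np.clip(modulated, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)  # stand-in frame
pattern = rng.choice([-1.0, 1.0], size=(32, 32))
stego_frame = embed_pattern(frame, pattern)
```

Cycling through a different pattern on each frame would then play the role of the pattern matrix in the reconstruction.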

The main limitation of the approach, Liu admits, is the time it takes to capture images. Because the researchers were relying on a single-pixel sensor, they had to cycle through a very large number of illumination patterns to build up a clear picture. The light sensor’s slow readout meant that the fastest they could capture an image of a hand gesture was 3.3 minutes, while the experiment with the modified Tom and Jerry video took 68 minutes.
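A back-of-envelope calculation, assuming one sensor reading per displayed pattern and one pattern per reconstructed pixel (both simplifications; the actual protocol may differ), shows why the sensor’s readout rate dominates:

```python
# Back-of-envelope check of the reported timing, under two simplifying
# assumptions: one sensor reading per displayed pattern, and one pattern
# per pixel of the reconstructed 32x32 image.
n_patterns = 32 * 32            # patterns needed for one 32x32 image
gesture_time_s = 3.3 * 60       # fastest reported gesture capture, in seconds

implied_rate = n_patterns / gesture_time_s
print(f"implied sensor readout: ~{implied_rate:.1f} readings per second")
# ~5 readings per second: orders of magnitude slower than a camera sensor,
# which is why a single capture takes minutes rather than milliseconds.
```

At roughly 5 readings per second, even a 32-by-32 image keeps the victim’s hand in frame for minutes, consistent with the times reported above.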

“The acquisition time in minutes is too cumbersome to launch simple and general privacy attacks on a mass scale,” says Lukasz Olejnik, an independent security researcher and consultant who has previously highlighted the security risks posed by ambient light sensors. “However, I would not rule out the significance of targeted collections for tailored operations against chosen targets.”

But he also points out that, following his earlier research, the World Wide Web Consortium issued a new standard limiting access to the light-sensor API, and browser vendors have already adopted it.

Liu notes, however, that there are still no blanket restrictions for Android apps. In addition, the researchers discovered that some devices directly log data from the light sensor in a system file that is easily accessible, bypassing the need to go through an API. The team also found that lowering the resolution of the images could bring the acquisition times within practical limits while still maintaining enough detail for basic recognition tasks.

Nonetheless, Liu agrees that the approach is too complicated for widespread attacks. And one saving grace is that it is unlikely to ever work on a smartphone, as the displays are simply too small. But Liu says their results demonstrate how seemingly harmless combinations of components in mobile devices can lead to surprising security risks.
