Cameras that scan and render objects in 3D are now a standard feature in many smartphones, drones, robots, and automobiles. Paired with the right software, these cameras are making it possible to sense light levels, movements, and textures in more places, and at a lower cost, than was previously possible.
ams (located on the former grounds of an Austrian castle) produces the tiny lasers and low-power light sensors that many of these camera systems rely on to identify hand gestures or track eye movements in an instant. The company’s technology must produce accurate results for a wide variety of consumer and industrial devices that operate in very different environments.
ams has a team of 1,200 engineers, and as demand has grown, the company has focused its R&D resources and budget on designing components for three types of 3D sensing: structured light, time-of-flight, and active stereo vision.
Structured light cameras project patterns of light onto objects and analyze distortions in the reflection. The reflected light patterns can reveal the proximity, motion, or contours of a subject. Time-of-flight sensors measure the time it takes for light to travel to and from an object as a way to discern its distance. And active stereo vision combines two or more types of near-infrared cameras to triangulate the location and movements of an object.
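The time-of-flight principle described above reduces to simple arithmetic: distance is half the round-trip travel time multiplied by the speed of light. A minimal sketch (illustrative only, not ams code or a real sensor interface):

```python
# Basic time-of-flight distance calculation: light travels to the
# object and back, so the one-way distance is half the path length.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to an object given the round-trip travel time of a light pulse."""
    return C * round_trip_s / 2.0

# A pulse that returns after 4 nanoseconds corresponds to roughly 0.6 m.
print(round(tof_distance_m(4e-9), 3))  # -> 0.6
```

The nanosecond scale of these round trips is why time-of-flight sensors need very fast, precisely synchronized light sources and detectors.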
To capture these measurements, 3D cameras often require their own light source, typically a tiny laser embedded in the device near the camera. Many manufacturers use infrared light, which is invisible to the human eye, for this purpose.
ams makes infrared vertical-cavity surface-emitting lasers (VCSELs) that provide light for 3D cameras. The company’s lasers can produce a steady flood of light or special patterns to suit the specific needs of camera makers. One new ams dot-projection module, called PARSENN, can shine 30,000 random dots of infrared light to illuminate a subject. And the company’s newly introduced MERANO-NIR illuminator produces a flood of infrared light that can help a time-of-flight sensor create a three-dimensional image of a person or a scene.
Markus Rossi, vice president for innovations in optical sensors at ams, recently spoke with IEEE Spectrum about what’s next for the fast-moving world of 3D sensing—and how it could improve the photos you take with your smartphone.
IEEE Spectrum: We’ve seen many applications of 3D sensing to date—do you have any predictions for where this technology is headed next?
Markus Rossi: Facial recognition is today the primary application of 3D sensing, but it’s certainly not the only one. In the future, we see 3D sensing going much further, across basically all of our focus areas.
Right now, we’re most advanced on the front side. For example, 3D sensing is being used to enhance smartphone photography (helping to add lighting filters and effects such as the bokeh effect) and to create avatars and animojis using someone’s own facial features.
But 3D sensing is also becoming important in the automotive sector, both outside the vehicle and inside it. Outside, we’re working on LiDAR, a solid-state 3D laser-scanning system that lets self-driving cars detect objects around the vehicle and map their distances.
Inside, we’re working on ways to use 3D sensing to monitor drivers, and to coordinate hand-offs between a driver and their vehicle. This includes developing in-cabin systems that can recognize a driver, determine an occupant’s posture, and detect objects around the car. So, if you’re running a car-sharing program, or you’re in charge of fleet vehicles, for example, this could allow you to clearly identify who’s been behind the wheel.
We also see a growing demand for 3D technology in other end applications such as industrial robotics as well as home and building automation.
Spectrum: What will these advances in 3D sensing mean for smartphone photography?
Rossi: Adding more 3D information to images can allow you to take much better photos, or add lighting effects to photos you’ve taken. As you’re taking a picture, you can also change the focal depth and the angles of different light sources. With 3D sensing, you can even place a new artificial light source in a photo to create shadows on your face.
Thanks to 3D sensing, it’s already possible to change the photos that you take on front-facing (selfie) cameras. This will also be possible on the world-facing side soon.
These capabilities allow you to take photos that are much closer to the quality of a professional camera. That’s one of the strong use cases for 3D in smartphones, both for photos and videos.
Spectrum: Several 3D sensing technologies rely on detecting reflected light—how much light does this typically require? Under what conditions do these systems work?
Rossi: In general, detecting reflected light (from dots or flood illumination) at close distance (say, 40 to 60 cm) is not a major challenge for current systems. And these systems are defined to work in a number of different scenarios. The most critical one is outdoors—the intensity of the reflected dots needs to be strong enough that it can be detected when the light sensor is also being flooded with sunlight. This can be achieved with high-sensitivity, near-infrared sensors, optimized bandpass filters, and various other system-level optimizations.
Spectrum: At ams, you provide both the hardware and software required for 3D sensing in mobile phones and other devices. What was the biggest hurdle to overcome to make these systems work together?
Rossi: ams provides some key parts of the hardware, but not all of them. Since 3D sensing systems are very complex, all components and sub-systems need to be part of the system design. Therefore, close collaborations with partners like Qualcomm and Bellus3D (our stereo vision software partner) are crucial for our success.
Spectrum: Do 3D sensing systems require any kind of calibration or testing once they are chosen for a specific product? If so, what does that process look like?
Rossi: Calibration is very important for all three technologies—both factory calibration as well as real-time calibration. ams has developed algorithms and methods for both types.
For example, in order to do triangulation through stereo vision, the cameras need to be positioned at a certain baseline (typically in the range of 25 mm). This distance needs to stay constant to within a micron or a fraction of a micron. A mechanical shock or a slight variation in temperature could affect performance. In order to correct for that, you really need to understand your optical and hardware design.
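The sensitivity Rossi describes follows from the standard triangulation relation, where depth is focal length times baseline divided by the measured disparity between the two camera views. A small sketch shows how a micron-scale baseline shift propagates into the depth estimate (all numbers here are hypothetical, chosen for illustration, not ams specifications):

```python
# Stereo triangulation: z = f * B / d, where f is the focal length
# (in pixels), B the baseline between the two cameras, and d the
# disparity (pixel offset of the same point between the two images).

def stereo_depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between two calibrated cameras."""
    return focal_px * baseline_mm / disparity_px

f_px = 1000.0     # focal length in pixels (assumed)
baseline = 25.0   # mm, the typical baseline mentioned above
disparity = 50.0  # pixels, measured for some object (assumed)

z_nominal = stereo_depth_mm(f_px, baseline, disparity)
z_shifted = stereo_depth_mm(f_px, baseline + 0.01, disparity)  # baseline off by 10 µm

print(z_nominal, round(z_shifted - z_nominal, 3))  # -> 500.0 0.2
```

Even this simplified model shows why factory and real-time calibration matter: a 10 µm drift in the baseline, far too small to see, already shifts the reported depth, and the error grows with distance as disparity shrinks.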
Spectrum: What kinds of configurations, customizations, and options do you provide for companies looking to implement 3D sensing?
Rossi: It’s always possible to modify the sensor hardware and software. Depending on the lifetime production volume, ams will either quickly deliver standard, turnkey products or work with a client to develop fully customized hardware modules.