
TetraVue Says Its Lidar Will Dominate the Robocar Business

The startup's technology combines a camera with laser ranging to take 30 snapshots per second


A slow-motion compilation of black-and-white images of a tennis player, produced by TetraVue's flash lidar.
TetraVue's flash lidar collects 60 million range measurements per second through a series of flashes that last for just 40 nanoseconds each.
Photo: TetraVue

Standard automotive lidars scan a scene with moving laser beams; flash lidar instead illuminates the entire scene in an instant to take a snapshot. This method combines a camera’s high-resolution imaging with a lidar’s range finding, but so far it has been prohibitively expensive.

Now comes TetraVue, of Vista, Calif., with a system that it says can make the lidar’s 2D video feed as high-res as a regular camera’s and at the same time cheap enough to be the preeminent component in the sensor suite of tomorrow's self-driving cars. It uses a pulsed gallium arsenide diode laser as its flash and gauges the distance from each pixel in the image sensor to the point in the scene that pixel sees. In other words, it turns the 2D image into a 3D one.
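In practice, “turning a 2D image into a 3D one” means pairing each pixel with a range value and back-projecting it into space. Here is a minimal sketch of that step, assuming a standard pinhole-camera model; the function name and the intrinsics fx, fy, cx, cy are illustrative assumptions, not details TetraVue has published.

```python
import numpy as np

def depth_image_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (meters) into 3D points
    using a pinhole-camera model. Illustrative only: TetraVue's
    actual optics and calibration are not public."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # horizontal offset from the optical axis
    y = (v - cy) * z / fy   # vertical offset from the optical axis
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

A 2-megapixel frame processed this way yields roughly 2 million 3D points, 30 times a second.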

Flash lidar has been around for a while, particularly in military applications, where nobody counts the cost. What TetraVue says it brings to the table is a new way of measuring the distance to objects.

“We put an optical encoder between the lens and the image sensor, and it puts a time stamp on photons as they come in, so we can extract range information,” says Hal Zarem, chief executive of TetraVue.

That optical method has the advantage of scalability, which is why TetraVue’s system boasts 2 megapixels. And because the 100-nanosecond-long flashes repeat at a rate of 30 hertz, the lidar delivers 60 million range measurements per second. That’s high-definition, full-motion video.
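A quick back-of-the-envelope check of that figure, using only the numbers quoted above:

```python
# Data rate implied by the article's own numbers.
pixels_per_frame = 2_000_000   # 2-megapixel sensor
frame_rate_hz = 30             # one flash per frame, 30 flashes per second

measurements_per_second = pixels_per_frame * frame_rate_hz
print(measurements_per_second)  # 60,000,000 range measurements per second

# The laser itself is on for only 100 ns per flash:
duty_cycle = 100e-9 * frame_rate_hz
print(duty_cycle)  # 3e-06, i.e. the laser fires 3 microseconds per second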

“Because you get standard video as well as lidar for each pixel, you don’t have to figure which object the photon came from—it’s inherently fused in the camera,” says Zarem.

No other lidars will be needed, he adds. Translation: Say good-bye to all the other lidar companies you’ve heard about—Velodyne, for example. As for the other sensors, well, radars will survive, as will a few cameras to fill secondary roles such as showing what’s behind the car when you back up.

And because most of the elements are made in mass quantities—the gallium arsenide lasers are used, among other things, for removing hair from the body—they can be bought off the shelf for a trifling price. “In mass production, we think we can get to the cost points the industry requires, on the order of a few hundred dollars,” Zarem says.

Continental, the giant auto supplier, also has a flash lidar, notes Robert Nalesnik, TetraVue’s head of marketing. But he says Continental measures range by detecting not the optical but rather the electronic time of flight. That method requires precise timing, which means each pixel must be backed up by a whole lot of circuitry. That’s why Continental’s system has a pixel count in just the thousands or a few tens of thousands, Nalesnik says. 

He spelled out the difference between TetraVue’s method and the standard one in an email; a numerical sketch of both approaches follows his explanation below.

“Let's start with the easier one, electrical time of flight,” Nalesnik writes. “Light travels approximately 1 foot (30 centimeters) per nanosecond. You start a timer when you create a flash pulse and measure the time for the light to return to the detector. Dividing the time by two (because the light needs to hit the object and return) and applying the constant speed of light gives the distance to the object, and the distance resolution determines the timing accuracy that’s needed. To resolve distance to 1 foot requires timing accuracy to 1 nanosecond; to resolve distance to inches requires timing accuracy in the picosecond range. This is quite difficult to achieve, and it turns out that this limits the scalability....

“TetraVue takes a radically different approach, and inserts an optical intensity modulator between the lens receiving the light and the CMOS image sensor [an array of photodetectors, each one representing a pixel]. This modulator sweeps between transparent and opaque over the time that it takes for the flash pulse to travel from the farthest to nearest resolvable distance. The intensity received at each pixel is [compared] to a reference intensity, and the resulting ‘compensated intensity’ is directly proportional to the distance to the object represented by that pixel. Thus the distance to every pixel can be computed by a simple ratio of intensities. This approach does not require the precise timing that is necessary when computing time-of-flight in the electrical domain, and is relatively independent of the number of pixels, which makes it significantly more scalable….”
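To make the contrast concrete, here is a minimal numerical sketch of both ranging schemes. The time-of-flight half follows directly from Nalesnik’s description; the optical half assumes a linear modulator ramp with compensated intensity proportional to distance, as he describes, but the function names, the ramp direction, and the example numbers are illustrative assumptions, since TetraVue has not published its transfer function.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_tof(round_trip_ns):
    """Electrical time of flight: start a timer at the flash, stop it
    when the return pulse hits the detector, halve the time because
    the light travels out and back."""
    return 0.5 * (round_trip_ns * 1e-9) * C

def distance_from_intensity(ratio, d_min, d_max):
    """Optical method (sketch): the modulator ramps between transparent
    and opaque while returns arrive, so the intensity at each pixel,
    compared with an unmodulated reference, encodes range. Assumes a
    linear ramp with the compensated intensity proportional to
    distance, per Nalesnik's description."""
    return d_min + ratio * (d_max - d_min)

# Electrical ToF: ~1 ns of round-trip timing error maps to ~15 cm of
# range error, which is why inch-level accuracy needs picosecond timing.
print(distance_from_tof(200.0))  # ~29.98 m
print(distance_from_tof(201.0))  # ~30.13 m: 1 ns shifts range by ~15 cm

# Optical: a 40% compensated intensity over a hypothetical 0-200 m window.
print(distance_from_intensity(0.4, 0.0, 200.0))  # 80.0 m
```

The point of the second function is that nothing in it depends on per-pixel timing circuitry: the same intensity ratio can be read out of every pixel of an ordinary CMOS sensor, which is what makes the approach scale to megapixel counts.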

With full-motion, high-definition 3D video, not only can you improve self-driving cars, but you can also greatly ease the task of moviemakers during post-production. For instance, if a scene needs more lighting, you can add it afterward, getting all the proper effects in three dimensions, right down to the shadows the actors cast as they move around.

Movies may in fact constitute TetraVue’s first market. But there are hundreds of millions of cars and trucks that could use a better way to see the world around them, and that’s the market the company is focused on.
