New displays render images you can almost reach out and touch
A spaceship commander asks for a status update from his pilot as they gaze at a transparent, dome-shaped navigational display. In it they see a three-dimensional model of the ship and its orientation in space as it speeds toward the mysterious planet Altair IV and a fateful rendezvous with the demented Dr. Morbius.
This scene, from the 1956 film Forbidden Planet, features a primitive wood, metal, and plastic incarnation of what has become a cinematic sci-fi archetype--3-D displays that let you see things much as you do in the real world, though typically in miniature. The hyperrealistic 3-D display, whether it appears as the holodeck recreational environment in "Star Trek" or as the flickering holographic SOS sent by Princess Leia via R2-D2 in Star Wars, underscores not only a fundamental human longing but also a common assumption about high-end 3-D displays: they're way out in the future.
They're not. For decades we've had less-than-scintillating experiences watching monster movies through flimsy red-and-blue glasses and playing video games in wraparound headache-inducing goggles. Now, volumetric displays are finally here--displays that render images in a 3-D space rather than on a flat screen. But unless you're in the military, wrestle with high-end 3-D scientific visualizations, or are given to spending US $40 000 on impressive high-tech gadgets, you've probably never seen one.
A few small companies are just now emerging to try to carve out a piece of a market for volumetric displays that could be worth $1 billion by 2006, according to a study commissioned by my company, LightSpace Technologies Inc., Norwalk, Conn. These companies are pursuing two main technological approaches to displaying solid images electronically.
One is known as swept volume; it uses a high-definition projector or an array of lasers to bounce images off a screen that rotates so fast that the human eye perceives only a 3-D image floating in space. Among those pursuing the swept-volume approach are Felix 3D-Display, in Stade, Germany; Genex Technologies Inc., in Kensington, Md.; and Actuality Systems, in Burlington, Mass. (whose hemispherical displays bear an uncanny similarity to Forbidden Planet's navigation dome). The other approach, taken by LightSpace, is an all-solid-state design that uses a projector behind a stack of 20 liquid-crystal screens to create one solid image from a rapidly projected series of images.
All of these systems create 3-D images that require no special eyewear, produce no eye fatigue or headaches, and are visible over a wide field of view from several meters away by many people. The first buyers are expected to put them to use in scientific, engineering, medical, and security chores, but eventually they are likely to wind up in classrooms and living rooms (adding a whole new dimension, literally, to electronic games). But for now, manufacturers are focusing on technical applications that can justify the machines' initially high prices. That means volumetric displays will first be used to help people engaged in high-stakes endeavors--a doctor guiding a catheter inside a beating heart, a geologist developing plans to extract oil from deep underground reservoirs, or a baggage screener looking for knives and bombs in carry-on luggage.
If all goes well, economies of scale could bring prices down to a point where all sorts of intriguing applications become possible. Real estate agents could give people realistic walk-throughs of properties anywhere on the planet. Fashion designers tweaking the lines of new evening gowns could see how variations hang on virtual models. Serious gamers armed with souped-up 3-D graphics cards could boost cars and splatter zombies in addictively absorbing environments that would make 3-D games played on 2-D monitors seem like creaky old cartoons.
Developing displays that give users an intuitive, almost visceral experience of 3-D data is a tall order, made all the more difficult by the complex brain functions such displays must trigger. To understand the challenge, you've got to first understand how we see the world. We perceive three dimensions because our brain combines the slightly different images seen by each of our eyes. A subtle interplay of optical illusions, eye-muscle tension, image focus and overlap, and head motion augments those two images with information our brains use to create the perception of 3-D.
Missing from all 2-D displays are the physical cues that guide our brains in processing a 3-D scene. Just take a look around you. That small difference between the images seen by each of our eyes is called binocular disparity. It forces our eyes to perform two other actions that are crucial to seeing in 3-D: they must converge, or point toward a common viewing location where the images from both eyes overlap, and focus at that depth. Also, movement of the viewer's head--which reveals previously obstructed parts of a scene--gives the brain vital data for the 3-D image it constructs. That movement-engendered depth sense is called motion parallax.
Conventional stereoscopic 3-D display technologies, like the red-and-green glasses that brought depth to such movie classics as 1954's Creature from the Black Lagoon, provide two images to the viewer, a slightly different one for each eye. The brain resolves these into a 3-D image, but it is necessarily a yellowish monochrome one. More recently, we've advanced to goggles that use liquid-crystal shutters or light polarization to direct different images to the right and the left eye. They're used for some technical visualizations, games, amusement park rides, and 3-D movies. But even the best goggles are hard on your eyes and difficult for most people to use for more than a few minutes.
Autostereoscopic displays dispense with the glasses, instead requiring users to position themselves precisely in front of the display. Most of these kinds of displays use special filters placed over the screen's pixels or, in the case of some 3-D liquid-crystal displays, inserted between the backlight and the screen to direct different images to each eye. Commercial systems can produce a reasonably convincing full-color image, but most people can stare at them for only a few minutes before eyestrain ensues, or they shift in their seats and lose sight of the 3-D image. Other displays made by companies such as Dimension Technologies Inc., in Rochester, N.Y., have up to nine different viewing perspectives. But the additional views come at the expense of resolution: the total LCD pixel count is divided by the number of views, resulting in low-resolution images unsuitable for computer-aided-design applications or medical visualization.
Glasses-based stereoscopic and glassless autostereoscopic displays cause physical discomfort because they force our eyes into unnatural contortions to resolve the image. A viewer's eyes must remain focused at the depth of the display but must converge, or point, to depths either in front of or behind the display to make the images from the two eyes overlap. This mismatch between focus and convergence strains the eyes, resulting in significant visual fatigue, headaches, and even nausea in a majority of viewers.
Yet another kind of 3-D display, the hologram, allows viewers to see 3-D images comfortably. It has the advantage of not requiring the brain to combine 2-D images into 3-D, but it isn't electronic. Most holograms are fixed in film, so they can't be manipulated, rendering them useless for interactive technical purposes, at least for now [see sidebar, Merging 3-D Imaging and Holography].
Volumetric displays share holography's ability to create 3-D images that are easy on the eyes and less taxing on the brain than conventional 3-D displays. Their images consist of a set of voxels--volumetric pixels--distributed throughout an enclosed 3-D volume, a space that could look like anything from a half-meter-diameter crystal ball to an unusually blocky monitor. Because voxels appear at different physical depths inside the volume, our eyes converge and focus on them just as they would on any solid object.
What is unique about the new generation of volumetric displays is that they work with conventional 3-D graphics programs, so scientists and engineers can easily and intuitively manipulate images of such things as drug molecules, oil fields, and satellite orbits. They also send virtually all of the requisite physical cues to your brain, which is especially important when you try to visualize the much more complex and much less familiar images associated with high-tech applications.
Such a natural 3-D experience has been a long time in the making. Way back in the 1960s, experimenters came up with a display based on a device called a varifocal mirror. This display used a reflective Mylar membrane stretched over a loudspeaker that emitted a low-frequency growl in sync with an image coming from a cathode-ray-tube monitor. As the speaker caused the Mylar to bow out or suck in, the reflected image of the CRT gave the illusion of an image moving toward or away from the viewer.
In the 1990s, researchers made crude 3-D images by exciting atoms inside light-reflecting vapors and solid glass cubes with lasers and using mirrors to create monochromatic, skeletal outlines of objects such as spheres. These laboratory curiosities never made it to market because they could not be manufactured reliably, nor could they produce full-color images or work with existing graphics hardware.
The swept-volume displays developed by Actuality, Felix 3D, and Genex use more commercially viable approaches. Each has two main parts: a projector and a projection surface that is mounted on an axis and driven by a motor to spin it at a high rate. Recent advances in projection technologies, including cheaper, brighter laser diodes and, in particular, Digital Light Processing (DLP) technology from Texas Instruments, in Dallas, have made the swept-volume display an effective piece of machinery for high-end visualization.
The projection surface can be a helix-shaped, white piece of acrylic, as in Felix 3D's and Genex's systems, or a remarkably thin, translucent piece of plastic, as in Actuality's Perspecta display. In Actuality's system, this round piece of plastic bisects a glass dome and is mounted on a platform that is driven by a motor to rotate it at a very high rate. In each case, the projection surface scatters light beams projected from below so that the voxels appear to emanate from particular points inside the dome. The projection screen spins fast enough to render it invisible in a darkened room; all that appears is the projected 3-D image [see diagram, "Swept Away"].
Two projector technologies are currently used in swept-volume displays. The Felix 3D and Genex systems use three lasers: one red, one green, and one blue. The Perspecta uses TI's DLP technology. At the heart of the DLP is a digital micromirror device, a chip hosting an array of a million or so microelectromechanical mirrors. In the three-chip version of the DLP, white light from an arc lamp shines into a four-sided prism, which splits it into red, green, and blue beams and directs each to a dedicated digital micromirror device on one of the prism's faces. The red, green, and blue beams are then recombined and sent through the open fourth face of the prism to the projection lens, and then to a mirror, which bounces the pixels up to the projection screen, where the full-color image appears.
A lot of what goes into a good 3-D display is clever use of psychological cues, which essentially trick our minds into seeing flat images as 3-D. For example, imagine that you're gazing into a Perspecta display to view a human brain, which appears to be floating in space. The image is composed of more than 200 different images that are being projected sequentially by the three-chip DLP onto the plastic screen so fast that your eyes deceive you through an effect called persistence of vision.
Persistence of vision is another tool in the display engineer's toolbox. The eye holds an image for an instant after the stimulus that produced it disappears. This same phenomenon, which helps us perceive motion in film, as opposed to separate frames flickering by, aids our brains in constructing a single 3-D image from hundreds of discrete images being projected onto a rapidly spinning screen.
These individual images are often likened to slices of an apple arranged around its core. In the Perspecta, the projector acts as a kind of strobe light, illuminating the screen for about 100 microseconds for each image slice, to produce volumetric images measuring up to 25 centimeters in diameter. Because the projection surface is translucent, the image projected onto it is visible on both sides; the full image is produced by a half-revolution of the screen. By spinning the screen at 15 revolutions per second and projecting a different image onto it more than 200 times per revolution, the Perspecta in effect shows 6400 frames per second, more than enough to fool brains and eyes accustomed to watching movies at a mere 24 frames per second.
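The timing figures above can be sanity-checked with a little arithmetic. This sketch uses only the rotation rate, strobe length, and effective frame rate quoted in the text; the slice counts it derives are estimates, not Actuality's specifications.

```python
# Back-of-the-envelope check of the Perspecta timing quoted above.
# Inputs are the article's figures; derived slice counts are estimates.

REV_PER_SEC = 15        # rotation rate of the projection screen
STROBE_S = 100e-6       # illumination time per image slice (~100 microseconds)
EFFECTIVE_FPS = 6400    # effective frames per second quoted for the display

# Upper bound on slices per revolution at this strobe length:
max_slices_per_rev = (1 / REV_PER_SEC) / STROBE_S
print(f"max slices per revolution: {max_slices_per_rev:.0f}")   # 667

# Slices per revolution implied by the quoted frame rate:
slices_per_rev = EFFECTIVE_FPS / REV_PER_SEC
print(f"implied slices per revolution: {slices_per_rev:.0f}")   # 427

# The full volume repeats every half-revolution, so the volume refresh is:
volume_hz = 2 * REV_PER_SEC
print(f"volume refresh rate: {volume_hz} Hz")                   # 30 Hz
```

The implied ~427 slices per revolution sits comfortably under the ~667 that a 100-microsecond strobe allows, leaving dark time between flashes--one reason the strobing scheme works.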
DLP-based displays have one big advantage over laser-based displays: bandwidth. The three-chip DLP can process about 2.75 gigabytes of image data per second--enough to render complex, full-color, lit, and shaded 3-D images. Laser-based swept-volume displays, which use a single beam scanning the projection surface and process only half a megabyte of image data per second, can at best produce a simple, monochromatic 3-D line drawing. Such an image contains only about 10 000 voxels, compared with the Perspecta's 100 million voxels.
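The scale of that gap is easy to quantify from the article's own numbers:

```python
# Rough comparison of the two swept-volume projector technologies cited above.
DLP_BPS = 2.75e9      # bytes/s processed by the three-chip DLP
LASER_BPS = 0.5e6     # bytes/s for a single scanning laser beam
print(f"bandwidth ratio: {DLP_BPS / LASER_BPS:.0f}x")    # 5500x

DLP_VOXELS = 100e6    # full-color Perspecta image
LASER_VOXELS = 10e3   # monochromatic line drawing
print(f"voxel ratio: {DLP_VOXELS / LASER_VOXELS:.0f}x")  # 10000x
```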
If you've never seen a swept-volume display, you may have a hard time understanding the wonder it can inspire. Suffice it to say, oohs and aahs are pretty commonly heard as first-time viewers stroll around a diaphanous image that just seems to float in space. But there are some significant limitations to the technology. First, any swept-volume machine contains a rapidly spinning component that needs to be carefully balanced to control vibrations. At the very least, vibrations can cause the voxels to smear through the volume, blurring the image. At worst, the entire mechanism could violently pull apart, thanks to what is known as gyroscopic precession, in which a force applied to the spinning assembly--say, by a wave hitting a ship--produces a reaction 90 degrees away in the direction of rotation.
So using a swept-volume display on a ship, plane, motor vehicle, or intergalactic star cruiser would pose some challenges. In addition, to maintain a smooth 3-D image over the 360-degree field of view, the video projector must produce numerous image slices at a very high frame rate, which is a big problem for laser-based displays and can even challenge the output capacity of the DLP used in the Perspecta. Consequently, swept-volume displays can display at most hundreds of colors and therefore cannot create lit, shaded, and texture-mapped 3-D images.
Like the Perspecta, the LightSpace DepthCube uses a three-chip DLP. But instead of a single projection screen, its casing houses a stack of 20 liquid-crystal projection screens, each about 5 millimeters from the next [see diagram, "Peering into the DepthCube"].
Each screen sandwiches liquid crystals between two panes of glass treated with an antireflective coating. When a voltage is applied to a screen, its liquid crystals line up with the direction of the light being projected onto it, and light passes directly through the now-transparent screen. When the voltage is removed, the liquid crystals relax into random orientations. In that state, they scatter light shone onto them, creating a voxel that looks as if it emanated from that surface location, and not from the DLP projector at the back of the display.
At any given moment, 19 of the screens are transparent, and only one is in a white-scattering state. However, relying on persistence of vision, we sequence the image back and forth across the 20-screen stack. Because the screens at the back of the monitor are physically farther away from you than the screens at the front, your eyes converge and focus naturally on voxels wherever they appear.
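A minimal sketch of that sequencing logic, assuming a simple front-to-back sweep--the actual drive electronics and timing are LightSpace's own:

```python
# Illustrative sequencing for a 20-screen stack: exactly one liquid-crystal
# screen scatters (voltage off) at a time while the projector shows the
# matching depth slice; the other 19 stay transparent (voltage on).

NUM_SCREENS = 20
SLICES_PER_SEC = 1200   # projector frame rate from the article

def sequence(num_volumes):
    """Yield (screen_index, volume_number) pairs, sweeping the stack."""
    for volume in range(num_volumes):
        for screen in range(NUM_SCREENS):
            # At this step: screen `screen` scatters, all others transmit.
            yield screen, volume

# One full volume = 20 slices, so the volume refresh rate is:
print(SLICES_PER_SEC / NUM_SCREENS, "volumes per second")  # 60.0
```

At 1200 slices per second across 20 screens, the whole volume refreshes 60 times a second--well above the flicker threshold that persistence of vision requires.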
With the projector sending out 1200 image slices per second, you'd think that the composite 3-D image, consisting of 20 two-dimensional images, would appear chopped up from one screen to the next. But the display exploits another psychological component of 3-D perception to trick viewers, in effect, into seeing planes between the physical screens, virtually eliminating jitter and jagged edges.
We call this technique depth anti-aliasing, and it works like this: if a viewer is shown two 50 percent bright voxels at the same x, y location on two adjacent DepthCube planes, the viewer will perceive a single voxel at 100 percent brightness halfway between the two planes. In most cases the viewer is completely unable to see the individual image slices. Depth anti-aliasing effectively converts the DepthCube's 15.3 million physical voxels (1024 by 748 by 20) into more than 465 million perceived voxels (1024 by 748 by 608).
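The brightness-splitting rule is simple enough to sketch. The 32 perceived sub-steps per gap assumed below are inferred from the voxel counts above (608 perceived depth levels across 19 gaps), not a published specification:

```python
import math

# Sketch of depth anti-aliasing: sharing one voxel's brightness between the
# two adjacent physical planes makes it appear at an intermediate depth.
PLANES = 20
SUBSTEPS = 32  # inferred perceived depth levels between adjacent planes

def split_brightness(z):
    """Map a fractional depth z in [0, PLANES - 1] to (plane, weight) pairs."""
    lower = math.floor(z)
    frac = z - lower
    if frac == 0:
        return [(lower, 1.0)]  # exactly on a physical plane
    # Brightness is shared in proportion to proximity to each plane.
    return [(lower, 1.0 - frac), (lower + 1, frac)]

print(split_brightness(7.5))  # [(7, 0.5), (8, 0.5)] -> seen halfway between
print((PLANES - 1) * SUBSTEPS, "perceived depth levels")  # 608
```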
As in other volumetric displays, 3-D images within the DepthCube are visible over a wide and continuous field of view and have all of the depth cues of real 3-D objects, except for opacity. Light cannot block other light, and since these 3-D images are made of light, one image cannot block, or occlude, another. In other words, 3-D images are translucent, not opaque. Still, when looking at a DepthCube display you see a different image depending on your perspective--the images possess both vertical and horizontal motion parallax, which lets the viewer see around translucent objects in the foreground to reveal previously obstructed objects.
Unlike swept-volume displays, the DepthCube is entirely solid-state, so it is not affected by vibration. And, because the DepthCube is intended for front viewing rather than 360-degree viewing, its DLP projector can operate at a more modest 1200 frames per second and still provide 15-bit color (or 32 768 mixed colors), which is sufficient for rendering the optical effects of lighting, shading, and texture mapping that make 3-D images really pop.
Another advantage of this approach is its relative compatibility with existing 3-D graphics software. Because DepthCube images are projected onto 2-D planes, they have a Cartesian geometry, making the DepthCube compatible both with software that uses the OpenGL graphics language, common in technical 3-D modeling software, and with standard 3-D graphics cards.
Modern 3-D graphics cards are not completely 3-D. They translate, rotate, and scale the geometry for 3-D images using x, y, and z (or depth) information but ultimately produce 2-D images. So, for instance, when a bird flies in front of a tree, the pixels corresponding to the leaves, twigs, and branches disappear, replaced by the pixels corresponding to the image of the bird. The card accomplishes this feat by storing each pixel's color at its x-y location in one frame-buffer memory and its z-axis, or depth, information in a second frame-buffer memory. Using proprietary software we call the GLInterceptor, this depth information can be extracted in real time to create DepthCube images from nearly any OpenGL application.
After a 3-D application has filled the graphics card's frame-buffer memories with its 3-D image, our software extracts the color and depth information--the z location--for each pixel and passes it on to the DepthCube's 3-D frame buffer, which holds image data for all 20 screens. The color and x-y location are sent to whichever of the 20 screens corresponds to the correct depth, or z location, creating a voxel. This is the key to showing different-colored voxels at different depths at the same x-y locations, to form, say, a multicolored ribbon [see photo, "That's Deep"].
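A hedged sketch of that slice-assignment step, assuming depths normalized to [0, 1] as in a standard z-buffer; the function and data layout here are illustrative, since GLInterceptor itself is proprietary:

```python
# Illustrative slice assignment: route each pixel from a color buffer and a
# z-buffer to one of 20 screens according to its depth.

NUM_SCREENS = 20

def assign_slices(color_buffer, z_buffer):
    """Build per-screen frame buffers: {screen_index: {(x, y): color}}."""
    slices = {s: {} for s in range(NUM_SCREENS)}
    for (x, y), color in color_buffer.items():
        z = z_buffer[(x, y)]                            # 0.0 = near, 1.0 = far
        screen = min(int(z * NUM_SCREENS), NUM_SCREENS - 1)
        slices[screen][(x, y)] = color
    return slices

# Toy example: a near red pixel and a far blue pixel.
color = {(0, 0): (255, 0, 0), (1, 0): (0, 0, 255)}
depth = {(0, 0): 0.02, (1, 0): 0.97}
out = assign_slices(color, depth)
print(out[0])   # {(0, 0): (255, 0, 0)}  near pixel lands on the front screen
print(out[19])  # {(1, 0): (0, 0, 255)}  far pixel lands on the back screen
```

In the real display this routing would be combined with the depth anti-aliasing described earlier, weighting each pixel across the two screens nearest its depth rather than snapping it to one.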
We are now fine-tuning the DepthCube architecture for different markets. For example, there is a modest market for very expensive 3-D displays with diagonals of more than 50 inches. These displays, costing more than $100 000, are useful in air-traffic-control rooms and for collaboration among engineers and scientists in the automotive, aerospace, and oil and gas industries. However, the largest market for volumetric 3-D displays is for single-user desktop displays that cost less than $5000 and can be used continuously for any 3-D visualization you can imagine. We hope to demonstrate such a display later this year.
Success with less-expensive displays for technical applications will open the door to institutional and consumer markets. High school biology students could forgo the queasy trial of hands-on animal dissections by simply gathering around the classroom volumetric display to view the innards of any creature in the lesson plan. Home shopping, or even dating, via the Web might be less fraught with uncertainty. Is that a crack in the vase I want to buy on eBay or merely a scratch in the paint? Are those hair plugs or is that mane really all his? Amateur cosmologists could probe the deepest reaches of space, orbit stars, and pass through nebulae, or survey far-off worlds. Even forbidden ones.
About the Author
Alan Sullivan is president and CEO of LightSpace Technologies Inc., Norwalk, Conn. He was formerly the CTO of Vizta3D/Dimensional Media Associates, where he began the development of the DepthCube technology.