Interface Lift

User interfaces get an extreme makeover to cope with today’s torrent of information

13 min read
Image by Viktor Koen

You launched your Web browser this morning and typed “driver circuit” into a search engine. You’re looking for design tips for integrating a light-emitting diode onto a circuit board. Instead of the typical text list of sites, a hodgepodge of references to stepper motors, audio loudspeakers, and the Formula One racing season (those “drivers” do follow “circuits”), your screen fills with colored balls, nested like groups of solar systems within solar systems, each labeled with a general term. You ignore the balls labeled “pro racing,” “solenoid drivers,” and “power supplies” and click on the circle labeled “LED drivers,” which brings you to a group of squares that are Web links to sites with information about LED driver circuits. You found what you were looking for in seconds.

In May, San Francisco’s Groxis Inc. rolled out this new type of user interface as part of a Java plug-in for Internet browsers. As a new way of looking at information, it may catch on. Or another, equally unconventional means of interacting with a computer might take over instead. It may look like strands of DNA, or it might look like bubbles, paper tossed on a desk, or a timeline. It may float in a three-dimensional dome. Or it might look like something we can’t even imagine today.

Sources for interface metaphors abound. In the past, we leaned heavily on the world of the office and its folders and desktops, because we thought we were building interfaces mainly for office workers. But today, information work and information workers are everywhere. The FedEx carrier is an information worker, collecting data on packages picked up and delivered and submitting it to FedEx’s giant, very accessible, online database. Retail store clerks may be information workers when they enter new inventory into the store’s database as they stock the shelves. Nurses are information workers when they feed notes previously scrawled on illegible or inaccessible charts directly to digital systems, making the data available to the patient as well as the doctors anywhere, anytime.

This means new metaphors. Many will come from life sciences. Others may come from the health care or other industries, as these become information-dense environments. An interface for a next-generation technology might come from the gaming world, where fast visualization metaphors abound. What is sure is that someday our great-grandchildren will look back and laugh at the unsophisticated ways we accessed and navigated data.

But change has indeed begun, as demonstrated by a recent flurry of product introductions and research announcements.

The interface for documents, folders, and menus that is familiar to computer users today is nearly 30 years old. Invented at Xerox’s Palo Alto Research Center, popularized by Apple Computer, and then embraced by Microsoft, it was a huge breakthrough compared with the command-driven interface that preceded it [see photo, “Star Power”]. It has served the computer world well; its death has long been predicted, but has yet to occur.

However, its desktop look dates back to a time when the typical personal computer had less computing power than today’s cellphone, and it was created before the advent of Web pages, digital cameras, and Apple iPods. Indeed, it turned out to be versatile enough to handle these disparate forms of information, but it was by no means the most efficient way of doing so. For today, instead of a few dozen megabytes of stored data, users have tens of gigabytes or more, and access to unimaginably large data troves over the Internet. A typical user has one set of folders for text documents, another for organizing e-mail, and separate photo-sorting software for images—but what the user really needs is an easy way to connect and navigate the disparate data relating to a project or topic. Today, companies spend hundreds of millions of dollars a year training users on new applications and bailing out others who’ve gotten lost in the intricacy of their systems. Better interfaces could solve these problems or at least lessen their impact—in cost, frustration, and lost productivity—on businesses and users.

The next big thing in user interfaces is unlikely to be a single, all-purpose interface, used for a host of different tasks. Instead, three interface categories are emerging. First is the browser interface, perhaps not the most efficient tool for a particular task, but with it users can move easily from one computer to another. Second is a special-purpose interface for navigating large collections of information such as the Web. And finally, there are a variety of interfaces that computer users are beginning to acquire from both established and new companies for managing their own collections of information.

Designing new user interfaces requires a tradeoff. You can either exploit the newest interface theories—using radically new metaphors, software, or devices—and risk alienating hordes of experienced users, or you can exploit the familiarity of your users with a huge, installed base of existing products. Doing both is difficult.

Microsoft Corp., in Redmond, Wash., faces this dilemma every time it releases a new version of Word, its popular word-processing program. Many Word users have already spent a lot of time with the product, memorizing the locations of tools on the menu bars and the shortcuts to various functions. When Microsoft rearranges the menus to make them more efficient, the company is greeted by howls of protest from those who intuitively click where “Bold” used to be, for example, and find they’ve indented a paragraph instead. Many people simply continue to use their old versions of the software, refusing to be inconvenienced in the short run by a new interface, even though it may ultimately be more efficient.

This interface familiarity also protects products from being usurped by competitors. One free suite of office tools, available for Microsoft Windows and Linux, does 75 percent of what Microsoft Office does—more than enough for the typical user. But its interface is organized completely differently; nothing is in the same place. And it turns out that many users would rather continue paying Microsoft for its familiar version than retrain on the free one.

What happens, therefore, isn’t surprising. The owners of software with many entrenched users are reluctant to make big changes, and their interfaces tend to be refined incrementally and retain their familiar look and feel.

Microsoft’s desktop interface will evolve somewhat with its next operating system, Microsoft Windows Vista (formerly Longhorn), and will also be offered on Windows XP, both in 2006. The company is making enhancements, such as using transparency to guide the user through the navigation process. For example, an application or process not used for a while would become more transparent, seeming to fade away, although still visible. This lets the user focus on the task at hand while maintaining instant access to other software and data. Shading and rendering will be improved, and a 3-D look will help guide the user through complex processes.

Sidebars, or panes, that appear in Windows and Office applications will also be used more, being made accessible to other developers for use in their applications. Nevertheless, a Microsoft spokesman assured IEEE Spectrum that the intention is to “continue to provide the Windows user interface look and feel.”

Many smaller companies that build products for the mainstream market are also caught up in this familiarity problem: they want to retain their customers as well as their relationship to mainstream products. So they’re not going to opt for rapid or startling changes, either.

As a result, most new interfaces are not coming from established companies. Rather, they’re sprouting from new ventures, research laboratories, and independent inventors. This limits their immediate impact on the market, but if the design is obviously valuable, there’s a good chance that hearts and minds may eventually be won over.

The browser interface came into being in the 1990s solely as a way of letting users find information on the Internet. It evolved into a general-purpose interface, used as the front end for applications beyond Internet searching. Its characteristics include a standard bar, in which you insert a Web address, and pull-down menus with additional navigational tools; the applications appear within this framework.

Browser interfaces exist for traditional applications, such as word processing and spreadsheets; for specialized applications used within companies, such as inventory tracking; and for repetitive tasks, such as filling out forms. Because nearly all computer users today have Internet experience—and for some users it is their entire computing experience—making an interface browserlike ensures that it seems familiar to Internet users. Familiarity leads to quicker and more intuitive learning. Windows itself now uses a browser-style interface to navigate files. Business applications accessed through a local network rather than stored on individual computers—company manuals, time cards, and the like—often rely on a browser interface.

The big advantage of the browser interface is that most machines already have some kind of browser installed, so you don’t need to have specialized software to run applications stored on a remote server. This is a major appeal of Web-based mail applications. Although they are less efficient than specialized e-mail software that is run locally, they enable a user to check mail from any Internet-connected computer. When you check your e-mail from a friend’s computer, you are most likely using a browser interface to an e-mail application, even if you typically use a dedicated e-mail application from your home computer.

On the other hand, the browser interface’s purpose is fairly simple and straightforward: to navigate the Internet and view information. It’s not really designed to be an interface for running complex software applications like word processors and spreadsheets, and using it as one leads to too many mouse clicks or keystrokes, wasted time, user frustration, and errors. Generally, your active “state”—that is, what you were doing in an application the last time you were there—can’t be preserved. So you need to tell the computer repeatedly many things you think it already knows—like where you were when you left the application or recent changes that have been made to format or content.

A variety of tricks make the browser interface more useful. If you allow it, Internet applications may store cookies, or bits of information, on your computer to remind the applications of what they already know about you. Or you might use a plug-in device that has details about you and your applications and data, such as a Smart Card or some type of memory-containing key that plugs into your computer’s USB port.
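The cookie mechanism described above can be sketched with Python's standard http.cookies module. The names and values here ("last_doc", a hypothetical filename) are purely illustrative, not taken from any particular application:

```python
from http.cookies import SimpleCookie

# Server side: remember a bit of the user's state by sending
# a Set-Cookie header along with the response.
cookie = SimpleCookie()
cookie["last_doc"] = "report-q3.txt"   # hypothetical piece of state
cookie["last_doc"]["max-age"] = 86400  # keep it for one day
print(cookie.output())                 # e.g. Set-Cookie: last_doc=report-q3.txt; Max-Age=86400

# Browser side (next visit): the stored cookie comes back in the
# Cookie request header, so the application "remembers" the user.
returned = SimpleCookie()
returned.load("last_doc=report-q3.txt")
print(returned["last_doc"].value)      # report-q3.txt
```

The same round trip—state written out by the server, echoed back by the browser—is what lets an otherwise stateless browser interface pick up where the user left off.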

Lately, interface and application designers have been looking into ways of extending the browser interface to provide a richer graphical user interface, or GUI. The first to go public with such a product is IBM Corp., with Workplace, software it sells to companies that lets them run a variety of traditional business applications remotely over the Web, including customer contact managers, project planners, spreadsheets, word processors, and e-mail programs. This kind of interface is stored on a server and downloaded on demand to a user’s desktop. It has application-specific menus for a wide variety of applications that are much more efficient than the general menus of a typical browser.

Navigating large collections of information is difficult and time-consuming. We can create these huge collections easily—from crawling the Web with a search engine to assembling mixed-media collections of documents, databases, audio, and video. But sorting through this information can be a nightmare. A second, more specialized interface addresses this problem.

Let’s assume you have an interest in Procter & Gamble Co. The Web contains an enormous amount of information about this company—product information, case studies, newspaper articles, details of product recalls, videos of commercials, press releases, manufacturing process data, podcasts that mention the company, and blogs by employees and consumers. The number of individual items, as well as the sheer size of the database, is astronomical.

Traditional search engines can be flummoxed when attempting to sort through this data. Say, for instance, you are trying to find the source of a quote you recall reading or hearing about the company. Google will lead you to that quote easily only if it has been frequently referenced. A traditional search might work if you remember at least part of the quote exactly—for example, if you recalled that the speaker said the word “terrible.” But if you misremembered the word as “horrible” or “disgusting,” traditional search engines would come up short.
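The gap between exact keyword matching and approximate matching can be sketched with Python's standard difflib module. This is a toy illustration of the problem, not how any commercial search engine works; the sample sentences are invented:

```python
import difflib

# A tiny stand-in for a collection of sentences gathered from the Web.
snippets = [
    "The recall was a terrible moment for the company.",
    "Quarterly results exceeded expectations.",
]

def exact_search(word, docs):
    """Keyword search: hits only if the word appears verbatim."""
    return [d for d in docs if word in d.lower()]

def fuzzy_word_match(word, docs, cutoff=0.7):
    """Approximate search: also hits near-misses such as
    'horrible' for 'terrible' (edit-distance-style similarity)."""
    hits = []
    for d in docs:
        words = d.lower().strip(".").split()
        if difflib.get_close_matches(word, words, n=1, cutoff=cutoff):
            hits.append(d)
    return hits

print(exact_search("horrible", snippets))      # [] -- the misremembered word finds nothing
print(fuzzy_word_match("horrible", snippets))  # the 'terrible' sentence is found anyway
```

The exact search fails on the misremembered word, while the approximate match still surfaces the right sentence—the kind of forgiveness the newer navigation interfaces aim for.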

Researchers have long tried to find new metaphors to solve this “needle in a haystack” problem and locate information quickly and easily.

One such product is Star Tree, developed by the Information Interfaces group at Xerox Corp. in Stamford, Conn., and a company it has created to sell its products, Inxight Software Inc., in Sunnyvale, Calif. Star Tree products take vast hierarchies of information and display them in ways that the human eye and mind can readily understand.

Taking a huge collection of information—our “everything on the Internet about P&G” example—Star Tree will arrange it into sets of subtopics, either user-selected or selected automatically: products, customers, external press reports, and so on. These groupings of Web hits are placed into subtopics that look like galaxies [see photo, “Galaxies of Information”]. The user moves the cursor around this universe, clicking on different galaxies or constellations within a particular galaxy.

Whichever area is clicked, Star Tree’s universe reorders itself, making the item clicked upon larger and changing the size of the other items, depending on how relevant they are to the selected item. Eventually, you narrow in on one section of one galaxy, moving finally to individual pieces of information. Star Tree can also present this data in list form, for users who find the galaxy metaphor too exotic.

Also tackling this problem, IBM Corp. has developed an infrastructure called WebFountain, for connecting vast amounts of text in a variety of formats collected from the Web or other sources [see “A Fountain of Knowledge,” IEEE Spectrum, January 2004]. It’s similar to Star Tree in that the software takes huge amounts of information from various sources and automatically categorizes it. WebFountain does not include presentation software—that is, the graphical or text display that the user sees. Rather, it is intended as a foundation for such interfaces.

Factiva, in New York City, a business news and information aggregator service owned by Dow Jones Reuters Co., in Princeton, N.J., is one of the first companies to build upon WebFountain, presenting information to its clients through a multipanel interface of its own design.

The idea of using information bubbles as interfaces emerged in 2002 from two apparently unrelated groups of researchers virtually simultaneously. Groxis and Cloudmark, both in San Francisco, base their data navigation interfaces on the concept that information is really a kind of node, or bubble, and that related bubbles can be nested one inside another. These interfaces also rely on color cues based on the category, importance, or urgency of information to make navigation easier.

Groxis’s version of the bubble interface, Grokker, has been available as stand-alone software for several years and was used only by a small community, before its recent release as a Java plug-in [see photo, “Grokking It”].

Give Grokker a hierarchical database and the software carves it into a series of colored balls, with each ball representing one topic. In the P&G search, these balls include news, market research, and brands. Look at a topic and you can zoom in on its contents, which are represented as smaller and smaller balls. For example, a search for P&G products turns up balls for “new products,” “paper products,” “care products,” and “boycotted products.” Go up in the hierarchy and you can view many balls and their relative relationships. Search for another topic and the information rearranges itself accordingly. Grokker can represent Web searches, corporate databases, or mixed data collections.

Cloudmark’s bubble interface is called Information DNA. This interface grew out of the company’s efforts to improve its e-mail antispam software. Cloudmark’s engineers discovered that they could find patterns in large e-mail collections that are like DNA—that is, certain patterns represent various kinds of spam, such as fraudulent mortgage offers or pornography, that can be consistently identified. To help with the identification, the engineers created mathematical algorithms, which are applied to e-mail to produce a screen filled with bubbles whose sizes and colors indicate collections of safe information and of spam.
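The idea of recurring patterns acting as a spam "fingerprint" can be illustrated with a generic shingling-and-hashing sketch in Python's standard library. This is an assumption-laden toy, not Cloudmark's actual algorithm; the sample messages are invented:

```python
import hashlib

def signature(text, k=4):
    """A crude message fingerprint: the set of hashes of all k-word
    shingles in the normalized text. Similar messages share many
    shingles even when a few words are swapped."""
    words = text.lower().split()
    shingles = [" ".join(words[i:i + k]) for i in range(len(words) - k + 1)]
    return {hashlib.sha1(s.encode()).hexdigest()[:8] for s in shingles}

def similarity(sig_a, sig_b):
    """Jaccard overlap between two fingerprints, from 0.0 to 1.0."""
    return len(sig_a & sig_b) / len(sig_a | sig_b)

known_spam = signature("act now for the lowest mortgage rates in the country today")
incoming   = signature("act now for the lowest mortgage rates in the nation today")
ham        = signature("the project meeting has moved to thursday at ten")

print(similarity(known_spam, incoming))  # high: mostly the same shingles
print(similarity(known_spam, ham))       # near zero
```

A mutated copy of a known spam message still shares most of its fingerprint, while legitimate mail shares almost none—which is why such patterns can be identified consistently and mapped to the sizes and colors of the interface's bubbles.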

The third problem being tackled by new interfaces is organizing the information you create or collect. This includes photos, videos, and audio files along with traditional text documents—a much more complex task than the designers of the documents, folders, and menus interface tackled back in the 1970s.

The first product to address this challenge directly is EverNote, from EverNote Corp., Sunnyvale, Calif. The company made the PC version of the software available for free download in June and promises a Macintosh version soon. EverNote plans to charge a fee for an expanded version that will synchronize data from PDAs and smart phones along with computers.

The idea behind EverNote is simple: it keeps all your computer files—no matter what type or how they were created—in a single, chronological manuscript holding everything [see photo, “Time Traveler”]. You can search this manuscript by keywords, categories, or other designations. Or, probably most useful, you can look at graphical representations of your files on a timeline, based on when each item was obtained or created.

The graphics on the timeline are small versions of each document in its original format. That is, documents are not represented by generic icons. A document looks like a note or a letter, a handwritten note looks like handwriting, and a Web site looks like the actual Web site in miniature. People tend to remember more or less when they created or obtained a document—not the exact date, perhaps, but often the month or season. Because users also have a visual memory of that document, the graphical timeline enables them to select the right document quickly even though they can’t read the text.

The basic idea behind EverNote is not new: David Gelernter of Yale University, in New Haven, Conn., implemented such a system in Lifestreams in the late 1990s; a version was released for the Apple Newton handheld computer in 1993. Lifestreams was a research project that kept all of an individual’s data—notes, photos, manuscripts, Web-collected information—in a single permanent, chronological file. But Lifestreams converted the documents into a standard format, so they lost their visual uniqueness.

As the information being navigated and collected by computers becomes increasingly complex, it may turn out that two dimensions are not enough in which to represent it.

A number of companies have been working on 3-D interfaces for a long time. Such interfaces allow more flexibility in displaying information, permitting the images that represent information to look more natural, letting them rotate in space or overlap with transparency and dimension as clues to their position in space and size (and, therefore, their relationships or importance).

The first of these interfaces are 3-D representations translated to a 2-D display. A good example of this type of interface is Project Looking Glass, from Sun Microsystems Inc., in Santa Clara, Calif. [see photo, “Through the Looking Glass”]. It permits multiple documents to be viewed simultaneously, but instead of placing them as if they are standing straight up on the screen, as is typical in most interfaces today, Project Looking Glass tips the documents back, as if they are lying on a slanted drawing table. As a result, many more documents are visible at the same time than is possible in a traditional presentation. Project Looking Glass’s display has perspective: documents toward the bottom of the screen are considered foreground and are larger; the user can push documents back, making them appear farther away and smaller but still visible. Documents can be “turned over,” and the user can add notes on the back. This interface has been placed under an open-source license by Sun, and a development community is now gathering around it.

True 3-D interfaces are still very expensive, but they do exist. One example is the Perspecta Spatial 3D System, from Actuality Systems Inc., in Bedford, Mass. The system includes a 51-centimeter dome, which displays full-color and full-motion images that occupy a volume in space. This means the user can look at a 3-D image in 3-D space without special glasses and can interact naturally with the image in real time.

People who use special applications for medical research, radiation oncology, and petroleum exploration are the only ones who currently benefit from such true 3-D interfaces, but these devices may eventually migrate to ordinary computer users. Ravin Balakrishnan, a professor of computer science at the University of Toronto, is working on appropriate interfaces for such a world, perhaps based on the mapping of users’ hand gestures and their interaction with virtual 3-D volumes.

In the future, user interfaces may go beyond the visual to the tactile. SensAble Technologies Inc., in Woburn, Mass., offers a device that looks like a pen attached to a robotic arm. It allows users to touch and manipulate virtual objects with varying degrees of precision, depending on the application and the user’s budget. Through force feedback, this haptic interface lets the user explore virtual objects as if they were physically present and being examined by actual touch. SensAble has research projects with a number of commercial and university laboratories.

Although SensAble’s interface is designed for high-end business users, haptic interfaces will quickly enter the consumer realm. In February, at Demo@15, an industry conference in Scottsdale, Ariz., Novint Technologies Inc., of Albuquerque, N.M., showed Falcon, a consumer haptic interface. Novint expects its interface to be used with games soon, at prices as low as US $99.

Such revolutionary new interfaces are steadily moving into users’ hands. Some will catch on; most will fade away.

But none of the more exotic ideas discussed are mainstream yet: they are all experiments. The prize may go to the style of interface that gets adopted by a mainstream vendor, like Microsoft, Apple, or Google, and then proliferated to everyone’s device. Or a tiny vendor that no one has ever heard of before may introduce an interface so seductive that no one can live without it. The evolution of the computer user interface is mainly still ahead of us.

About the Author

AMY D. WOHL has been analyzing, speaking, writing about, and consulting for the information industry for nearly 30 years. She is president of Wohl Associates, in Narberth, Pa., a consulting firm established in 1984. She is also editor and publisher of Amy D. Wohl’s Opinions, a weekly electronic newsletter, and she maintains a weblog.

To Probe Further

For more information on the interfaces discussed, see the Web sites of Actuality, Cloudmark, EverNote, Factiva, Groxis, IBM WebFountain, Inxight, and Microsoft Windows Vista.

Also, see Sun’s Project Looking Glass site and the Xerox PARC site.
