If you’re reading this article on your computer, there’s a good chance you won’t get all the way to the end. Not because you won’t find it utterly fascinating (trust me!), but because it will be hard on your eyes.
It’s not sentimentality that makes most people prefer reading books and magazines to squinting at their laptops. The quality of computer text is awful. It doesn’t have to be.
The chief problem is the low resolution of computer screens. The color LCD screens on most laptops and desktops today have a resolution of only about 100 pixels per inch. You need at least two or three times that many pixels to begin to approach the quality of the printed page. The output of even a cheap laser printer is six times as good.
What’s more, screen resolutions have hardly budged in the last several years, for a variety of reasons. For one, you’d need a lot more computational power to make a difference you could easily see on your screen. Moving from 100 ppi to 200 ppi, for instance, means your computer would have four times as many pixels to fill, and that in turn would probably bog down your graphics processor or, in a laptop, quickly drain your battery. Moore’s Law will eventually give us faster chips, and new integrated-circuit designs are getting more power-efficient. But making computer displays with higher pixel densities is also costly, because you’re more likely to get dead pixels during manufacturing.
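The pixel arithmetic above is easy to verify: doubling the linear resolution quadruples the total pixel count. A quick sketch, using an illustrative 12-by-9-inch display area (an assumption for the example, not a figure from the article):

```python
# Total pixel counts for one display area at two pixel densities.
# The 12 x 9 inch active area is an illustrative assumption.
width_in, height_in = 12, 9

for ppi in (100, 200):
    pixels = (width_in * ppi) * (height_in * ppi)
    print(f"{ppi} ppi -> {pixels:,} pixels")

# Doubling density from 100 to 200 ppi multiplies the pixel
# count by (200 / 100) ** 2 = 4.
```

That factor of four applies to every frame the graphics processor has to fill, which is why the power and performance cost grows so quickly.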
The resolution gap didn’t matter much in the days when most people never had to read long or detailed material on their computers. But avoiding the screen is becoming less and less realistic, as publishers, companies, and other organizations increasingly turn to the Web to put out their articles, manuals, and other original content. Of course, many people can and do print the material before reading it. But printing isn’t a very efficient option, and it’s at odds with longer-term trends in publishing (to say nothing of longer-term trends in forestry).
Analysts envision that in coming years, readers will download more and more text to compact, handheld devices, enjoying instant, inexpensive, mobile, and customized access to news, books, e-mail, reviews, directions, and all the other tidings of everyday life. In other words, we’ll all be spending more and more time reading onscreen.
As it is, the average office worker already spends the better part of the workday at the computer. It makes good economic sense, then, to improve the reading experience. Jakob Nielsen, the Web usability guru, has calculated that boosting a worker’s onscreen reading speed by just 10 percent would save his or her employer about US $2000 each year, through added productivity.
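Nielsen’s figure is a back-of-envelope productivity calculation. Here is one way such an estimate can be reconstructed; the salary and reading-share numbers are illustrative assumptions of mine, not Nielsen’s actual inputs:

```python
# Back-of-envelope version of the productivity claim.
# The salary and reading-share figures below are illustrative
# assumptions, not data from Nielsen's calculation.
annual_salary = 50_000   # fully loaded cost of one office worker, USD
reading_share = 0.40     # fraction of the workday spent reading onscreen
speedup = 0.10           # 10 percent faster onscreen reading

savings = annual_salary * reading_share * speedup
print(f"Estimated annual savings per worker: ${savings:,.0f}")
```

With those inputs the estimate lands at $2000 per worker per year; different salary or reading-time assumptions would shift it proportionally.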
So, given that the need for a better onscreen reading experience is there, but not the pixel density, what’s to be done? Plenty. Research groups like the Advanced Reading Technologies team at Microsoft, of which I’m a member, are working hard on the problem. Type designers, cognitive psychologists, and computer scientists and engineers are collaborating to figure out ways to present text that is both aesthetically pleasing and easy to read and comprehend. The secret to better computer text, it turns out, is a heady mixture of art, science, and technology.
You may not even realize how much time you spend reading from your computer screen. You might be surfing the Web, scanning e-mail, composing a report, filling in a spreadsheet, or writing software, but what you’re really doing is reading. In fact, reading is probably the task we perform most frequently on computers. Spend too much time staring at your monitor, though, and the inevitable happens: your eyes hurt, your head aches, and your thoughts may even fog up. Eventually, you reach a point where you can’t read any further.
What is the longest document that you’re willing to read on screen? Five years ago, many people would print out anything longer than a page. These days, they may read on for several pages without feeling the need to print. The way text is presented onscreen has gotten better—not perfect, or even very good, but better.
One of the main improvements is in type design. Over centuries, type designers came up with an assortment of visual tricks to appeal to the human eye and get people to keep reading. One key quality they strive for is symmetry. Readers will perceive letters and words that aren’t symmetrical as ugly—even if they don’t notice this asymmetry consciously, it will still degrade their reading experience on some level. To achieve a symmetrical look, font designers try to keep the stem widths of letters and the spacing between letters as even as possible.
Designers also have tricks for specific letters. In grade school, for example, you were probably taught that capitals are supposed to reach from the baseline to the capital line and no further. But type designers usually ignore this rule. Rounded letters—such as O, C, and Q—often reach above the capital line and below the baseline [see figure, “Tricks of Type”]. Without that extra padding, those letters would look smaller than the rest.
Or take the letter X. What could be simpler than those two intersecting diagonals? But a typed X with perfectly straight lines looks strange—specifically, the top right arm will appear misaligned with the lower left leg [see figure, “The Secret of X”]. To compensate, type designers shift the top right arm slightly downward to give the appearance that it’s in line with the left leg.
Back when typesetting was still done by hand, letter by letter, and space by space, designers rendered such tricks in metal dies. When personal computers came along, the cramped, low-resolution monochrome CRT monitors didn’t allow for much nuance in type design, and reading from those early screens was pretty grueling.
In the 1980s, with the advent of graphical user interfaces and desktop publishing, things started to improve. Of course, taking typographic rules that had been developed over centuries for the printed page and rendering them in pixels on the screen wasn’t, and still isn’t, straightforward. At first, font producers represented each letter with a unique bitmap—essentially, a bunch of rows and columns of pixels that were either on or off. A digital font consisted of thousands of these bitmaps for every letter, number, and symbol and for every point size, style, and resolution. The Japanese font set MS Mincho, for example, contains about 128 000 embedded bitmaps.
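A bitmap glyph of the kind described above is nothing more than a fixed grid of on/off pixels. This toy 5-by-7 letter “A” is hand-drawn for illustration; a real bitmap font stores one such grid for every letter, at every point size, style, and resolution, which is why sets like MS Mincho ballooned to tens of thousands of embedded bitmaps:

```python
# A bitmap glyph is just rows of on/off pixels. This hand-drawn
# 5 x 7 "A" is illustrative, not taken from any real font.
GLYPH_A = [
    "..#..",
    ".#.#.",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#...#",
]

for row in GLYPH_A:
    print(row)
```

Wanting the same letter one point larger means storing an entirely separate grid, with no way to interpolate between sizes.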
Today’s computer fonts are more streamlined. Rather than individual bitmaps, they rely on scalable outlines of each letter. To display a letter onscreen, the font software running on your microprocessor takes the letter outline, automatically scales it up or down to the desired size, and then creates a bitmap of the letter on the fly. It takes at most 20 milliseconds from the time you strike the key to the time the letter is displayed. It’s faster and saves on disk space to create an outline for each letter, rather than calling up a discrete bitmap from memory every time you need to display that letter [see figure, “Getting Pixelated”].
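The key idea is that the outline is stored once, in a resolution-independent coordinate space (“font units”), and mapped to device pixels on demand. A minimal sketch of that scaling step, using a toy triangle outline rather than a real glyph (the 2048-units-per-em figure is a common TrueType convention):

```python
# Minimal sketch of outline scaling: a glyph outline lives in a
# resolution-independent "font unit" space and is scaled to device
# pixels on the fly. The triangle outline is a toy stand-in for a glyph.
UNITS_PER_EM = 2048   # a common choice in TrueType fonts

# Three control points of a toy outline, in font units.
outline = [(0, 0), (1024, 2048), (2048, 0)]

def scale_outline(points, point_size, dpi=96):
    """Map font-unit coordinates to device pixels for a given point size."""
    pixels_per_em = point_size * dpi / 72   # 1 point = 1/72 inch
    factor = pixels_per_em / UNITS_PER_EM
    return [(x * factor, y * factor) for x, y in points]

# 12-point text on a 96 dpi screen: 16 pixels per em.
print(scale_outline(outline, 12))
```

The same stored outline serves every size and resolution; only the scale factor changes before the bitmap is generated.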
The downside to using scalable outlines is that the resulting bitmap can contain rounding errors; one vertical stroke, or stem, on an "m" might come out to be two pixels wide, while the other stems are one pixel. To correct these rounding errors, the font software includes instructions, known as hints, that subtly adjust the letter’s outline so that the resulting bitmap is even and legible. Some hints are applied across a font so that, for instance, all of the lowercase letters have the same height and all of the stems of the letters have the same thickness. Other hints are specific to a letter, or even to a letter at a specific size [see figure, “Hint, Hint,” for examples of a letter before and after hinting].
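The stem problem can be shown in a few lines. In this sketch, three stems of an “m” each measure 1.4 pixels wide in the scaled outline, yet rounding each edge to the pixel grid independently leaves them unequal; a crude stand-in for a hint then forces all three to a common width (the edge positions are illustrative, and real hinting is far more sophisticated):

```python
# Sketch of the rounding problem hints solve. Each (left, right) pair
# is a stem's edges in fractional pixel coordinates; all three stems
# are 1.4 pixels wide in the scaled outline. Positions are illustrative.
stem_edges = [(0.0, 1.4), (3.1, 4.5), (6.2, 7.6)]

# Naive rasterization: round each edge to the grid independently.
naive = [round(r) - round(l) for l, r in stem_edges]
print("unhinted stem widths:", naive)

# A crude "hint": snap every stem to one common rounded width.
target = max(1, round(sum(r - l for l, r in stem_edges) / len(stem_edges)))
hinted = [target for _ in stem_edges]
print("hinted stem widths:  ", hinted)
```

Identical strokes in the outline come out at different widths after naive rounding, which is exactly the unevenness the eye picks up on; the hinted version trades a little geometric accuracy for visual consistency.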
In the early days of computer fonts, each hint had to be programmed by hand; there might be an instruction to tell a single pixel to turn on or off for a specific letter at a given resolution. Some fonts in Microsoft Windows 3.1, for instance, took up about 25 000 lines of code, much of it related to hinting. With time, font hinting has become more streamlined. These days, font-hinting algorithms can dramatically cut the amount of time needed to develop a new font. Programming the initial hinting for the Windows 3.1 fonts took about 18 months and involved 18 type designers and about 6 engineers. Today one designer can hint a high-quality typeface in two months or less.