Is the Keyboard Going the Way of the Typewriter?

Styluses, eye trackers, contact-lens displays, and devices that capture gestures in the air were shown at the Consumer Electronics Show


Legend has it that the standard distance between the rails in a railroad—the gauge, that is—is 4 feet, 8 and a half inches because that’s what it was in England; it was that distance in England because that was the gap between the ruts in the roads that the Romans built; and the ruts are that far apart because that’s the width between the wheels of a Roman war chariot.

It’s not true, of course, because the Romans didn’t even bring war chariots to England; they had stopped fighting wars with chariots hundreds of years before they ever got to Londinium, as they called it. Still, it’s an interesting idea, precisely because it’s believable—standards live forever.

In particular, sometimes it seems like we’re going to be stuck with the keyboard forever. This weird way of putting our thoughts into visible words goes back at least three centuries. The first patent for one was by Henry Mill—in England, of course—in 1714, and the first working typewriter was made in Italy by Pellegrino Turri in 1808. Interestingly enough, he made it for a blind lady friend, so apparently touch-typing predates hunt-and-peck. The Sholes & Glidden Type Writer brought us the QWERTY arrangement of letters in 1873.

Typing is still very much with us, but according to one of the 140 000 attendees of the giant Consumer Electronics Show in Las Vegas last week, the keyboard itself may go the way of the typewriter and the war chariot.

Sally Wiener Grotta is a freelance journalist, professional photographer, and historiographer who started writing for Spectrum last fall. Her American Hands photography project has been widely exhibited and showered with awards and grants. She joins us by phone from Philadelphia. Sally, welcome to the podcast.

Sally Wiener Grotta: Well, thank you, Steven—it’s a delight to be here.

Steven Cherry: Sally, maybe the standard width of a jet pack launcher will be 4 feet 8 and a half inches when we finally start to colonize the moon, but you saw some products at CES that led you to think the keyboard might not last for 2000 years. What caught your eye?

Sally Wiener Grotta: Well, first of all, I’m delighted to say that handwriting is back in fashion—just taking a pen or stylus in hand and actually writing to interact. There have been a number of studies showing that because of the way the human mind interacts with writing—the connection of the hand and the eye to the mind—we absorb information better and express it better than we do with a keyboard or by tapping on a screen. I saw a technology from Vision Objects and an N-trig stylus; they’re working together, and they’re also working separately, but the combination—such as on the Lenovo ThinkPad tablet—provides the ability to actually write margin notes onto documents. The device will recognize your handwriting, and it will be searchable—not just searchable after it’s been turned into type, but as handwriting. It also has handwriting recognition for mathematical formulae and drawn geometric shapes, converting them on the fly to digital form. Really cool technology—I want this one right now.

Steven Cherry: So you’re writing right on the ThinkPad itself? How does that work?

Sally Wiener Grotta: You have a stylus in hand, a capacitive stylus, and you write on the screen itself.

Steven Cherry: Ah, very good. You also saw a product where you just use a stylus on paper—how does that work?

Sally Wiener Grotta: A pen on paper! It’s like old-fashioned note-taking in school: You have a paper tablet with a pen or pencil, and you just take your notes. Then there’s this device above—it’s called a receiver—and it records all your strokes using depth sensing, mapping a 3-D relationship so it can recognize your handwriting, and it beams it to an iPad—no Androids at present, though they’re thinking about it. This is from Targus; it’s called the iNotebook, and it’s just a prototype, but they do say it’s going to be available in the summer for about $150. And the receiver has a memory buffer, so if you don’t have your iPad with you at the time, it will record your strokes wherever you are and then beam them via Bluetooth to your iPad when you’re connected again.
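
To make that buffer-and-forward behavior concrete, here is a minimal Python sketch of a receiver that holds strokes in memory and delivers them once a tablet is connected. The names, data types, and structure are invented for illustration; Targus has not published the iNotebook’s actual protocol or API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]          # pen position on the paper

@dataclass
class Stroke:
    points: List[Point] = field(default_factory=list)

class Receiver:
    """Records pen strokes and forwards them whenever a tablet link is up."""

    def __init__(self) -> None:
        self.buffer: List[Stroke] = []   # strokes captured while offline
        self.connected = False           # stands in for the Bluetooth link

    def record(self, stroke: Stroke) -> None:
        self.buffer.append(stroke)       # always capture, connected or not
        self.flush()

    def connect(self) -> None:
        self.connected = True
        self.flush()                     # catch up as soon as the tablet appears

    def flush(self) -> None:
        while self.connected and self.buffer:
            send_to_tablet(self.buffer.pop(0))

def send_to_tablet(stroke: Stroke) -> None:
    # Placeholder for the Bluetooth transfer; here we just report the stroke.
    print(f"sent stroke with {len(stroke.points)} points")

# Strokes taken while disconnected are held, then delivered on connection.
r = Receiver()
r.record(Stroke(points=[(0.0, 0.0), (1.2, 0.8)]))   # buffered: no tablet yet
r.connect()                                          # buffered stroke is sent
r.record(Stroke(points=[(2.0, 1.5)]))                # sent immediately
```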

Steven Cherry: That’s the one I want, I think.

Sally Wiener Grotta: Well, my handwriting is so bad, both devices will have to learn how I write.

Steven Cherry: Well, there is that issue. So you also saw some pretty interesting gesture-capturing technologies…

Sally Wiener Grotta: Yes, we do have true gesture input coming. There are a lot of negotiations going on behind closed doors at CES—I’m pretty sure at least Samsung, probably Sony, all the big-screen guys. The way it works is, there’s typically a device above the large screen—usually a TV, though I’m sure we’re going to have it in front of computer screens also—and it has a camera and a depth sensor, so again it’s mapping 3-D space. Typically you make a certain gesture, such as a shake of the hand—I think it depends on the technology and how it’s programmed—but something awakens the sensor to you, and then it will track your hand or body movement. For instance, SoftKinetic had a depth camera that can take input as close as 6 inches, though Microsoft predicted that about 21 inches would be the closest. And they were showing simple operations such as changing TV channels as well as detailed involvement with a children’s video game. Another company, called PrimeSense, had a demo in which a model wearing a tight-fitting leotard stood in front of a screen and chose, from the side of the screen, different articles of clothing to be shown on her body on the screen.
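
As a rough illustration of that wake-gesture-then-track flow, here is a small Python sketch. The frame format, the wave-detection rule, and all the names are invented for this example; they are not SoftKinetic’s or PrimeSense’s actual APIs, just one plausible way such a pipeline could be structured.

```python
import math
from typing import Iterable, List, Optional, Tuple

Hand = Tuple[float, float, float]    # (x, y, depth) of the tracked hand, in meters

def is_wave(recent_x: List[float], min_swings: int = 3, dead_zone: float = 0.02) -> bool:
    """Crude wave detector: count direction reversals of the hand's x position."""
    swings, direction = 0, 0
    for a, b in zip(recent_x, recent_x[1:]):
        step = b - a
        if abs(step) < dead_zone:
            continue                           # ignore jitter
        new_dir = 1 if step > 0 else -1
        if direction and new_dir != direction:
            swings += 1
        direction = new_dir
    return swings >= min_swings

def run(frames: Iterable[Optional[Hand]]) -> None:
    """Stay dormant until a wave is seen, then report the hand every frame."""
    awake = False
    recent_x: List[float] = []
    for hand in frames:
        if hand is None:
            continue                           # no hand found in this depth frame
        x, y, depth = hand
        if not awake:
            recent_x = (recent_x + [x])[-30:]  # keep roughly a second of history
            if is_wave(recent_x):
                awake = True
                print("sensor awake: tracking hand")
            continue
        print(f"hand at x={x:.2f} y={y:.2f} depth={depth:.2f} m")

# Synthetic frames: a side-to-side wave, then a slow push toward the screen.
wave = [(0.2 * math.sin(t / 2.0), 0.0, 1.0) for t in range(40)]
push = [(0.0, 0.0, 1.0 - 0.05 * t) for t in range(10)]
run(wave + push)
```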

Steven Cherry: Yeah, PrimeSense—they have a video on their website which is pretty cool. Besides this clothing thing, they show a couple watching TV, and I guess they’re sort of scrolling through a list of movies, as if on iTunes on their TV, and they’re just moving a hand as you would on an iPad, but the hand is moving in space.

Sally Wiener Grotta: Yes, it’s something like Minority Report, remember in the movie how he was always moving things around? We don’t have the projection of the icons in the 3-D space, but we’re manipulating in the air the icons on the screen—it’s really cool to see. I imagine we are going to have some ballets based on this.

Steven Cherry: Now, you mentioned Microsoft, and I guess they’re involved in this space through the Kinect.

Sally Wiener Grotta: Yes, exactly.

Steven Cherry: And SoftKinetic basically considers itself a kind of Kinect on steroids…

Sally Wiener Grotta: Yes, very well put. SoftKinetic was showing some very powerful demos of this. In the children’s game, with depth perception down to 6 inches, you could move your creatures in and out, close and far—push them into the background and pull them forward, and so on—playing within what felt like 3-D space. I imagine they were showing a children’s game probably because the complexity of adult games isn’t yet implemented, but I think that’s coming very, very soon.

Steven Cherry: So I’m still unclear about the 6 inches. Do we want the smaller the better or the larger the better?

Sally Wiener Grotta: The smaller the better, in the sense that the closer the camera can read, the larger the 3-D space you can move through within range of the screen. When you’re talking about children’s video games, or even adult video games, being able to reach deep into your 3-D space toward the screen means greater granularity in your control of your avatars, your creatures on the screen—say they’re in the background of the game and you want to pull them forward. And I can just see the animated gestures people could get into when they’re playing video games, where they reach in really close to the screen and pull.

Steven Cherry: So a smaller distance basically means a finer granularity—it’s reading even smaller gestures accurately.

Sally Wiener Grotta: I believe that’s how it works.

Steven Cherry: Very good. You also saw a company that’s using contact lenses for eye tracking.

Sally Wiener Grotta: Well, actually, there are two different issues here. One is, there’s a company that’s using contact lenses for a truly immersive experience. When I first saw it, I thought it was eye tracking, but you know how we’ve been seeing screens on glasses for decades now? They’ve now put them into contacts, so that supposedly—I didn’t volunteer for a demo on this one, and I don’t think they were offering it—these contacts have the screen projected right into your eyeballs. And they are claiming that in a game or movie you can have a truly immersive experience where you feel like you are in the screen, and supposedly you’re still able to see the real world. So I can imagine some schizophrenic kind of feeling with this. This is from Innovega, a start-up. I think it’s a prototype—it may not even be at the prototype stage yet—but it’s a fascinating concept, and I could see some people really wanting them. I don’t think I’d be comfortable with them; though I wear regular contacts, I don’t think I’d want to put screens into my eyes.

Steven Cherry: Well, I want this one.... Spectrum published a science fiction short story some years ago by Vernor Vinge where he envisions pretty much just that, and with sort of subvocal talking to yourself, you actually do Google searches and see the results in this little contact-lens head-up display.

Sally Wiener Grotta: Oh, the subvocal, I’m sure, will come. There was also eye tracking from Tobii, and they actually have a first eye-tracking game. They use pupil center corneal reflection—they’re actually reading your cornea. They’re showing it as a game, but I’m sure it will also be useful for data input and data manipulation. Before the session starts, they use a two-, five-, or nine-point calibration to record where your eyes are and how they move around. And then you just use your eyes, looking at different places on the screen, resting on an icon, and then pulling it along with your eye. This is very real—it’s in a real game—but I think we would have to be trained. Set against handwriting, which is so natural to us, I think eye tracking is going to be an artificial kind of environment that some people will take to well, and others just won’t be able to function with it.
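
For a sense of what that multi-point calibration might involve, here is a small Python/NumPy sketch that fits an affine map from an eye feature (such as the pupil-minus-glint vector used in pupil-center corneal-reflection tracking) to screen coordinates at five calibration points. The numbers and function names are invented for illustration; this is a textbook-style approach, not Tobii’s actual algorithm.

```python
import numpy as np

def fit_calibration(features: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Least-squares affine map A such that [fx, fy, 1] @ A ~= [screen_x, screen_y]."""
    design = np.hstack([features, np.ones((features.shape[0], 1))])   # N x 3
    A, *_ = np.linalg.lstsq(design, targets, rcond=None)
    return A                                                          # 3 x 2

def gaze_to_screen(A: np.ndarray, feature: np.ndarray) -> np.ndarray:
    """Map one measured eye feature to a point on the screen."""
    return np.append(feature, 1.0) @ A

# A five-point calibration: the corners and center of a 1920x1080 screen.
# The "measured" features here are a scaled, shifted copy of the targets,
# standing in for whatever the eye camera reports at each calibration dot.
targets = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080], [960, 540]], float)
features = targets * 0.001 + np.array([0.05, -0.02])

A = fit_calibration(features, targets)
print(np.round(gaze_to_screen(A, features[4])))   # roughly [960, 540], the screen center
```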

Steven Cherry: Fair enough. Sally, I mentioned your American Hands project at the top of the show. To paraphrase your website, it’s a visual celebration of the blacksmiths, weavers, spinners, glassblowers, bookbinders, rug makers, and so on among us. You take these gorgeous photographs that focus on their hands as they work their craft. I guess I’m curious whether you think keyboarding will go the way of blacksmithing: It’ll be a rare, if not completely dying, art practiced by professionals such as you or me, but for most people keyboarding will just be something that they see in period movies.

Sally Wiener Grotta: As a writer, I am horrified by the concept, because I have learned to compose on the keyboard—my mind has already been rewired to the keyboard—but I do believe it is so. I believe, actually, that direct screen input is inevitable, except for dinosaurs like us. I was just spending time with my nephew, who is one year old, and I can imagine he is not even going to understand the way a keyboard works, just as he would never understand a rotary dial on a phone.

Steven Cherry: That’s a good comparison. Sally, you have the eye of an anthropologist as well as a photographer, and I think that gave you a unique perspective on that giant trade show in Las Vegas. Thanks for sharing it with us today.

Sally Wiener Grotta: Thank you. I enjoyed it very much, Steven.

Steven Cherry: We’ve been speaking with freelance photographer and journalist Sally Wiener Grotta about some new ways we’ll be interacting with our computers, smartphones, and tablets in the years to come.

For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

Announcer: “Techwise Conversations” is sponsored by National Instruments.

This interview was recorded 17 January 2012.
Audio engineer: Francesco Ferorelli
Follow us on Twitter @techwisepodcast

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
