In fifth grade, while Rob Sinclair was tutoring children with learning disabilities, he discovered a lesson that would shape his career. “I started to understand that there were people who learned in different ways,” he says, “that people with different abilities had completely different requirements.”

Sinclair soon realized that disabilities could really set people back in today’s world, where technology infuses our daily lives. For those who have difficulty using a mouse, seeing, or hearing, even such straightforward computer tasks as checking a bank balance or sending e-mail can be challenging. But he found that technology can also help those people, transforming their lives—if it is applied carefully and thoughtfully.
As director of the accessible technology group at Microsoft, Sinclair is now in a position to improve millions of people’s lives. Since becoming director in 2005, he has been spearheading the company’s efforts to make computer software and devices more usable for people with physical or learning disabilities. Under his leadership, Microsoft has packed the Windows Vista operating system—which is scheduled to be released this month—with beneficial new features, including enhanced screen magnification, voice control, and dictation, plus improved compatibility with third-party assistive technology products.
But Sinclair has a loftier long-term goal for assistive technology: making computers more user-friendly and accessible for everyone, whether or not a person has a disability. “Today we humans continually adapt ourselves to the technology that we’re using,” he says. Instead, the goal should be “that the technology should learn how to adapt to humans.”
Sinclair grew up in Irving, Texas, and got bachelor’s and master’s degrees in computer science at New Mexico State University, in Las Cruces. After getting his master’s in 1997, he started to build a broad set of skills through various software development and management positions—including writing training software for the U.S. Air Force—before joining Microsoft’s premier support group, which provides business and technical assistance to customers. He moved to the accessible technology group a year later as a program manager and went on to hold various positions running the design and development teams.
In 2004, Sinclair’s passion for nature and wildlife photography led him to switch to Microsoft’s digital photography group, and he worked in that unit until the company asked him to return to the accessible technology group as its director.
He says he clearly remembers the first time he saw computers changing the life of someone with a disability—one of his college professors, who suffered from Parkinson’s disease. The professor was using an assistive technology input device, one similar to the screen of today’s tablet computers, but clunky, expensive, and not portable. It fed data into a computer sitting on the professor’s desk and converted his gestures into actions such as mouse clicks or print commands. Soon after joining Microsoft, Sinclair realized that the system his professor had used was unnecessarily complicated, in addition to being unwieldy.
Windows and other operating systems such as Linux and Mac OS X have long had some built-in accessibility features—including screen magnifiers, text-to-speech converters, and keyboard control of the mouse—but these features are often insufficient on their own to meet the needs of users with particular disabilities. Many such users therefore rely on third-party software and assistive technology devices for a richer set of features.
Back in the early 1990s, there was no standard method for a tablet input device or any third-party assistive technology device to easily communicate with, say, an e-mail program. Instead, operating system and assistive technology developers spent a lot of time and resources making the two compatible on a piecemeal basis.
Sinclair is hoping to bridge that gap with the Microsoft User Interface Automation model, which he helped mastermind. The first version is included in Vista; through it, he says, the operating system and applications exchange information in a standard way that lets any assistive technology talk to any software application.
For example, a screen reader can ask a word processor what is happening on the screen and read it out loud using synthesized speech. Or, using speech-recognition software, you can talk to your computer and command it to open an e-mail application, transcribe a dictated message, and send it.
“The idea about this is that there has to be some common way of exposing information from an application so that other applications can get to it,” Sinclair says. “It allows developers with special expertise to build the speaking application and developers who really understand e-mail to build the e-mail application.”
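The common-interface idea Sinclair describes can be sketched in miniature. The toy model below is purely illustrative and is not the actual UI Automation API: the names (`UIElement`, `walk`, `screen_reader`) and the tree structure are invented for this example. The point is only that when every application exposes its interface in one shared format, a single assistive tool can inspect any of them without app-specific code.

```python
# Hypothetical sketch of the "common interface" idea behind UI Automation:
# each application exposes its UI as a tree of elements in a shared format,
# so an assistive tool can inspect any app without app-specific code.
# All names here are illustrative, not the real Windows API.

class UIElement:
    """A node in an application's UI tree, exposed in a common format."""
    def __init__(self, role, name, children=None):
        self.role = role          # e.g. "window", "button", "text"
        self.name = name          # human-readable label
        self.children = children or []

def walk(element, depth=0):
    """Generic traversal any assistive tool could run on any application."""
    yield depth, element
    for child in element.children:
        yield from walk(child, depth + 1)

def screen_reader(root):
    """Produce spoken-style descriptions from any app's UI tree."""
    return [f"{'  ' * d}{e.role}: {e.name}" for d, e in walk(root)]

# An "e-mail application" exposes the shared interface...
mail_app = UIElement("window", "Inbox", [
    UIElement("button", "New message"),
    UIElement("text", "3 unread messages"),
])

# ...so a generic reader can describe it without knowing its internals.
for line in screen_reader(mail_app):
    print(line)
```

In this arrangement, the screen-reader developer never sees the e-mail application's code; both sides only agree on the shared element format, which is the division of labor Sinclair describes.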
Sinclair says that user interface (UI) automation is one of the most significant technologies he has worked on. It could not only increase compatibility between PC-based assistive technology devices and software applications but also provide easier, more consistent access across different computing platforms. Sinclair is now in talks with other OS developers to broaden UI automation into a standard.
Eventually, through UI automation, Sinclair is looking to expand accessible design into what he calls “design for all.” After all, most people, whether or not they have a disability, can reach a point where they are not at 100 percent of their abilities because of factors such as age, injury, or fatigue. “If we’ve been reading for 15 hours, our eyes are probably getting tired, so we start to slide down the scale in terms of visual acuity,” says Sinclair, who got firsthand experience using Vista’s new speech-recognition software while he was recovering from a shoulder injury a few months ago.
Creating user-friendly technologies to ease a person’s tasks is not new to Sinclair. His entrepreneurial skills kicked in when he was starting graduate school in 1995. He teamed up with three other people to create a consulting company that built customized software for people and businesses. His work involved visiting a workplace to understand its work flow and then streamlining it with the right mix of technologies. For his master’s, he specialized in usability and user-centric design.
Designing user-centric software and assistive technologies draws upon the same principles even though the applications are very different, Sinclair says. “You’re trying to find a way of optimizing the input and output of the system for the human who is interacting with the technology so that he or she can get the work done quickly and efficiently and then move on to something else.”
One day, he says, he hopes to create an intelligent computer system that can adapt to every user’s needs or preferences. As an example, he describes a situation in which he is outside on a sunny day and is having difficulty seeing the display on a portable computer screen. He would like the device to sense the sun’s glare and immediately “start speaking [about something on the screen] instead of just showing me.”
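The kind of adaptive system Sinclair imagines pairs a sensor reading with a choice of output modality. The sketch below is purely hypothetical: the function name, the light-sensor input, and the thresholds are all invented to illustrate the idea, and no real device API is implied.

```python
# Illustrative-only sketch of an adaptive interface: the device samples
# ambient brightness and picks an output modality accordingly. The
# thresholds and names here are hypothetical, not a real device API.

def choose_output(ambient_lux):
    """Pick how to present content given a measured ambient brightness.

    ambient_lux: hypothetical light-sensor reading
    (typical indoor lighting ~ 100-500 lux; direct sun ~ 100000 lux).
    """
    if ambient_lux > 10000:            # glare overwhelms the display
        return "speech"                # read the content aloud instead
    elif ambient_lux > 1000:           # bright but still usable
        return "high-contrast display"
    return "display"

print(choose_output(120))      # indoor lighting -> "display"
print(choose_output(100000))   # direct sunlight -> "speech"
```

The design choice mirrors Sinclair's description: rather than the user adapting to the device (shading the screen with a hand), the device adapts its output to the user's situation.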
But he is realistic about the goal of creating intelligent, accessible computer systems, acknowledging that they are a long way off. His team at Microsoft can play a key role in leading the effort, but he says achieving the goal will take cooperation across the entire technology industry and the research community.
About the Author
Prachi Patel-Predd, a regular contributor to IEEE Spectrum, is a freelance writer who covers technology, energy, and the environment.