Generation Smartphone

The smartphone’s role as constant companion, helper, coach, and guardian has only just begun

Photo: Dan Saelinger; Stylist: Maria-Stefania Vavylopoulou

It’s the year 2020 and newlyweds Tom and Sara are expecting their first child. Along with selecting the latest high-tech stroller, picking out a crib, and decorating the nursery, they download the “NewBorn” application suite to their universal communicator; they’re using what we’ll call a SmartPhone 20.0. Before the due date, they take the phone on a tour of the house, letting the phone’s sensors and machine-learning algorithms create light and sound “fingerprints” for each room.
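
A minimal sketch of how that room-fingerprinting step might work: summarize each room as a small feature vector of ambient light and sound measurements, then classify a new reading by nearest neighbor. Everything here, from the feature choices to the stored values, is invented for illustration, not the actual NewBorn design.

```python
import math

# Hypothetical fingerprints: [mean light level (lux), low-band audio
# energy, high-band audio energy]. All values are invented.
ROOM_FINGERPRINTS = {
    "nursery": [120.0, 0.8, 0.1],
    "kitchen": [310.0, 2.5, 1.9],
    "living room": [200.0, 1.4, 0.6],
}

def classify_room(reading):
    """Return the room whose stored fingerprint is nearest in Euclidean distance."""
    best_room, best_dist = None, float("inf")
    for room, fingerprint in ROOM_FINGERPRINTS.items():
        dist = math.dist(reading, fingerprint)
        if dist < best_dist:
            best_room, best_dist = room, dist
    return best_room

print(classify_room([125.0, 0.7, 0.2]))  # -> "nursery"
```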

When they settle Tom Jr. down for his first nap at home, they place the SmartPhone 20.0 in his crib. Understanding that the crib is where the baby sleeps, the SmartPhone activates its sudden infant death syndrome (SIDS) application and uses its built-in microphone, accelerometers, and other sensors to monitor little Tommy’s heartbeat and respiration. The “Baby Position” app analyzes the live video stream to ensure that Tommy does not flip over onto his stomach—a position that the medical journals still report contributes to SIDS. Of course, best practices in child rearing seem to change quickly, but Tom and Sara aren’t too worried about that because the NewBorn application suite updates itself with the latest medical findings. To lull Tommy to sleep, the SmartPhone 20.0 plays music, testing out a variety of selections and learning by observation which music is most soothing for this particular infant.
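
A crude version of the respiration monitor is easy to imagine: track the variability of the chest-motion signal in a sliding window and raise an alarm when it flatlines. The sample rate, window length, and threshold below are assumptions for the sketch, not values from any real monitor.

```python
from statistics import pstdev

SAMPLE_RATE_HZ = 50        # assumed accelerometer sampling rate
WINDOW_SECONDS = 10        # how long a "flat" signal must persist
MOTION_THRESHOLD = 0.005   # variation (in g) below which we alarm (invented)

def breathing_detected(samples):
    """True if the chest-motion signal varies more than the stillness threshold."""
    return pstdev(samples) > MOTION_THRESHOLD

def monitor(stream):
    """Consume accelerometer samples; alarm after WINDOW_SECONDS of stillness."""
    window, size = [], SAMPLE_RATE_HZ * WINDOW_SECONDS
    for sample in stream:
        window.append(sample)
        if len(window) > size:
            window.pop(0)
        if len(window) == size and not breathing_detected(window):
            return "ALARM: no respiration motion detected"
    return "ok"

still = [0.0] * (SAMPLE_RATE_HZ * WINDOW_SECONDS)
print(monitor(still))  # -> "ALARM: no respiration motion detected"
```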

As a toddler, Tommy is very observant and has learned the combination on the gate to the swimming pool area. One day, while his parents have their backs turned, he starts working the lock. His SmartPhone “Guardian” app recognizes what he is doing, sounds an alarm, disables the lock, and plays a video demonstrating what could happen if Tommy fell into the pool with no one else around. Not happy at being thwarted, Tommy throws a tantrum, and the Guardian app, noting his parents’ arrival, briefs them on the situation and suggests a time-out.

While this scenario is, of course, science fiction, many of the technologies I’m describing exist today in research labs or even in app stores. So the reality of a SmartPhone 20.0, along with its envisioned NewBorn suite, is not far off.

Geo-fencing, for example, is already a standard part of the iPhone operating system, and several smartphone apps use GPS to identify the user’s location and bring up targeted advertising. Research on improving indoor location accuracy, along with software that “fingerprints” ambient light and sound, will soon let such apps identify individual rooms in a house. Today, wearable SIDS monitors, such as the Halo from the South African company Snuza, detect a baby’s breathing motion and vibrate when it stops to stimulate the baby to take another breath. And researchers at the Technical University of Munich and elsewhere are testing emotion recognition from audio.
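
Stripped to its essentials, a geofence is just a distance test between the phone’s position and a stored center and radius. A self-contained sketch, with the home coordinates and radius invented for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical fence: 100 m around a home at these (invented) coordinates.
HOME = (40.4433, -79.9436)
RADIUS_M = 100.0

def inside_fence(lat, lon):
    return haversine_m(lat, lon, *HOME) <= RADIUS_M

print(inside_fence(40.4434, -79.9435))  # -> True (a few meters from home)
```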

But the SmartPhone 20.0 won’t be just a high-tech baby monitor. Rather, it and smart mobile devices like it will serve as nanny, nurse, or golf caddy—the perfect assistant for people of all ages. If you think people can’t seem to make a move without consulting their phones today, well, you ain’t seen nothing yet.

Let’s age Tommy to 3 years old. Tom and Sara take him skiing for the first time. Tommy’s SmartPhone, now version 23.0, downloads the “Virtual Skiing Coach,” which uses accelerometers sewn into Tommy’s clothing to sense his posture and then offer suggestions for maintaining balance; when it foresees an impending collision, it quickly blurts out instructions on how to stop. We already have basic sensor-based virtual coaches. For example, the InForm Exercise Coach for knee osteoarthritis [PDF] uses accelerometers and gyroscopes to track motion during rehabilitation exercises and correct errors. Such coaches would enable therapists to remotely monitor home-based exercise, making it easier for seniors to remain at home as they age and reducing health-care costs. These virtual coaches learn, so the longer people use them, the better they work.
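
The core of such a balance coach can be sketched as a tilt check on the gravity vector reported by a torso-mounted accelerometer. The threshold and the advice strings below are invented for illustration, not taken from the InForm coach or any shipping product.

```python
import math

LEAN_WARNING_DEG = 20.0  # invented threshold for "leaning too far"

def tilt_degrees(ax, ay, az):
    """Angle between the sensed gravity vector (in g) and the vertical z axis."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(az / g)) if g else 0.0

def coach(ax, ay, az):
    angle = tilt_degrees(ax, ay, az)
    if angle > LEAN_WARNING_DEG:
        return f"Leaning {angle:.0f} degrees off vertical: bend your knees, weight forward"
    return "Posture OK"

print(coach(0.3, 0.0, 0.9))  # sample torso reading, about 18 degrees of tilt
```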

At age 5, with the SmartPhone 25.0 education apps, Tommy has become a curious and eager learner. He looks forward to his first day at kindergarten. He meets Alice, who can neither hear nor speak, but because of her SmartPhone, she is able to easily participate in class. Alice greets Tommy by signing, and her SmartPhone plays a translation provided by the American Sign Language (ASL) app. Tommy responds, and Alice’s speech-recognition app provides her with real-time captioning. Tommy shares his favorite song with Alice, sending it from his SmartPhone to hers, which translates the music into signals for vibration motors in a vest she wears.

Back in 1997, Thad Starner of Georgia Tech put a camera in the bill of a baseball cap to enable a computer to interpret a user’s ASL gestures and display the results on a smartphone; no such product has yet been released commercially. But some ASL apps already exist: For example, the SmartSign from Georgia Tech, designed for hearing parents of deaf children, allows them to call up a video of a sign by speaking the English word. The VibeAttire vest, which lets wearers feel music through vibrating motors sewn into the fabric, debuted at the 2010 International Consumer Electronics Show, or CES.
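
The music-to-vibration mapping in a vest like VibeAttire can be approximated by splitting each audio frame’s spectrum into one band per motor and driving each motor with its band’s share of the energy. The motor count and frame size below are assumptions for the sketch, not the product’s actual design.

```python
import numpy as np

N_MOTORS = 8    # vibration motors in the vest (assumed)
FRAME = 1024    # audio samples per frame (assumed)
RATE = 44100    # audio sample rate in Hz

def frame_to_motor_levels(samples):
    """Map one audio frame to per-motor drive levels in [0, 1].

    The spectrum is split into N_MOTORS frequency bands; each motor's
    level is that band's fraction of the frame's total energy.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    bands = np.array_split(spectrum, N_MOTORS)
    energy = np.array([band.sum() for band in bands])
    total = energy.sum()
    return energy / total if total > 0 else np.zeros(N_MOTORS)

# Example: a 440 Hz tone drives mostly the lowest-frequency motor.
t = np.arange(FRAME) / RATE
print(frame_to_motor_levels(np.sin(2 * np.pi * 440 * t)).round(2))
```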

One day, Tommy is walking home from school, and the SmartPhone 27.0 Guardian app notices that a stranger has started a conversation with him and is coaxing Tommy to get into a van. The Guardian app whispers in Tommy’s ear not to talk to the stranger and tells him to run to a nearby house, one the app has already verified as a local kid-safe house and where it has confirmed someone is home. The Guardian app takes a picture of the stranger and the license plate of his van and forwards the information to the police.

First Person Vision, introduced at the 2011 CES, uses video taken by wearable cameras and smartphones to identify gestures, actions, and faces in real time. It’s not much of a stretch to envision it alerting users to threats.

For Tommy’s 16th birthday, his parents download the “Driving Instructor” app. Of course, by 2036 cars have many safety features, but they still require the driver to take over in emergencies, so Tommy still needs a driver’s license. Under the tutelage of the app, Tommy becomes an excellent driver; his parents trust that they’ll be alerted if he starts driving recklessly.

These kinds of driver-monitoring tools are now in the lab. For example, the DriveCap project at the Quality of Life Technology Center, in Pittsburgh, run by Carnegie Mellon University and the University of Pittsburgh, uses in-car sensors to track driver behavior (accelerometers can detect erratic maneuvers and sudden changes in braking and acceleration) and the driver’s cognitive load—that is, how attentive, tired, or overwhelmed the driver is—by focusing a camera on the eyes.
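
One simple proxy for the erratic maneuvers such systems watch for is jerk, the rate of change of acceleration. A sketch with an invented sampling interval and threshold, not DriveCap’s actual algorithm:

```python
SAMPLE_DT = 0.1    # seconds between accelerometer readings (assumed)
JERK_LIMIT = 8.0   # m/s^3 beyond which a maneuver is flagged (invented)

def flag_erratic(accel_series):
    """Return indices where |jerk| between consecutive readings exceeds the limit."""
    flags = []
    for i in range(1, len(accel_series)):
        jerk = (accel_series[i] - accel_series[i - 1]) / SAMPLE_DT
        if abs(jerk) > JERK_LIMIT:
            flags.append(i)
    return flags

# Smooth driving (in m/s^2), then a sudden hard brake at the last sample.
print(flag_erratic([0.2, 0.3, 0.2, 0.1, 0.2, -3.5]))  # -> [5]
```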

Years later, Tom Jr.’s SmartPhone (upgraded, of course, many times over the years) continues to be a trusted companion. On a business trip, the “Administrative Assistant” app reminds Tom of people’s names and their connections to him; this is an easy-to-imagine extension of First Person Vision. Tom has an appointment in a large building complex, which has a confusing maze of corridors and bridges between buildings. Tom’s SmartPhone snaps pictures for comparison to an archive of pictures of different parts of a building; that’s something the First Person Vision app already does. By locating his position on a floor plan and knowing his destination, the “Building Navigation” app can efficiently guide him to his meeting. Applications like this already exist; the simplest are based on indoor maps developed by Google Places for Business.
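
Once the phone has matched its camera view to a position on the floor plan, guiding the user reduces to shortest-path search over a graph of corridors and junctions. A breadth-first sketch over an invented building layout:

```python
from collections import deque

# Hypothetical floor-plan graph: nodes are corridors and junctions,
# edges are walkable links between them. Names are invented.
FLOOR_PLAN = {
    "lobby": ["hall-A", "elevator"],
    "hall-A": ["lobby", "bridge"],
    "elevator": ["lobby", "hall-B"],
    "bridge": ["hall-A", "room-214"],
    "hall-B": ["elevator"],
    "room-214": ["bridge"],
}

def route(start, goal):
    """Shortest path by hop count, via breadth-first search."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FLOOR_PLAN[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("lobby", "room-214"))  # -> ['lobby', 'hall-A', 'bridge', 'room-214']
```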

On one trip, Tom twists his ankle while jogging. His SmartPhone directs him to the nearest emergency room; iPhone 4S users are already familiar with Siri’s ability to do this kind of location finding. Later the SmartPhone recognizes that Tom is using his crutches incorrectly and gives him some pointers. While a “crutches coach” is not currently on the market, similar coaches have been demonstrated in the field. People who use manual wheelchairs are susceptible to repetitive-use injuries of the wrists and shoulder rotator cuffs. Researchers at Carnegie Mellon and the University of Pittsburgh have tested accelerometers in a wristwatch-like bracelet that classify arm movements and encourage the patterns that put the least stress on the wrist and shoulder. Powered wheelchairs with more sophisticated built-in sensors, also tested at the two Pittsburgh universities, help users with spinal cord injuries avoid pressure sores by making sure they shift positions frequently.

Tom wants to give his son, Thomas III, some of the tips he learned from his father on how to swing a baseball bat. So Tom uses a virtual coaching tool kit to develop his own baseball batting app. The app uses one set of sensors to recognize the type of pitch and another set, worn on the body, to analyze the batter’s reaction. Tom doesn’t stop there; he builds several coaching apps, including some for camping skills, gardening, household repairs, and automobile maintenance.

Tool kits already exist for simplifying the development of applications that augment reality—for example, the ARToolKit, an open-source project supported by the University of Washington; the University of Canterbury, in New Zealand; and ARToolworks, in Seattle. To pass along his father’s life lessons, Tom records video of his father answering a variety of questions. In years to come, Tom’s son will ask questions, which the SmartPhone’s speech recognizer will match with an automatically generated index of the video clips, letting the grandson have simulated conversations with his grandfather.
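
The retrieval step behind such a synthetic interview can be approximated by scoring each indexed clip against the recognized question by keyword overlap. The clip names and index terms below are an invented toy example, not output from any real indexing tool.

```python
import re

# Hypothetical index: each recorded clip is tagged with keywords drawn
# from an automatic transcript of the grandfather's answer.
CLIP_INDEX = {
    "clip_war_years.mp4": {"war", "army", "service", "overseas"},
    "clip_first_job.mp4": {"job", "work", "factory", "pay"},
    "clip_meeting_gran.mp4": {"wife", "met", "meet", "dance", "marriage"},
}

def best_clip(question):
    """Pick the clip whose keyword set overlaps most with the question's words."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    score, clip = max((len(words & kw), c) for c, kw in CLIP_INDEX.items())
    return clip if score > 0 else None

print(best_clip("How did you meet your wife?"))  # -> "clip_meeting_gran.mp4"
```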

Photo: Dan Saelinger

Researchers at Carnegie Mellon have already used such synthetic interviews to enable people to converse with historic figures like Albert Einstein or Charles Darwin (played by actors) and with actual teenage breast-cancer survivors.

As Tom ages and his cognitive capacity declines, he becomes less able to make critical decisions while driving. In particular, his weakening night vision makes it difficult to judge the speed of oncoming traffic. His SmartPhone “Driver Capability” app notices his hesitation and instructs the car’s navigation system to use only intersections with left-turn signals, or to plan right-turn-only routes, after dark. As Tom exhibits early signs of Alzheimer’s, he takes advantage of MemExerciser, developed years ago in Pittsburgh, which collects snapshots and audio clips from a camera and microphone worn as a pendant. After an event, such as a trip to the zoo, a caregiver can use a video-editing tool kit to create an audio/video summary, which Tom can replay at his leisure. Studies at Carnegie Mellon’s Human-Computer Interaction Institute have shown that MemExerciser helps improve recall, not only of a particular event but also of activities that were never recorded.

Even later, Tom’s declining health requires ever more monitoring by his doctor. Fortunately, Tom’s SmartPhone Health app allows his doctor to request routine self-monitoring tests using sensors built into the phone. The app administers these tests to Tom according to a schedule set by the doctor, who can review test results and order additional tests if necessary. The Health app also monitors Tom’s activities and notifies the nurse in the independent living complex where he now resides if there are any anomalies.

Today, “health kiosks” [PDF] that perform these functions are already in use in workplaces and senior living centers. United Healthcare, for example, offers them to its clients. It’s not much of a stretch to imagine those functions moving to the smartphone. Already, new applications like VitalClip, an iPhone accessory soon to go into a private beta test, allow users to measure vital signs by touching a finger to a sensor.

Tom’s SmartPhone has captured his eventful life in video and audio, automatically divided into segments and indexed for text search through speech-to-text conversion. Virtual coaches and synthetic interviews capture his hard-won wisdom and archive it for posterity. Later his son, and then his grandchildren, can tap that wisdom, with Tom living on as one of the family’s virtual companions, guardians, and coaches.

The apps that help Tom throughout his imaginary life are all straightforward extrapolations from what exists today. But technology isn’t always bound to a straight path. In the future, the SmartPhone and smart communicators like it will decrease in size until sensing and computing are simply part of everyday objects, integrated into the outer “skins” of devices, woven into clothing, and embedded into countertops. This integrated technology will be situationally aware, understanding the user’s intent and jumping in to help without a touch or a voice command. The Tommys of the future will be protected by helmets and uniforms that anticipate potential concussion-causing collisions and quickly react with counterforces that minimize bruising of the brain. Their footballs will signal “first down” from the bottom of the pile of players—no human judgment necessary. Their kitchens will figure out what meal is being made as ingredients are pulled from the refrigerator and step-by-step preparation instructions are displayed on the countertop. All this technology will have a zero carbon footprint, as it scavenges energy from radio waves in the environment and biodegrades when it is discarded. And we can see this future reflected in today’s smartphones.

About the Author

Dan Siewiorek is a professor at Carnegie Mellon University. He is also director of the Quality of Life Technology Center, which is run jointly by CMU and the University of Pittsburgh and sponsored by the National Science Foundation. He is developing virtual-coaching applications for smartphones—precursors, he hopes, of the ones he describes in “Generation Smartphone.” While his two daughters were growing up, he fantasized about apps like these, which might have saved him much consternation.
