To help children with autism spectrum disorder improve their social skills, researchers at the University of Kentucky have developed a prototype social narrative and gaming system called MEBook.
A Microsoft Kinect camera wired to a PC tracks a child’s facial expressions, body movements, and other behavioral patterns. The system’s current instantiation uses video self-modeling (VSM), an evidence-based approach in which kids watch videos of themselves successfully performing social behaviors, such as waving or smiling, during the intervention. Video footage of the successful moments is spliced together and reviewed with the child afterward. The researchers hope to release a free, downloadable version of MEBook for parents to use at home by the end of this year.
“The incorporation of the gaming system encourages the child to practice what s/he has learned in the social narrative by rewarding the correct behaviors with points and praises,” says Sen-ching Samson Cheung, an associate professor of electrical and computer engineering at the University of Kentucky.
Inspired by his own son’s autism spectrum disorder, Cheung is leading this project and searching for ways to help others affected by the same disorder. “Since his diagnosis six years ago, I have been thinking of different ways to apply my research to improve autism diagnosis and interventions,” he says.
The problem with teaching autistic children using social narratives such as animated stories illustrating social situations, Cheung says, is that kids on the autism spectrum have difficulty relating the social behaviors shown in fictitious scenarios to those that occur in real life. The researchers believe that showing “real” videos of the child, as the main character, performing precise behavioral patterns will help him or her connect those patterns to real life. Ultimately, this helps the child build confidence in similar social situations.
Nkiruka Uzuegbunam, a Ph.D. student collaborating with Cheung on this project, talks about the system in the video below.
The system uses computer vision and signal processing algorithms to separate the subject from the background and to identify when certain behaviors emerge. Some of these algorithms are based on the researchers’ previous work, in which a video surveillance system protected the privacy of certain individuals by detecting them and erasing them from the video footage.
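The article does not describe the researchers’ specific algorithms, but one common starting point for separating a subject from a static background is running-average background subtraction. The sketch below is a minimal, hypothetical illustration of that general idea in Python with NumPy, not MEBook’s actual pipeline; the frame sizes, threshold, and learning rate are arbitrary choices for demonstration.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into an exponential running average of the background."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=30):
    """Mark pixels that differ from the background model by more than a threshold."""
    diff = np.abs(frame.astype(float) - bg)
    return diff > threshold

# Toy example: a dark, static background with a bright "subject" patch appearing.
bg = np.zeros((4, 4))           # learned background model (all dark pixels)
frame = bg.copy()
frame[1:3, 1:3] = 200           # subject enters the middle of the frame

mask = foreground_mask(bg, frame)
print(int(mask.sum()))          # → 4 (the 2x2 subject patch is flagged as foreground)

bg = update_background(bg, frame)   # background model slowly absorbs the scene
```

Once a foreground mask is available, a system can either keep those pixels (to isolate the child for self-modeling footage) or erase them (as in the privacy-protecting surveillance work the researchers describe).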
Cheung says they’ve already finished a preliminary clinical study utilizing the prototype system with three autistic children. “The results are very encouraging; all of our subjects showed an increase of social greeting skills after the intervention,” he says. His team is currently summarizing the results for an IEEE journal.
His team of collaborators in education, psychology, and medicine is also in the early alpha stage of developing a virtual-mirror system, with clinical studies slated to start in 2016. That system captures and modifies the child’s behavior in real time and renders the image on a mirror-like display. The work is part of a four-year NSF grant to apply advanced multimedia technology to enhance “self-model and mirror feedback imageries” for behavior therapies for children with autism.
Theresa Chong is a video host and multimedia technology journalist based in Palo Alto, Calif. As on-camera talent, she has performed science experiments for “Discovery News,” explained how virtual reality works for USA Today, and interviewed Adam Savage for IEEE Spectrum. She has written about wearables for Scientific American and travel tech for Architectural Digest. With a DSLR, GoPro, and green screen by her side, she has produced digital videos of robots, driverless cars, and 3D printing. She earned a master’s degree from Northwestern University’s Medill School of Journalism, and in a prior life she worked as a civil engineer.