Machine Learning Tools Help Google Science Fair Finalists Find Lost Objects, Predict Breast Cancer Risk

Anika Cheerla, a Google Science Fair finalist from Silicon Valley, reviews her research in breast cancer risk assessment using machine learning
Photo: Tekla Perry
Anika Cheerla's submission to the Google Science Fair used machine learning to improve the accuracy of breast cancer risk prediction

This week, 16 teams of teens from around the world assembled in Mountain View to demonstrate the results of research projects at the Google Science Fair. You can view summaries of all the projects here.

I’ve been attending these finals for several years now and am always impressed with how creatively the teens use the technologies of today. And this year was no exception: machine learning is hot in the tech world, and the teens are embracing it.

Consider 14-year-old Anika Cheerla’s submission. Cheerla, from Cupertino, Calif., was curious about the current state of breast cancer prediction, and discovered that risk-prediction methods based on digital mammograms are only about 64 percent accurate, typically because they consider just one feature: the percentage of dense tissue in the breast. She developed software that considers a broader range of features, including both dense and non-dense regions, and, using a database of digital mammograms from Stanford University, built and began training classifiers to predict risk. She discovered that the region closest to the nipple has the highest predictive power, and her system takes that into account. Right now her system is about 84 percent accurate. She hopes to improve it by training it on more images and adding additional machine learning capabilities.
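To get a feel for the approach, here is a minimal sketch of training a classifier on per-region mammogram features. Everything here is illustrative: the feature names, the synthetic data, and the choice of a random-forest model are assumptions, since Cheerla's actual pipeline and dataset are not public.

```python
# Hypothetical sketch: a risk classifier trained on per-region mammogram features.
# Feature names, labels, and data are synthetic stand-ins, not Cheerla's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row stands in for one mammogram: [percent dense tissue,
# dense-region texture, non-dense-region texture, near-nipple-region intensity].
X = rng.random((500, 4))

# Synthetic labels loosely tied to the near-nipple feature, mirroring the
# observation that that region carried the most predictive power.
y = (X[:, 3] + 0.2 * rng.standard_normal(500) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Because the labels here are driven mostly by the fourth feature, the trained forest's feature importances concentrate there, which is the kind of signal a broader multi-feature approach can surface that a single dense-tissue percentage cannot.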

Shriank Kanapurti, a 16-year-old from Bangalore, India, turned to machine learning to help the forgetful find misplaced objects. His system, called KeepTab, uses a wearable camera that constantly records images of what’s in front of you; he designed software that extracts objects from those images and figures out where each one sits in relation to other objects in your environment. To date, he says, he has trained the software on 600,000 images. He uses Google Now’s natural-language software to communicate with the system: you can say “Locate my keys,” and it will respond, “Your keys are on the television.” He’d eventually like to see his software run on less obtrusive wearables, like a future version of Google Glass.
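The lookup step could work something like the sketch below: the detector (not shown) tags each recognized object with the larger anchor object it was last seen near, and a simple query handler answers "Locate my X" questions from those sightings. The function names and phrasing are hypothetical; KeepTab's real interfaces are not public.

```python
# Hypothetical sketch of KeepTab-style lookup: map each detected object to the
# anchor object it was last seen on, then answer natural-language-ish queries.

last_seen = {}  # object name -> anchor object from the most recent frame


def update(detections):
    """Record (object, anchor) pairs extracted from one camera frame."""
    for obj, anchor in detections:
        last_seen[obj] = anchor


def locate(query):
    """Answer a 'Locate my X' style query from the most recent sightings."""
    obj = query.lower().removeprefix("locate my ").strip("?. ")
    if obj in last_seen:
        return f"Your {obj} are on the {last_seen[obj]}"
    return f"I haven't seen your {obj}"


update([("keys", "television"), ("wallet", "desk")])
answer = locate("Locate my keys")  # -> "Your keys are on the television"
```

In the real system the hard part is upstream of this dictionary: recognizing the objects in 600,000 training images and deciding which nearby object counts as the anchor.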


View From the Valley

IEEE Spectrum’s blog featuring the people, places, and passions of the world of technologists in Silicon Valley and its environs.
Contact us:  t.perry@ieee.org

Senior Editor
Tekla Perry
Palo Alto