A decade ago, a group of crop scientists set out to grow the same plants in the same way. They started with the same breeds and adhered to strict growing protocols, but nonetheless harvested a motley crop of plants that varied in leaf size, skin-cell density, and metabolic activity. Small differences in light levels and plant handling had produced outsize changes in the plants’ physical traits, or phenome.
The plunging price of genomic sequencing has made it easier to examine a plant’s biological instructions, but researchers’ understanding of how a plant follows those instructions in a given environment lags. “There is a major bottleneck for a lot of breeders to be able to get their phenotypic evaluation in line with their genetic capabilities,” says Bas van Eerdt, business development director at PhenoKey, in ’s-Gravenzande, Netherlands.
Breeders would like to know, simply by observing the way a plant—or better, a whole crop—grows, whether it is developing on track and how it is responding to local weather conditions. Now, with cheaper sensors and more powerful artificial intelligence algorithms, researchers are inching closer to that goal. Their hope is to make the typical 1.3 percent annual yield improvement in crop production look more like Moore’s Law.
The go-to technique for this work is still optical imaging. Some researchers are now writing software to allow growers to use smartphone cameras to quantify some parts of a crop’s phenotype. But they are also adapting an array of more sophisticated imaging technologies from aerospace and biomedical physics to the field. Breeders in North Carolina and the Netherlands are using drones and greenhouses equipped with hyperspectral, fluorescent, and tomographic sensors to quantify more of their crop’s phenomes.
Hyperspectral imaging can reveal hidden damage from insects. Magnetic resonance imaging (MRI) can detect water droplets as a seed absorbs them and follow the seed through germination and other stages of development. Positron emission tomography (PET) allows researchers to peer through soil into flower bulbs and visualize the layout of a plant’s root system.
The European Union spent some €250 million (about US $300 million) between 2005 and 2015 on plant phenotyping research infrastructure, and American crop giants and government agencies are spending millions on this research alongside major breeding companies such as Syngenta and Bayer.
In the past, evaluating a new crop variety required breeders to visit every plant in a test plot, take detailed notes, and rank all the plants for the next round of breeding. “This is actually the limiting factor in the experiments that we run,” says roboticist and business developer Rick van de Zedde of Wageningen University & Research, in the Netherlands. “It’s not really about the cost; it’s more about the enormous amount of time that it takes.”
Instead, PhenoKey annotates thousands of images of test crops, adding labels to identify characteristics such as flower bud count and leaf shape. The company uses these annotations to train its artificial intelligence software on the traits of a specific type of plant. In one case he presented a few years ago, van Eerdt says, a breeding company spent no more than 50 person-hours improving an image analysis algorithm so that it could detect orchid buds with 95 percent accuracy in a greenhouse full of plants—about a twentieth of the time it took to describe the plants manually.
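The annotate-then-train workflow described above can be sketched in miniature. PhenoKey’s production systems presumably use deep neural networks on greenhouse imagery; the toy below stands in with a nearest-centroid classifier over hand-made feature vectors. All feature values, labels, and function names here are invented for illustration, not drawn from the company’s software.

```python
# Toy sketch of the annotate-then-train loop: humans label example
# images, software learns the labels, then predicts on new images.
# Feature vectors (e.g. mean greenness, blob roundness) are invented.
from math import dist

# "Annotations": each training image reduced to a feature vector
# plus a human-supplied label such as "bud" or "no_bud".
training = [
    ((0.82, 0.91), "bud"),
    ((0.78, 0.88), "bud"),
    ((0.20, 0.35), "no_bud"),
    ((0.15, 0.30), "no_bud"),
]

def centroid(points):
    """Average a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# "Training": one centroid per label.
model = {
    label: centroid([vec for vec, lab in training if lab == label])
    for label in {lab for _, lab in training}
}

def predict(vec):
    """Assign the label whose centroid lies nearest the feature vector."""
    return min(model, key=lambda lab: dist(vec, model[lab]))

print(predict((0.80, 0.90)))  # a bud-like feature vector
```

A real system replaces the two-number feature vectors with pixel data and the centroid rule with a trained network, but the division of labor is the same: a modest amount of human annotation up front buys automated scoring of every plant afterward.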
Van de Zedde won €22 million in 2018 to build a new Dutch national phenotyping research facility that joins a small but growing number of facilities around the world.
The ultimate goal, van Eerdt says, is to combine automated phenotyping with automated genomics screening. “If you have a deep understanding of how your genetics work…and a model that predicts phenotypic outcomes, then in theory, it’s possible to predict how your crop will look,” he says.
This article appears in the January 2019 print issue as “Automated Eyes Watch Plants Grow.”