Tech Talk

Goal Detection Technology for the Other Football

Last Sunday, during the World Cup game between France and Honduras, FIFA's goal-line detection technology passed its first field test. It was early in the second half when the French striker Karim Benzema nailed a shot right into the post that sent the ball blasting back across the goal line and bobbling into the hands of the Honduran goalkeeper. The crowd gasped and then went silent. It wasn't immediately apparent whether the ball had crossed the line.

Plays like this are decided by a lone referee on the field, and in the past, whether he made the right call depended entirely on whether he had a good line of sight through a throng of frenzied athletes (this 2010 World Cup goal goes to show that referees don't always get it right). Now, however, the refs are getting backup from a system of high-speed cameras that follow the ball in 3-D and send an alert when it crosses the goal line. Seconds after Benzema's shot, the ref received a vibrating signal on his smartwatch, a point went up on the board, and the crowd broke into a jubilant chant.
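
FIFA doesn't publish GoalControl's algorithms, but the geometric test at the heart of any goal-line system is easy to state: once the cameras have triangulated the ball's center in 3-D, a goal should be signaled only when the entire ball has passed the plane of the goal line. Here is a minimal sketch of that check; the names and numbers are illustrative, not taken from FIFA's system.

```python
# Toy check for whether a tracked ball has fully crossed the goal line.
# Purely illustrative; the real system fuses multiple camera views and runs
# far more sophisticated tracking before making this final geometric call.

BALL_RADIUS_M = 0.11   # a regulation ball is roughly 22 cm in diameter

def fully_crossed(ball_center_y: float, goal_line_y: float,
                  radius: float = BALL_RADIUS_M) -> bool:
    """True if the whole ball is beyond the goal-line plane.

    The laws of the game require the *entire* ball to cross the line,
    so the tracked center must be at least one radius past the plane.
    """
    return ball_center_y > goal_line_y + radius

# Ball center tracked 8 cm past the line: not yet a goal, since part of the
# ball still overhangs the line. At 12 cm past, the alert would fire.
print(fully_crossed(ball_center_y=0.08, goal_line_y=0.0))  # False
print(fully_crossed(ball_center_y=0.12, goal_line_y=0.0))  # True
```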

FIFA's goal-line technology has improved the accuracy of scoring decisions to such a degree that one has to wonder whether other sports besides soccer might benefit from similar ball-tracking systems. Now, it seems, American football may provide the next test case. Engineers at North Carolina State University and Carnegie Mellon have teamed up with Disney Research to design a sensor-based system for determining the location and orientation of a football on the field, and they are beginning to test its accuracy. They describe the technology in IEEE's Antennas and Propagation Magazine.

American football presents certain challenges that do not exist with soccer. Sometimes, the most important calls a ref can make happen on plays where the ball is buried beneath a scrum of sweaty, 150-kilogram men. In such cases, a visual detection system, such as the one now being used at the World Cup, would be useless.

Instead, the NC State researchers are opting to send radio signals from transmitters hidden within the layers of the football and to track the ball with receivers positioned around the field. According to the paper, engineers have tried similar approaches in the past but ran into trouble because they used high-frequency waves, which are easily absorbed by the human body. The researchers say these previous attempts failed because they could not produce a clear enough signal.

This time around, the engineers are using low-frequency radio waves, with wavelengths hundreds of meters long, that pass through the human body unmolested. The design they came up with embeds a transmitter and battery, together weighing less than 30 grams, inside the football. Eight receivers placed around the perimeter of the field then calculate the location and orientation of the ball in 3-D.
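
The paper's actual inversion of the transmitted field is considerably more involved (it recovers orientation as well as position), but the basic idea of combining measurements from several fixed receivers to solve for one unknown transmitter location can be illustrated with a simple least-squares fit. The sketch below assumes idealized range measurements and made-up receiver coordinates; it is not the authors' algorithm.

```python
# Toy multilateration sketch: estimate a transmitter's 3-D position from
# range-like measurements at eight fixed receivers. Receiver coordinates,
# noise level, and the least-squares formulation are all illustrative.
import numpy as np
from scipy.optimize import least_squares

receivers = np.array([  # hypothetical positions around a field, in meters
    [0.0, 0.0, 2.0], [0.0, 50.0, 2.0], [110.0, 0.0, 2.0], [110.0, 50.0, 2.0],
    [55.0, -5.0, 4.0], [55.0, 55.0, 4.0], [-5.0, 25.0, 4.0], [115.0, 25.0, 4.0],
])

true_position = np.array([45.0, 20.0, 1.0])            # where the ball "really" is
rng = np.random.default_rng(0)
ranges = np.linalg.norm(receivers - true_position, axis=1)
ranges += rng.normal(scale=0.05, size=ranges.shape)    # measurement noise

def residuals(p):
    """Predicted minus measured range at each receiver for candidate position p."""
    return np.linalg.norm(receivers - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([55.0, 25.0, 1.5])).x
print(np.round(estimate, 2))   # roughly [45, 20, 1]
```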

Unfortunately, using low-frequency waves solves one problem while creating another. This time, the interference comes from the ground, which absorbs the signal and re-emits it via eddy currents. To reduce the noise from this secondary signal, the engineers rigged the field with a set of optical instruments. While the receivers keep tabs on the signal transmitted from inside the football, the optical instruments record the precise positions of the receivers. That data can then be used to scrub out the signals coming from the ground.

According to the researchers, their technique must be accurate to within half the length of a football—about 14 centimeters—in order to be considered reliable. Right now they report having it down to 77 centimeters.

For those of you wondering how the performance of a football might change after it's been stuffed with sensors, David Ricketts, one of the authors of the study, says that footballs are actually rather asymmetrical objects to begin with. "The American football is already unbalanced," he explained by email. "The laces cause a weight mis-distribution, so they counterweight it to compensate. The added weight of the transmitter would be handled in the same way."

If you'd like to see how the tracking system would work in a real game, the group has posted a video of their trials here.

Editor's note: The game referenced in the first sentence was between France and Honduras, not Holland as originally reported.

Algorithms Outperform Diabetics at Blood Sugar Control

Doing math or any other mental activity when you have low blood sugar is a recipe for errors. For sufferers of Type 1 diabetes, who must track and adjust their own blood sugar levels, it can mean more than an inability to focus. It can be fatal. Yet a small real-world study announced Sunday at a meeting of the American Diabetes Association (ADA) in San Francisco offers hope that software could monitor blood sugar levels and adjust insulin doses for them, even outside of controlled settings such as hospitals.

The study's authors conducted parallel 10-day trials with 52 participants. One trial tested adolescents at a summer camp; the other involved adults with only minor restrictions on their activity or diet.


Quantum Cascade Laser at the Heart of Spectrometer on a Chip

Mid-infrared spectrometry can assist chemical sensing, environmental monitoring, and disease diagnosis. Conventional mid-infrared spectrometers are highly developed, but sometimes bulky, assemblies: a broadband thermal emitter, an interferometer, a separate broadband detector, external optics, and a sample cell, all feeding into a Fourier-transform analyzer that breaks the signal down and measures absorption at a variety of frequencies.
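
That last step is, literally, a Fourier transform: the interferometer sweeps a mirror, the detector records an interferogram as a function of optical path difference, and transforming that record recovers the spectrum. Here is a minimal illustration with synthetic data, not taken from any particular instrument.

```python
# Minimal illustration of the Fourier-transform step in FTIR spectroscopy.
# The detector records an interferogram versus optical path difference; its
# Fourier transform recovers the spectral lines. All values are synthetic.
import numpy as np

n = 4096
dx = 1e-4                                 # mirror step in cm (illustrative)
x = np.arange(n) * dx                     # optical path difference, cm
wavenumbers = np.array([1538.0, 1600.0])  # cm^-1; 1538 cm^-1 is ~6.5 um light
amplitudes = np.array([1.0, 0.4])

# Interferogram: one cosine per spectral line, summed at the detector.
interferogram = sum(a * np.cos(2 * np.pi * k * x)
                    for a, k in zip(amplitudes, wavenumbers))

spectrum = np.abs(np.fft.rfft(interferogram))
freq_axis = np.fft.rfftfreq(n, d=dx)      # spectral axis in cm^-1

peaks = freq_axis[np.argsort(spectrum)[-2:]]
print(np.sort(np.round(peaks, 1)))        # two peaks near 1538 and 1600 cm^-1
```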

Sometimes, though, you need quick answers when far from the laboratory. A research team at the Technical University of Vienna (TU Wien) is designing a single-chip device to meet what the leader of the team calls the “challenging task” of making “mid-infrared spectroscopy accessible to remote areas, where conventional power supply and laboratory equipment” are rare or nonexistent.

Benedikt Schwarz, lab chief Gottfried Strasser, and their colleagues at the university’s Institute for Solid State Electronics and Center for Micro- and Nanostructures have built an easy-to-fabricate lab-on-a-chip that integrates a mid-infrared (6.5-micrometer-wavelength) laser, a plasmonic waveguide (which also functions as a sample chamber), and a detector into a device that can identify components in gases or liquids by their absorption signatures.

The laser and detector are, in fact, essentially the same device, a bi-functional quantum cascade laser/detector (QCLD). The QCLD can be switched from emitter to receiver by changing the applied bias, so the same fabrication process can produce both structures at the same time.  

A conventional laser emits light when an electron crosses a band gap in a relatively homogeneous semiconductor to annihilate a hole—one hole, one photon. The quantum cascade device, by contrast, consists of nanometers-thin alternating layers of two materials (in this case, 26 layers of indium aluminum arsenide alternating with indium gallium arsenide). In the QCLD, an electron can make an energy transition, emit a photon, and then tunnel into the next layer to emit another, creating the cascade and delivering a stronger signal.
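
A back-of-the-envelope calculation using only the 6.5-micrometer wavelength quoted above shows why such engineered layers are needed in the mid-infrared: the photon energy comes out to roughly 0.19 electron volts, well below a typical semiconductor band gap, so the light must come from small intersubband transitions rather than ordinary band-to-band recombination.

```python
# Back-of-the-envelope photon energy at the chip's 6.5-micrometer wavelength:
# E = h * c / lambda. The ~0.19 eV result is far below typical III-V band
# gaps, which is why mid-IR emitters use intersubband (cascade) transitions.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron volt
wavelength = 6.5e-6  # meters

energy_eV = h * c / wavelength / eV
print(round(energy_eV, 3))   # ~0.191 eV
```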

The TU Wien group has published a good deal about the construction and control of quantum cascade lasers (see contributions to Applied Physics Letters and the IEEE Conference on Lasers and Electro-Optics), but the current work, reported in Nature Communications, focuses on the waveguide “sample chamber.”

The waveguide, a stripe of silicon nitride laid down (by molecular beam epitaxy) between laser and detector, captures the laser beam as a plasmon, a wave that is confined in a restricted region stretching from just below the surface to a small distance above it. About 96 percent of the beam’s energy is confined in a layer above the stripe surface, and varying the stripe’s dimensions changes the parameters of the plasmon.

Schwarz and his colleagues evaluated stripes 15 micrometers wide, up to 500 nanometers thick, and 5, 50, and 100 micrometers long. In particular, the thicker the SiN layer, the more tightly the beam is constrained to the area above the stripe’s surface: while the beam might reach up to a height of 50 micrometers on a raw gold surface, it is compressed into a layer about one-tenth that thick over a 200-nanometer-thick bed of silicon nitride.

The waveguide tremendously boosts the coupling efficiency—the ratio of the energy received at the detector to the energy emitted by the laser. Without a waveguide, a laser and detector separated by 100 micrometers of raw gold have a coupling efficiency of almost zero. With the waveguide, the coupling efficiency can climb to about 40 percent.

This compression of the laser beam creates a virtual sample cell—one without walls, open to the environment, and operating at room temperature. A drop of fluid or a breath of gas can be sampled by bursts of 6.5-micrometer radiation, each 100 nanoseconds long, repeated 5000 times a second.

By gauging the amount of energy the sample absorbs, the single-chip spectrometer can measure its composition. The researchers tested their device with single drops of ethanol-water solution. In about a tenth of a second (since resolution increases with the square root of the sampling interval), the device could correctly gauge the ethanol concentration with a resolution of about 0.06 percent over the wide range from 0 to 60 percent (or, as I think of it, spring water to 120 proof).
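
The measurement itself is classic absorption spectroscopy: the more ethanol sits in the plasmonic layer, the less 6.5-micrometer light survives the trip from laser to detector, and the Beer-Lambert law ties that attenuation to concentration. The sketch below shows that inversion; the path length and absorptivity are invented placeholders, not values from the TU Wien paper.

```python
# Toy Beer-Lambert inversion: infer an analyte concentration from the fraction
# of laser power that reaches the detector. The interaction length and
# absorptivity below are invented placeholders, not the paper's values.
import math

PATH_LENGTH_CM = 50e-4   # assume a 50-micrometer interaction length
EPSILON = 120.0          # assumed absorptivity, 1/(cm * concentration unit)

def concentration(received_power: float, reference_power: float) -> float:
    """Invert I = I0 * 10**(-epsilon * c * L) for the concentration c."""
    absorbance = -math.log10(received_power / reference_power)
    return absorbance / (EPSILON * PATH_LENGTH_CM)

# Example: the detector sees 85 percent of the power measured with a blank
# (pure solvent) reference in the sample region.
print(round(concentration(received_power=0.85, reference_power=1.0), 3))
```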

With its wide dynamic range, high resolution, relatively simple and inexpensive fabrication, and single-chip robustness, the “monolithically integrated mid-infrared lab-on-a-chip” may be able to break analyses out of the lab and get them into the field.

Photo: TU Wien

U.S. Military Wants Laser-Armed Humvees to Shoot Down Drones

Laser weapons mounted aboard U.S. Navy ships and large trucks have already shown the power to shoot down flying drones during test trials. That early success has encouraged the U.S. military to fund a new effort to develop smaller versions of these anti-drone weapons that can fit light ground vehicles such as the military Humvee.


Memory Cells Built on Paper

A team based at the National Taiwan University in Taipei has used a combination of inkjet and screen printing to make small resistive RAM memory cells on paper. These are the first paper-based, nonvolatile memory devices, the team says (nonvolatile means that the device saves its data even when it's powered down). 

As Andrew Steckl outlined in his feature for IEEE Spectrum last year, paper has a lot of potential as a flexible material for printed electronics. The material is less expensive than other flexible materials, such as plastic. It boasts natural wicking properties that can be used to draw fluids into sensors. And it can be easily disposed of by shredding or burning.

Basic circuit components, such as wires, resistors, capacitors, transistors and diodes, have been printed on paper. But memory is one of the last frontiers, says graduate student Der-Hsien Lien, and it will be needed if we expect paper electronics to perform computation and store data.

Lien and his colleagues tackled the problem by setting out to build resistive RAM, or RRAM, memory cells. In this kind of memory, a layer of switching material is sandwiched between two electrodes. An applied voltage pulls metal ions from one of the electrodes into that layer, forming a conductive path that lowers the cell's resistance.
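
Put another way, each cell behaves like a programmable resistor: one voltage polarity builds the conductive path (a low-resistance state, read as a 1), the opposite polarity dissolves it (a high-resistance 0), and a small read voltage senses the state without disturbing it. Here is a toy model of that behavior, with made-up thresholds and resistance values.

```python
# Toy model of a resistive-RAM cell: write with voltage polarity, read by
# sensing resistance. Thresholds and resistance values are illustrative only.

SET_VOLTAGE = 1.5             # volts needed to form the conductive path
RESET_VOLTAGE = -1.5          # opposite polarity dissolves it
LOW_RES, HIGH_RES = 1e3, 1e6  # ohms: low resistance stores a 1, high a 0

class RRAMCell:
    def __init__(self):
        self.resistance = HIGH_RES      # starts erased (logic 0)

    def apply_voltage(self, volts: float) -> None:
        if volts >= SET_VOLTAGE:
            self.resistance = LOW_RES   # "set": conductive path formed
        elif volts <= RESET_VOLTAGE:
            self.resistance = HIGH_RES  # "reset": path ruptured
        # small read voltages leave the state untouched, hence nonvolatile

    def read(self) -> int:
        return 1 if self.resistance == LOW_RES else 0

cell = RRAMCell()
cell.apply_voltage(2.0)    # write a 1
print(cell.read())         # 1
cell.apply_voltage(-2.0)   # erase back to 0
print(cell.read())         # 0
```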

Image: Der-Hsien Lien et al. An RRAM cell printed on paper. The layers, from bottom to top, consist of paper, carbon, titanium oxide, and silver.

To build the cells, the team first used screen printing to coat paper with a layer of carbon paste that would serve as the bottom electrode. An inkjet printer was then used to print lines of titanium oxide nanoparticles on top. After those lines had dried, the researchers formed top electrodes by depositing small dots of silver atop the nanoparticles.

The team conducted various tests to confirm the cells could switch between states by applying a positive or negative voltage and performed reliability tests to confirm that the memory cells retained their behavior even after the paper had been bent. The results were presented last Wednesday at the Symposium on VLSI Technology in Honolulu, Hawaii. 

Lien reported memory cells as small as 50 micrometers. These could potentially be packed together to store about 1000 bits per centimeter, which amounts, Lien says, to about 1 MB on a single side of a sheet of standard A4 paper. But the team reckons better inkjet printers, which are now capable of printing submicrometer features, could increase that memory capacity to 1 GB.

The capacity could go further still by building memory cells at the intersections of crossed lines (an approach that's dubbed crossbar memory). Crossbar structures should prove easier to stack, which means that even more cells could be packed in a given area in three dimensions, says Jr-Hau He, one of the team leaders.
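
The appeal of a crossbar is that addressing comes straight from the geometry: each cell sits at the crossing of one word line (row) and one bit line (column), and stacking planes multiplies the cell count. A minimal sketch of that addressing scheme follows; it is purely illustrative rather than the Taiwanese team's design.

```python
# Minimal sketch of crossbar addressing: a cell is selected by the intersection
# of one word line and one bit line, and planes can be stacked in 3-D.
# Purely illustrative; stores bits in a dictionary rather than real cells.

class CrossbarMemory:
    def __init__(self, layers: int, rows: int, cols: int):
        self.shape = (layers, rows, cols)
        self.bits = {}   # sparse store: (layer, row, col) -> bit

    def write(self, layer: int, row: int, col: int, bit: int) -> None:
        # Driving one word line and one bit line uniquely picks this cell.
        self.bits[(layer, row, col)] = bit

    def read(self, layer: int, row: int, col: int) -> int:
        return self.bits.get((layer, row, col), 0)

mem = CrossbarMemory(layers=4, rows=200, cols=200)   # 160,000 addressable cells
mem.write(2, 10, 37, 1)
print(mem.read(2, 10, 37))   # 1
```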

The team is now looking for a partner who can help build the electronics associated with storing and reading information in the memory cells.

World Cup or World's Fair? Technology Takes Center Field at the Games in Brazil

Every four years, 32 countries from around the world send their fiercest, most battle-ready soccer players to compete in the sport's most celebrated international event: the World Cup. This year, the world has sent its finest innovations as well. The shoddy arenas in Brazil may themselves be a source of shame (during the first half of yesterday's opening match between Brazil and Croatia, about half the lights in the São Paulo stadium flickered and fizzled), but the fields and stands are brimming with technology.

Some of it is on full display. The opening kick, which is not traditionally known for being a high tech moment, took on great significance yesterday afternoon when Juliano Pinto—a Brazilian whose athletic career ended after he was paralyzed from the waist down in a 2006 car crash—stood up from his wheelchair and knocked the ball forward while wearing a robotic exoskeleton. The contraption took its commands from a set of electrodes pasted to Pinto's scalp which detected and deciphered faint electrical signals from his brain. The demonstration showcases the pioneering work of Miguel Nicolelis, a Brazilian neuroscientist and brain-computer interface researcher.

The other technologies being unveiled at the World Cup are certainly more subtle, but they actually have the potential to influence the outcome of games. During the 11th minute of yesterday's inaugural game, fans in São Paulo cringed and winced as they watched the ball deflect off the left toe of Brazilian defender Marcelo Vieira and land in the back of his own team's net, marking the first own goal Brazil has ever scored in World Cup history. Even before the spectators could absorb what was happening, a new automated detection system, called GoalControl, was alerting the referees on the field that a goal had been scored. Officials at the 2014 games are wearing smart watches that vibrate whenever a ball fully crosses the goal line. The system, which uses 14 high-speed cameras (seven pointed at each goal) to capture the ball's movement in 3-D, proved its effectiveness in a trial at the 2013 Confederations Cup, where it successfully detected all 68 goals.

And then there is all the technology that fans bring with them to the stadiums. During the 2006 World Cup, the texting, posting, and tweeting of feverish fans generated 30 gigabytes of data traffic. And that was before Instagram existed. This year, analysts are expecting a cumulative total of 12.6 terabytes.

The IEEE Standards Association has broken down all of these World Cup technologies in this comprehensive infographic:

Goings On at the North Carolina Maker Faire

Maker Faire North Carolina has been maturing. When I visited the first Maker Faire NC in 2010, there were vendors, to be sure, but it was easy enough to find average weekend tinkerers—people not associated with any company or organized group—demonstrating their techno-handiwork. That was much less true of the fifth edition, which took place last weekend at the state fairgrounds in Raleigh.

There were still many interesting things to see and do; indeed, there was a lot more than at the first gathering four years ago. This year's event included lock-picking instruction, a learn-to-solder table, and a giant battlebot arena, to name some prominent attractions. I imagine this and other Maker Faires appeal to many more people now than when they first sprang up.

Still, I couldn’t help feeling a sense of loss. Sure, the gizmos were more numerous and more polished. But they were also more predictable, dominated by things that involved robots or 3-D printing. A group conducting high-altitude balloon launches was a welcome exception. The following video should give you a sense of what I mean:

There’s no question that this was an entertaining event for the whole family. (I brought my two kids, who much enjoyed it.) But somehow it didn’t really spark any wow moments or that "I-just-have-to-build-one-of-those" feeling. I suspect the reason has something to do with the way so much of the offbeat technical tinkering of five years ago has since become almost mainstream.

The Best ROI? A CS Degree from Carnegie Mellon

Engineering and computer science are tough. And tuition at top engineering schools can cost a pretty penny. For those trying to pick a program with good return-on-investment, a recent survey of schools that produce the top-earning engineering/CS graduates could come in handy.

Computer science grads from Carnegie Mellon University make the highest reported starting salaries, averaging $89,832, according to the survey by the higher-education unit of San Francisco-based online personal finance service NerdWallet. Second and third on the list are grads of the California Institute of Technology and Stanford University’s College of Engineering, with average starting salaries of $83,750 and $74,467 respectively.

NerdWallet looked at the top 100 national universities plus the top 30 liberal arts schools from US News & World Report’s list of top colleges. It averaged starting salaries for the classes of 2011, 2012 and 2013 for each school.

As this article on Forbes points out, the survey is by no means comprehensive, since many schools, including Harvard and Yale, don't release salary data. But it's one of the few reports on the best-paying engineering schools.

Engineers are, of course, consistently big earners (some might say overpaid) in salary reports, with no shortage of jobs. Engineering and computer science graduates fill all but one spot on the list of the top 10 highest-paid majors for the class of 2014 in the latest Salary Survey report by the National Association of Colleges and Employers. That's despite starting salaries for engineers rising only 0.3 percent between 2013 and 2014, compared with 3.7 percent for health science majors.

Bottom line: when it comes to earning well, you can’t go wrong by choosing engineering.

Sony Creates Curved CMOS Sensors That Mimic the Eye

The retinas of humans and other animals line the curved inner surface of the eye. Now, in a bit of biomimicry, Sony engineers report that they have created a set of curved CMOS image sensors using a "bending machine" of their own construction.

The result is a simpler lens system and higher sensitivity, Kazuichiro Itonaga, a device manager with Sony's R&D Platform in Atsugi, Japan, reported on Tuesday at the Symposium on VLSI Technology in Honolulu, Hawaii.

A curved CMOS sensor has a few advantages over a planar sensor, Itonaga said. Because of the geometry, it can be paired with a flatter lens and a larger aperture, which lets in more light. Photodiodes at the periphery of a sensor array will be bent toward the center, which means light rays will hit them straight on instead of obliquely. What's more, the strain induced on a CMOS sensor by bending it alters the band gap of the silicon devices in the sensor region, lowering the noise created by "dark current" — the current that flows through a pixel even when it is receiving no external light. 

All told, the curved systems were 1.4 times more sensitive at the center of the sensor and twice as sensitive at the edge, according to the Sony engineers.

Itonaga gave few details on the process the team used to create the curved CMOS chips. He said that a machine was used to bend the CMOS sensors and that they were backed with a ceramic to stabilize them after bending. He did not specify the radius of curvature, though he said the chips achieve the same degree of curvature found in the human eye.

Two chips were reported. One, which measured some 43 millimeters along the diagonal, is a full-size chip for digital cameras. The other is a smaller chip, more suitable for mobile phones, which measured 11 mm along the diagonal and boasts smaller pixels. The team integrated the curved image sensor with a lensing system and showed an image that seemed to be quite good, although it wasn't displayed alongside an image taken with an equivalent flat sensor for comparison.

This isn't the first curved image sensor to be developed. In 2008, for example, John Rogers' group at the University of Illinois at Urbana-Champaign reported they'd made a curved photodetector array by bending an array of photodiode islands connected by compressible interconnect. But Sony's work might be a bit closer to mass manufacture. The team has made somewhere in the vicinity of 100 full-size sensors with their bending machine. "We are ready," Itonaga said.
