Tech Talk

Model Predicts Whether NFL Teams Will Run or Pass

National Football League (NFL) playbooks are the size of telephone books. They’re filled with dozens and dozens of plays, each designed so that a team can play to its strengths while taking advantage of its opponents’ weaknesses. Despite the endless variations, they all basically boil down to two options for the offense: pass or run. No matter how intricately designed an offensive play is, if the defense can sniff out whether the ball will be tossed downfield or toted along the ground, it gains a tremendous advantage. (Yes, we know that teams punt and kick field goals and extra points after touchdowns. But we’re not talking about that right now.)

Earlier this week, a pair of statisticians from North Carolina State University showed off a model they built that predicts, with a high degree of accuracy, whether a specific team will call a passing or running play. They presented the model in Seattle at JSM 2015, a joint conference of statistical and mathematical societies.

William Burton, an undergraduate who is majoring in industrial engineering and minoring in statistics, and Michael Dickey, who graduated in May with a degree in statistics, used a database of actual NFL offensive plays from the 2000 through 2014 seasons, compiled by a company called Armchair Analysis, to figure out the ratio of passes to runs. They showed empirically what fans already understood anecdotally: the aerial attack is being used ever more frequently. Pass plays were called 56.7 percent of the time in 2014, compared with 54.4 percent in 2000.
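That season-by-season split is a simple aggregation once play-by-play data is in hand. As a rough sketch (the column names below are made up for illustration, not the actual Armchair Analysis schema), the per-season pass rate boils down to a few lines of Python:

```python
import pandas as pd

# Hypothetical play-by-play table; the real Armchair Analysis schema differs.
plays = pd.DataFrame({
    "season":    [2000, 2000, 2000, 2014, 2014, 2014],
    "play_type": ["pass", "run", "pass", "pass", "pass", "run"],
})

# Fraction of offensive plays that were passes, by season.
pass_rate = (
    plays.assign(is_pass=plays["play_type"].eq("pass"))
         .groupby("season")["is_pass"]
         .mean()
)
print(pass_rate)
```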

But what makes a team decide whether to run or throw? Burton and Dickey looked at a host of factors that affect a team’s play selection, among them the distance to the first-down marker; the down (first, second, third, or fourth); how much time is left on the game clock; the team’s score relative to its opponent’s; and field position. For example, there’s a high probability that the coach will opt for a passing play if the other team is leading by three points, there’s a minute left in the fourth quarter, and the offense is facing third down at its own 30-yard line, needing to advance 7 yards to pick up a fresh set of downs. On the other hand, a team that’s leading by 7 points, facing the same down and distance at the same point in the game, might very likely run the ball (to avoid an interception, and to take time off the clock so the other team can’t mount a score-tying drive before time runs out).

For their system, Burton and Dickey developed logistic regression models (a method used, for example, to predict whether someone will default on a mortgage) and random forest models (a machine-learning method). But they quickly realized that teams’ strategies differ significantly in each of a game’s quarters. To account for that, they produced six separate logistic regression models: one each for the first, second, and third quarters, plus three for the fourth quarter, covering the cases where the offensive team is winning, losing, or tied. They tested the models on 20 randomly selected games. Overall, the models accurately predicted pass or run on 75 percent of downs. Their best performance came in a 2014 game between the Jacksonville Jaguars and Dallas Cowboys, where the predictions proved correct on 109 out of 119 offensive plays, a 91.6 percent accuracy rate.
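Burton and Dickey’s exact features and fitted coefficients weren’t published with the talk, so the following is only a minimal sketch of the general recipe, using synthetic data and assumed feature definitions, showing how one quarter-specific logistic regression might be fit with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for the situational features described in the talk:
X = np.column_stack([
    rng.integers(1, 5, n),      # down (1-4)
    rng.integers(1, 20, n),     # yards to the first-down marker
    rng.integers(0, 900, n),    # seconds remaining in the quarter
    rng.integers(-21, 22, n),   # offense's lead (negative if trailing)
    rng.integers(1, 100, n),    # field position (1 = own goal line)
])

# Fabricated ground truth, purely so the example runs end to end:
# long distance, little time, and trailing all push toward a pass.
logit = 0.15 * X[:, 1] - 0.002 * X[:, 2] - 0.05 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = pass, 0 = run

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In the researchers’ actual system, six such models would each be trained only on plays from their own quarter and score situation.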

Burton and Dickey say that anyone, including NFL coaches and fans rooting for their teams at home, can use the tool to make educated guesses about what will happen each time the ball is snapped.

DARPA Funds Stage 2 of XS-1 Spaceplane Design Competition

The Space Shuttle was originally intended to make getting to space easy, inexpensive, and routine, with an initial goal of a launch nearly every week. It didn't quite play out that way, and we’re now back to tossing things into orbit on top of massively expensive rockets that are good for only a single one-way trip. It’s a system that works (most of the time), but it's not a system that’s efficient.

Just because the Space Shuttle didn’t magically open orbit up to everyone doesn’t mean that getting to space on a reusable platform is a bad idea. In commercial spaceflight circles, the thinking is that reusability is the best way to drive down costs. DARPA, the U.S. Defense Department’s blue-sky research arm, wants a piece of the action, but in typical DARPA fashion, the agency is looking for something even more futuristic. It has just awarded nearly $20 million to three aerospace companies to continue design work on a reusable spaceplane that would be able to launch a satellite into space every single day.


See Through Walls by the Glow of Your Wi-Fi

It used to be that a bad guy besieged by police could just shoot out the lights and hide in the dark. As if it weren’t enough that today’s cornered malefactors have to worry about night vision goggles, tomorrow’s thugs may also have to worry about the soft radio glow of wireless routers and mobile communications towers.

Researchers at University College London (UCL) have devised a system for detecting the Doppler shifts of ubiquitous Wi-Fi and mobile telephone signals to “see” people moving, even behind masonry walls 25 centimeters thick. The method, which could be useful in situations ranging from hostage-takings to traffic control, won the Engineering Impact Award in the RF and Communications category at National Instruments’ NI Week 2015 meeting (which convened in Austin, Texas, 3–6 August).

Other researchers—notably Dina Katabi and Fadel Adib of MIT—have built through-wall radars in the household communication bands, but these are active radars that transmit as well as receive. The UCL technique uses only passive radiation—from Wi-Fi routers (using emissions under any of the IEEE 802.11b, g, n, or ac standards), ambient GSM and LTE mobile signals, and other sources—so there is nothing to betray the surveillance. The system calculates the position of a hidden target by comparing two signals: a reference channel, which receives the baseline signal from the Wi-Fi access point or other RF source, and a surveillance channel, which picks up Doppler-shifted waves reflecting from the moving subject.

Bo Tan and his colleagues built their “high Doppler resolution passive Wi-Fi radar” on two multi-frequency, software-defined, FPGA-based transceivers (National Instruments’ USRP, or Universal Software Radio Peripheral). The system compares the reference and surveillance signals, interprets the very small frequency shifts, and reveals the hidden subject’s location and motion.
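The group’s processing chain isn’t spelled out in the conference materials, but the textbook way to extract a Doppler shift from a reference/surveillance pair is a cross-ambiguity calculation: correlate the surveillance channel against Doppler-shifted copies of the reference and keep the shift that matches best. Here is a minimal, zero-delay sketch of that idea on synthetic signals (the sample rate and Doppler values are illustrative only):

```python
import numpy as np

fs = 10_000             # sample rate in Hz (toy value)
t = np.arange(fs) / fs  # one second of samples
true_doppler = 35.0     # Hz; slow-moving people shift Wi-Fi by roughly tens of Hz

# Synthetic channels: the reference is the direct signal; the surveillance
# channel is a weak, Doppler-shifted echo from the moving target plus noise.
rng = np.random.default_rng(1)
ref = np.exp(2j * np.pi * 500 * t)
surv = 0.1 * ref * np.exp(2j * np.pi * true_doppler * t)
surv += 0.05 * (rng.standard_normal(fs) + 1j * rng.standard_normal(fs))

# Zero-delay cut of the cross-ambiguity function: test candidate Doppler
# shifts and keep the one with the strongest correlation.
candidates = np.arange(-100, 100, 0.5)  # Hz
caf = [np.abs(np.vdot(ref * np.exp(2j * np.pi * fd * t), surv))
       for fd in candidates]
print(f"estimated Doppler: {candidates[int(np.argmax(caf))]:.1f} Hz")
```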

By tweaking the processing parameters—increasing signal-integration time and lowering sensitivity thresholds—the engineers could coax the passive radar into “seeing” quite subtle movements, even hand gestures. At this stage, the device doesn’t produce anything resembling a photograph of the subject. It delivers a radar-style scatter plot, a flare of color that says, “Here he is!” along with a variety of signal data.  The system is described in more detail in a paper that Tan and UCL colleagues Qingchao Chen, Karl Woodbridge, and Kevin Chetty presented at the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 19-24 April in South Brisbane, Australia.

Samsung, Nokia Show 5G Tech at NI Week

Two on-stage demonstrations at National Instruments’ NI Week, which was held in Austin, Texas, from 3–6 August, showed off working infrastructure prototypes for the 5G networks that will be the backbone of Big Data applications and the Internet of Things (the conference’s themes). 5G’s goals include hundred-fold performance improvements over today’s 4G systems, allowing the transmission of tens of megabits per second to thousands of mobile users, and offering peak rates well north of a gigabit per second within an office.

To open Day 1, researchers from Samsung showed off full-dimensional multiple-input, multiple-output (FD-MIMO), one of several parallel techniques for squeezing more information through the airwaves. FD-MIMO sculpts the signals fed to an array of transmission antennas to form virtual beams that can lock in on multiple receivers in three dimensions. This cuts interference from overlapping simultaneous transmissions to other receivers and increases the power of the signal that reaches the target.

The Samsung demo base station transmitted simultaneously at different data rates to four separate receivers. For demonstration purposes, the base station transmitted at 3.5 gigahertz (~86 millimeter wavelength), though production transmitters will likely use carriers in the tens-of-gigahertz range. (The receivers were configured on NI USRP RIOs, or Universal Software Radio Peripheral Reconfigurable IO, a transceiver that can be programmed to reproduce the characteristics of a variety of RF devices over a range of frequencies.)

Initially, in conventional broadcast mode, interference between the four streams garbled the signals and kept any of the data from getting through. Switching to FD-MIMO, however, modulated the signals produced by each of the base station’s 32 antennas to allow beamforming in three dimensions. The transmitter homed in on each of the four receivers to push through separate, clear signals. Throughputs measured at the receivers jumped from essentially zero to as much as 28 megabits per second. The Samsung engineers cautioned, though, that the demonstration was intended to show how much FD-MIMO can improve signal quality, not to showcase a full-blown 5G concept.
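To see numerically why beamforming clears up that interference, consider zero-forcing precoding, a textbook multi-user MIMO technique (illustrative here, not necessarily the algorithm Samsung used): given channel estimates for all four receivers, the base station chooses weights for its 32 antennas so that each user’s stream is nulled at the other three receivers.

```python
import numpy as np

rng = np.random.default_rng(2)
n_antennas, n_users = 32, 4  # matching the demo's configuration

# Random complex channel matrix: rows are users, columns are antennas.
H = (rng.standard_normal((n_users, n_antennas))
     + 1j * rng.standard_normal((n_users, n_antennas))) / np.sqrt(2)

# Zero-forcing precoder W = H^H (H H^H)^-1, chosen so that H @ W = I.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# The effective channel after precoding is the identity matrix: each
# receiver hears only its own stream; cross-user interference is nulled.
print(np.round(np.abs(H @ W), 3))
```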

(For a quick look under the hood of Samsung’s FD-MIMO, see the Xcell blog of Xilinx’s Steve Leibson.)

On Day 2, engineers from Nokia demonstrated another cornerstone of 5G: higher frequency. In what Nokia’s head of North American radio research, Amitabha Ghosh, called the first public demonstration of the company’s 5G base station (it debuted at an invitation-only event at the Brooklyn 5G Summit last April), Ghosh and his colleagues sent two MIMO streams across the stage. The 73-GHz (~4 mm) signals used 2 GHz of bandwidth to achieve a combined throughput greater than 10 gigabits per second with a latency under 1 millisecond. (To see the video, go to the NI Week web site, click the “Wednesday” tab, and select “Nokia.”)
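A quick Shannon-capacity check (an illustrative calculation, not part of Nokia’s presentation) shows why such a wide channel makes those figures plausible: 10 Gb/s split over two streams in 2 GHz of bandwidth is a spectral efficiency of just 2.5 bits per second per hertz per stream, which requires only a modest signal-to-noise ratio.

```python
import math

bandwidth_hz = 2e9       # per the demo
throughput_bps = 10e9    # combined across two MIMO streams
per_stream_bps = throughput_bps / 2

# Shannon: C = B * log2(1 + SNR)  =>  SNR = 2^(C/B) - 1
spectral_eff = per_stream_bps / bandwidth_hz  # 2.5 b/s/Hz
snr_linear = 2 ** spectral_eff - 1            # ~4.66
print(f"required SNR per stream: {10 * math.log10(snr_linear):.1f} dB")  # ~6.7 dB
```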

Nokia’s proof-of-concept system is the latest iteration of a 1 GHz demonstration displayed last year that used fully programmable FPGA components. Ghosh also reported that 73-GHz signals had been successfully transmitted outdoors to fast-moving receivers and over distances of more than 200 meters.

The results are significant. Some parts of the millimeter-wave spectrum are open in part because they span the resonant frequencies of atmospheric water and oxygen, which produce spikes in atmospheric absorption. While there is a local attenuation trough around 73 GHz (between flanking spikes at 60 and 120 GHz), atmospheric losses there are still about 20 times higher than they would be for a 1-GHz carrier. This circumstance had bred widespread doubt that useful signals could be carried at all in that part of the spectrum, doubt that these results have helped to quiet.
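A back-of-the-envelope link calculation puts the absorption issue in perspective (the gaseous-attenuation figure below is a rough textbook assumption, not a number from Nokia): over the 200-meter outdoor range reported, absorption near the 73-GHz trough adds only a fraction of a decibel, while ordinary free-space spreading dominates the budget.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Friis free-space path loss in decibels."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 200.0  # meters, the outdoor range Ghosh reported

# Assumed round-number gaseous attenuation near the 73-GHz trough
# (roughly 0.4 dB/km at sea level; actual values vary with humidity).
atmos_db = 0.4 * d / 1000

print(f"free-space loss at 73 GHz over {d:.0f} m: {fspl_db(d, 73e9):.1f} dB")  # ~115.7
print(f"free-space loss at  1 GHz over {d:.0f} m: {fspl_db(d, 1e9):.1f} dB")   # ~78.5
print(f"gaseous absorption over {d:.0f} m: {atmos_db:.2f} dB")                 # ~0.08
```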

For more background, see the 5G vision white papers from Samsung and Nokia.

A Computer That Can Sniff Out Septic Shock

Dr. David Hagar treats dozens of patients each day in the intensive care unit at Johns Hopkins Hospital in Maryland. One of his patients was almost perfectly healthy except for having low blood pressure. Within four hours, the patient had died of septic shock.

A group of computer scientists at Johns Hopkins University partnered with Hagar to create an algorithm that can predict septic shock, giving clinicians more time to treat patients at risk.


FCC Sets Rules for Copper Phase Out

The U.S. Federal Communications Commission set new ground rules for carriers seeking to replace their old copper telephone networks. Approved by a 3-2 vote at an open meeting yesterday, the rules require carriers to notify customers in advance and to seek FCC approval before reducing services. 

Home landline service has dropped dramatically with the spread of mobile phones. In 2000, almost every U.S. household had a landline phone. Since then, many have dropped landline service, and nearly 50 million of the remaining lines have switched to Voice over IP, which sends voice calls in the user's broadband data stream rather than over traditional telephony’s copper wire pairs. FCC chairman Tom Wheeler and others have been pushing to shift telephone traffic to fiber optics and the Internet.

Critics have charged that phone companies are allowing their old copper networks to decay to force customers to shift to fiber service. But some 37 million households—many of them headed by elderly people—remain on legacy copper, commissioner Mignon Clyburn noted at the hearing. Other holdouts live in rural areas that lack cellular and broadband service. Some prefer copper connections because they are independent of local power lines and offer better 911 emergency service.

The FCC ruling requires that carriers notify retail customers at least three months before shutting down a copper network, and provide six months’ notice to interconnecting carriers using the old lines. (Clyburn complained that that's much less time than the FCC gave before shutting down analog broadcast television, but voted for the measure anyway.) Carriers also must seek FCC approval if the telephone changeover would "discontinue, reduce or impair" service. Details remain to be worked out, but key issues are voice quality and support for 911 emergency calls, alarms, and medical monitors, as well as assistive technology for the disabled.

Two dissenting commissioners complained that the new rules would extend regulations and slow adoption of new technology. But Wheeler said that changing technology should not be "an opportunity to erase the historical reality of the responsibility of network [operators] to the users of those services." 

In a separate vote, all five commissioners agreed to require carriers to offer customers backup power supplies that maintain phone service during prolonged power outages. Traditional copper phone lines are independent of local power and have a reputation for being more reliable than power grids. But that hasn't stopped landline users from buying cordless phones that go down with the grid.

Simple Device Could Convert DC Electric Field To Terahertz Radiation

Terahertz radiation is touted as opening up many wondrous possibilities. T-ray technology could allow security officials to detect concealed weapons from a distance, provide accurate medical imaging, and enable high-speed wireless data communication.

One of the challenges in making the technology viable, though, has been developing a compact, efficient, and powerful terahertz source. The sources used today are bulky and costly. Some, such as quantum cascade lasers, require cryogenic temperatures.

A team of physicists now proposes a way to convert DC electric fields into terahertz radiation. They have come up with a seemingly simple nanoscale device—it relies on complex physics, mind you—that consists of a pair of two-dimensional material layers placed on top of a thicker conductor. When a DC electric current is passed through the conductor or the 2-D layer, the device should spontaneously emit terahertz radiation, the researchers say. They report the design this week in the Journal of Applied Physics.

Compared with most other THz sources, which emit only a single frequency, the device could be tuned to emit different frequencies across the THz range, says Danhong Huang, a principal research physicist at the Air Force Research Lab in New Mexico. And while this is only a proposed design right now, Huang says that it should, in theory, be possible to make a THz emitter that is several millimeters to a few centimeters in size and emits milliwatts of power.

The 2-D layers could be sheets of any 2-D material, such as graphene or its more recently discovered cousins silicene and germanene. Graphene should be ideal because of its high conductivity, Huang says. The conductor, meanwhile, would be a semiconductor such as silicon or gallium arsenide that is doped to make it more conductive. The higher the doping, and hence the conductivity of the conductor, the higher the frequency of the output radiation. Using a metal conductor, for instance, would yield high-frequency infrared radiation.

The device’s underlying mechanism is surface plasmon resonance: the collective oscillation of conduction electrons. The DC field causes plasmon resonance at the thick conductor’s surface and at the interface between the two 2-D layers. The two plasmons couple together and cause an instability in the oscillations, which induces the emission of THz radiation. Terahertz waves range in frequency from 300 GHz to 3 THz, corresponding to wavelengths between 1 mm and 0.1 mm.

By adjusting parameters such as the density of conduction electrons in the material or the strength of the DC electric field, it should be possible to tune the frequency of the resulting terahertz radiation. The device should have a very wide tuning range, from the upper microwave region (microwave radiation spans 300 MHz to 300 GHz) up into the lower THz region, Huang says. Making a source that emits higher THz frequencies is challenging because doing so requires very high voltages that can cause the material to break down.
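For a rough sense of the scales involved (a back-of-the-envelope use of the textbook bulk plasma-frequency formula, not the authors’ coupled-plasmon calculation), the sketch below shows how carrier density in a GaAs-like doped conductor moves its plasma frequency across the terahertz range:

```python
import math

# Physical constants (SI units)
e = 1.602e-19      # electron charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e = 9.109e-31    # electron rest mass, kg

def plasma_freq_thz(n_cm3, eps_r, m_eff):
    """Bulk plasma frequency f_p = sqrt(n e^2 / (eps0 eps_r m)) / (2 pi), in THz."""
    n = n_cm3 * 1e6  # convert carriers/cm^3 to carriers/m^3
    omega = math.sqrt(n * e**2 / (eps0 * eps_r * m_eff * m_e))
    return omega / (2 * math.pi) / 1e12

# Illustrative GaAs-like parameters: relative permittivity ~12.9,
# electron effective mass ~0.067 m_e.  Doping density is the tunable knob.
for n_cm3 in (1e16, 1e17, 1e18):
    print(f"n = {n_cm3:.0e} cm^-3  ->  f_p ~ {plasma_freq_thz(n_cm3, 12.9, 0.067):.2f} THz")
```

Densities between about 10^16 and 10^18 carriers per cubic centimeter put the plasma frequency in the roughly 1-to-10-THz range, consistent with the doping-sets-frequency picture Huang describes.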

The group is now working with experimental researchers to design a prototype THz emitter. The challenge will be to find or develop the best materials: the optimal combination of 2-D layers and a compatible conductor substrate on which those layers can be grown or deposited, Huang says.

NASA Commissions Ultra High Temp Chips for Venus Landsailing Rover

If you’re going to absolutely insist on exploring the surface of Venus, there are two enormous problems that need to be dealt with. Problem number one is the enormous pressure, and problem number two is the enormous heat. At 90 atmospheres of pressure and just under 500 degrees Celsius at the surface, very little is going to survive down there for long. The best we’ve managed so far is about two hours in the case of Russia’s Venera 14.

For a Venus lander mission, active cooling of most of the electronics would be necessary, but a lander would also need sensors, actuators, and microcontrollers that can stand up to Venus’ surface conditions. Trying to keep this stuff from immediate “puddleification” isn’t easy, but NASA has just thrown a quarter of a million dollars at a University of Arkansas spinoff to develop Venus-resistant chips for a weird little rover.


Google Asks France Not to Require Global Right To Be Forgotten

Google has asked France’s data protection agency, CNIL, to retract an order to apply French right-to-be-forgotten rulings to all Google search results. Since a European court ruling last spring, Google has handled right-to-be-forgotten requests only in country-specific versions of its search results (see IEEE Spectrum’s story, “Google’s Year of Forgetting”). In a blog post last week, Google’s global privacy counsel, Peter Fleischer, wrote that the company’s representatives had asked CNIL “to withdraw” the June order.

European Union residents unhappy with search results for their name can ask search engine providers to remove links from the results by making the case that the links infringe on their privacy and the information is not in the public interest. A web slip-up by Google revealed last month that 95 percent of the requests so far have been by private citizens, not politicians and criminals, The Guardian reports. If the provider doesn’t grant such a request (almost 60 percent of the time for Google, which handles over nine in ten web searches in Europe), individuals can appeal to their country’s data protection authority for a definitive decision.

Yet last year’s court ruling only confirmed that national data protection agencies have the authority to rule in such cases. It did not specify the scope of such decisions. A comment in a February 2015 report by Google’s privacy advisory council hinted at the present conflict. Council member and German federal justice minister Sabine Leutheusser-Schnarrenberger wrote: “Since EU residents are able to research globally, the EU is authorized to decide that the search engine has to delete all the links globally.”

That, Fleischer wrote last week, could set a troubling precedent: “there are innumerable examples around the world where content that is declared illegal under the laws of one country, would be deemed legal in others: Thailand criminalizes some speech that is critical of its King, Turkey criminalizes some speech that is critical of Ataturk, and Russia outlaws some speech that is deemed to be ‘gay propaganda.’ ”

A CNIL representative said the agency would make a decision on Google’s request within two months, the BBC reports.
