Tech Talk

The number of students graduating with engineering degrees—and training in the latest technologies—is falling far short of the pace at which industry is creating new openings.

Germany's Engineering Gap Increasing Fast With Rise of New Technologies

Between 2016 and 2026, Germany will need 100,000 more engineers in electrotechnology, electronics, and computer technology than will graduate from that nation's universities and technical colleges, according to a report published by the German Association for Electrical, Electronic, and Information Technologies (VDE). The report bases this estimate on employment figures for the period from 2005 to 2013. During that time, the number of new engineering positions increased by an average of 10,500 per year, while unemployment for engineers remained low—less than 2.5 percent.

The study also reports that during this eight-year period, the number of engineers graduating in Germany was only enough to cover openings created by engineers leaving the profession, mainly through retirement. This indicates that the yearly increase in industry job openings has been filled by an influx of engineers from Southern Europe and other countries. In 2013, 10.6 percent of the 381,200 electrical engineers working in Germany were foreign nationals. Extrapolating from those statistics, the authors of the report arrived at the figure of 100,000 new engineering positions that will have to be filled in the next 10 years.
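That extrapolation can be checked with back-of-envelope arithmetic; the sketch below is illustrative and not VDE's actual methodology.

```python
# Back-of-envelope check of the extrapolation (illustrative, not VDE's
# actual methodology).
new_positions_per_year = 10_500   # average annual growth observed 2005-2013
years = 10                        # the forecast window, 2016-2026

projected_gap = new_positions_per_year * years
print(projected_gap)   # 105000 -- roughly the 100,000 openings cited
```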

The reason for this gap is industry’s increased reliance on new technologies. According to VDE, almost 90 percent of the new engineering positions are now in the areas of digital technologies, mobility, the smart grid, and information technology. But mechanical engineering and machine manufacturing are also involved. An example is the automotive components manufacturer Bosch, which is actively seeking to fill thousands of positions in these areas.

This changeover to new technologies is happening in competing countries as well, so Germany will have to find ways to encourage young people at home to study engineering rather than hope to attract them from elsewhere. Ansgar Hinz, chairman of the board of directors of VDE, in a press release (in German), blames what he calls a double gap: “...a lack of both engineering graduates from [technical] colleges and universities.”

A world map with red dots shows all known locations of Netflix servers, with the majority concentrated in the United States and Europe and few in Africa or Asia.

Researchers Map Locations of 4,669 Servers in Netflix’s Content Delivery Network

When you open Netflix and hit “play,” your computer sends a request to the video-streaming service to locate the movie you’d like to watch. The company responds with the name and location of the specific server that your device must access in order for you to view the film.

For the first time, researchers have taken advantage of this naming system to map the location and total number of servers across Netflix’s entire content delivery network, providing a rare glimpse into the guts of the world’s largest video-streaming service.

A group from Queen Mary University of London (QMUL) traced server names to identify 4,669 Netflix servers in 243 locations around the world. The majority of those servers still reside in the United States and Europe at a time when the company is eager to develop its international audience. The United States also leads the world in Netflix traffic, based on the group’s analysis of volumes handled by each server. Roughly eight times as many movies are watched there as in Mexico, which places second in Netflix traffic volume. The United Kingdom, Canada, and Brazil round out the top five.

The QMUL group presented its research to Netflix representatives earlier this year in a private symposium.

“I think it's a very well-executed study,” says Peter Pietzuch, a specialist in large-scale distributed systems at Imperial College London who was not involved in the research. “Netflix would probably never be willing to share this level of detail about their infrastructure, because obviously it's commercially sensitive.”

In March, Netflix did publish a blog post outlining the overall structure of its content delivery network, but did not share the total number of servers or server counts for specific sites.

Last January, Netflix announced that it would expand its video-streaming service to 190 countries, and IHS Markit recently predicted that the number of international Netflix subscribers could be greater than U.S. subscribers in as few as two years. Still, about 72 percent of Netflix customers were based in the United States as of 2014.

Steve Uhlig, a networks expert at Queen Mary University of London who led the mapping project, says repeating the analysis over time could track shifts in the company’s server deployment and traffic volumes as its customer base changes.

“The evolution will reveal more about the actual strategy they are following,” he says. “That's a bit of the frustrating part about having only the snapshot. You can make guesses about why they do things in a specific market, but it's just guesses.”

Netflix launched streaming service in 2007 and began to design its own content delivery network in 2011. Companies that push out huge amounts of online content have two options when it comes to building their delivery networks. They may choose to place tons of servers at Internet exchange points (IXPs), which are like regional highway intersections for online traffic. Or, they can forge agreements to deploy servers within the private networks of Internet service providers such as Time Warner, Verizon, AT&T, and Comcast so that they’re even closer to customers.

Traditionally, content delivery services have chosen one strategy or the other. Akamai, for example, hosts a lot of content with Internet service providers, while Google, Amazon, and Limelight prefer to store it at IXPs. However, Uhlig’s group found that Netflix uses both strategies, and varies the structure of its network significantly from country to country.

Timm Böttger, a doctoral student at QMUL who is a member of the research team, says he was surprised to find two Netflix servers located within Verizon’s U.S. network. Verizon and other service providers have argued with Netflix over whether they would allow Netflix to directly connect servers to their networks for free. In 2014, Comcast required Netflix to pay for access to its own network.

Tellingly, the group did not find any Netflix servers in Comcast’s U.S. network. As for the mysterious Verizon servers? “We think it is quite likely that this is a trial to consider broader future deployment,” Böttger says. Netflix did not respond to a request for comment.

To outline Netflix’s content delivery network, Uhlig and his group began by playing films from the Netflix library and studying the structure of server names that were returned from their requests. The researchers also used the Hola browser extension to request films from 753 IP addresses in different parts of the world in order to find even more server names than would otherwise be accessible from their London lab.

“We first tried to behave like the regular users, and just started watching random movies and took a look at the network packets that were exchanged,” says Böttger.

Their search revealed that Netflix’s server names follow a common structure: a string of numbers and letters that includes a location token based on traditional airport codes, such as lhr001 for London Heathrow, and a “counter,” such as c020, that indicates the number of servers at that location. A third element, written as .isp or .ix, shows whether the server is located within an Internet service provider or at an Internet exchange point.

Once they had figured out this naming scheme, the group built a crawler that could search for domain names sharing the common structure. The team supplied the crawler with a list of countries, airport codes, and Internet service providers compiled from publicly available information. After searching all possible combinations of those lists, the crawler returned 4,669 servers in 243 locations. (Though the study cites 233 locations, Böttger said in a follow-up email that 243 is the correct number.)
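The enumeration step can be sketched as follows. The airport codes, counter range, and hostname template here are hypothetical placeholders, not the actual lists or domain names the QMUL crawler used.

```python
from itertools import product

# Illustrative placeholders: the real crawler used compiled lists of countries,
# airport codes, and ISPs, plus Netflix's actual domain, none reproduced here.
AIRPORT_CODES = ["lhr", "jfk", "fra"]   # candidate server locations
DEPLOYMENT_TYPES = ["isp", "ix"]        # inside an ISP vs. at an exchange point
TEMPLATE = "c{num:03d}.{code}001.{kind}.example-cdn.net"  # hypothetical scheme

def candidate_hostnames(max_counter=3):
    """Enumerate every combination of location, deployment type, and counter."""
    for code, kind, num in product(AIRPORT_CODES, DEPLOYMENT_TYPES,
                                   range(1, max_counter + 1)):
        yield TEMPLATE.format(num=num, code=code, kind=kind)

names = list(candidate_hostnames())
print(len(names))   # 3 codes x 2 types x 3 counters = 18 candidate names
print(names[0])     # c001.lhr001.isp.example-cdn.net
```

In the real study, each candidate name would then be checked against DNS, keeping only the names that resolve to an actual server.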

To study traffic volumes, the researchers relied on a specific section of the IP header that keeps a running tally of data packets that a given server has handled. By issuing multiple requests to these servers and tracking how quickly the values rose, the team estimated how much traffic each server was processing at different times of the day. They tested the servers in 1-minute intervals over a period of 10 days.
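The counting trick can be sketched like this, assuming the running tally is the 16-bit IP identification field, which many server network stacks increment once per packet sent; the probe readings below are invented for illustration.

```python
# Sketch of the counting trick. Assumption: the "running tally" is the 16-bit
# IP identification field; the probe readings below are invented.
ID_MODULUS = 2 ** 16   # the IP ID field wraps around after 65,535

def packets_between(id_then, id_now):
    """Packets the server sent between two probes, tolerating one wraparound."""
    return (id_now - id_then) % ID_MODULUS

def rate_per_second(id_then, id_now, interval_s=60.0):
    """Approximate packets per second over one probing interval."""
    return packets_between(id_then, id_now) / interval_s

# Two probes taken one minute apart: the counter advanced by 54,000 packets.
print(rate_per_second(10_000, 64_000))   # 900.0 packets per second
# Here the counter wrapped past 65,535 between the two probes.
print(packets_between(60_000, 1_000))    # 6536 packets
```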

Their results showed that the structure and volume of data requested from Netflix’s content delivery network varies widely from country to country. In the United States, Netflix is largely delivered through IXPs, which house 2,583 servers—far more than the 625 found at Internet service providers.

Meanwhile, there are no Netflix servers at IXPs in Canada or Mexico. Customers in those countries are served exclusively by servers within Internet service providers, as well as possibly through IXPs along the U.S. borders. South America also relies largely on servers embedded within ISP networks—with the exception of Brazil, which has Netflix servers stashed at several IXPs.

The U.K. has more Netflix servers than any other European country, and most of those servers are deployed within Internet service providers. All French customers get their films streamed through servers stationed at a single IXP called France-IX. Eastern Europe, meanwhile, has no Netflix servers because those countries were only just added to the company’s network in January.

And the entire continent of Africa has only eight Netflix servers, all of which are deployed at IXPs in Johannesburg, South Africa. That’s only a few more than the four Netflix servers the team found on the tiny Pacific Ocean island of Guam, which is home to the U.S.-operated Andersen Air Force Base.

“It's kind of striking to see those differences across countries,” Pietzuch says. “[Netflix’s] recent announced expansion isn't really that visible when you only look at the evolution of their CDN structure.”

Before the group’s analysis, Uhlig expected to see servers deployed mostly through Internet service providers as a way to ease the traffic burden for service providers and get as close as possible to Netflix’s 83 million customers. He was surprised to see how heavily the company relies on IXPs, despite the company’s insistence that 90 percent of its traffic is delivered through ISPs.

“If you really want to say, ‘I really want to be close to the end users,’ you need to deploy more, and we didn't see that,” he says. “I think the centralized approach is convenient because you have more control and you can scale things up or down according to what the market tells you.”

Uhlig didn’t expect to find Mexico and Brazil so high on the traffic list, even though Netflix has tried to expand its Spanish- and Portuguese-language offerings.

In March, the company said it delivers about 125 million total hours of viewing to customers per day. The researchers learned that Netflix traffic seems to peak just before midnight local time, with a second peak for IXP servers occurring around 8 a.m., presumably as Netflix uploads new content to its servers.

Before 5G takes the stage, there will be a lot of improvement in 4G wireless data service.

Finnish Carrier Sets New World Record for 4G Download Speeds

Sure, everyone’s excited about 5G, the highly anticipated next generation of wireless networks expected to deliver data so fast it will make your head spin. But the improvements in speed and capacity that 4G networks achieve today will be far more relevant to the average customer for at least the next three years.

That’s why it’s good news that Elisa, a Finnish carrier, announced what it says is a new world record for 4G download speeds. The company used the latest 4G technology from Huawei to achieve a top speed of 1.9 gigabits per second last week in a Helsinki lab. Sami Komulainen, vice president of mobile network services, says he hopes to use this technology to offer customers a data package with 1 Gb/s download speeds within “a few years.”

Finland is a fitting place for carriers to push the limits of 4G network speeds and capacity. Finns consume more data per capita than any other nation, with the average active user on Elisa’s network devouring 12 gigabytes of data per month. This compares with 2 GB per person in other developed economies. To put this into perspective, Komulainen says Finland, with an estimated 3 million smartphone users, consumes about the same amount of data as India’s 220 million smartphone users.

The standard mobile phone plan in Finland comes with unlimited data, and carriers differentiate their services based on speed. Elisa’s network currently maxes out at 450 megabits per second; it sells data packages that offer speeds up to 300 Mb/s.

And the rest of the world could soon behave a lot more like the Finns do. Worldwide, carriers anticipate ever more demand for 4G service, long before 5G is expected to roll out in the early 2020s. Cisco estimates that global mobile data traffic will rise eightfold from 2015 to 2020.

With 5G on the (somewhat distant) horizon, some carriers have begun to speak of building a new “4.5G” network as they move beyond the speeds and capacity that have long defined 4G service.

The Elisa test relied on a suite of wireless strategies and technologies including five-carrier aggregation (a technique that combines five frequency channels, or component carriers, into a single data link), 4x4 MIMO (the use of four transmit and four receive antennas to send parallel data streams between the base station and a device), and 256 QAM (a modulation scheme that varies the amplitude and phase of radio signals to pack 8 bits into each transmitted symbol). It was the first time that this particular blend of strategies was used in combination with Huawei's latest LTE-Advanced Pro technology.
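A back-of-envelope calculation shows how these pieces multiply out to roughly the reported speed; the numbers below are textbook LTE figures assumed for illustration, since Elisa's and Huawei's exact configuration was not disclosed at this level of detail.

```python
# Textbook LTE numbers, assumed for illustration; the actual bandwidths and
# overhead in the Elisa/Huawei test were not disclosed at this level of detail.
subcarriers = 1200        # one 20-MHz LTE carrier: 100 resource blocks x 12
symbol_rate = 14_000      # 14 OFDM symbols per subcarrier per 1-ms subframe
bits_per_symbol = 8       # 256 QAM packs 8 bits into each symbol
mimo_layers = 4           # 4x4 MIMO: up to four parallel spatial streams
carriers = 5              # five-carrier aggregation

raw_bits_per_s = (subcarriers * symbol_rate * bits_per_symbol
                  * mimo_layers * carriers)
print(raw_bits_per_s / 1e9)        # 2.688 Gb/s of raw physical-layer capacity
print(raw_bits_per_s * 0.7 / 1e9)  # ~1.88 Gb/s after ~30% coding/control overhead
```

Discounting roughly 30 percent for channel coding and control signaling lands close to the 1.9 Gb/s that Elisa reported.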

Though Elisa may have posted the most impressive speeds to date, plenty of other carriers are running similar tests. In February, Australia-based Optus achieved peak 4G download speeds of 1.23 Gb/s and cited a “theoretical maximum” of 1.43 Gb/s based on its network and the Huawei technology in use.

“I think all of [the carriers] are in the 1 Gb/s range; Elisa's beyond [the rest] slightly, but I think they're all in a similar ballpark,” says Janette Stewart, an analyst who specializes in wireless and spectrum at Analysys Mason in London. “As soon as you've got one operator achieving that, then immediately you'll have others following.”

The new speed test doesn’t mean Elisa customers should expect lightning-fast downloads to begin tomorrow. Maximum speeds achieved in a lab under ideal conditions are not generally repeatable in a real network. Elisa ran its test on a base station serving a single terminal, or user. The researchers used five frequency bands (800, 900, 1800, 2100, and 2600 megahertz) to transmit signals, but in their actual network, some of those bands are reserved for 2G and 3G service.

However, Stewart expects that eventually, customers should see a difference if Huawei’s new technology is deployed across the network. “Not all users will get the peak speed, but the average speed that everybody gets should push up as a result of having this technology,” she says.

Though his immediate focus remains on improving 4G service to data-hungry Finns, Elisa’s Komulainen can’t resist thinking about what the company’s latest progress means for the future. “I think we’re going step by step toward 5G,” he says.

A new crowd-density simulator is designed to help authorities predict when a crush of people could turn deadly.

Real-Time Crowd Simulator Could Help Prevent Deadly Stampedes

As fatal crowd accidents in places like Shanghai and Mecca have shown, improved means of controlling crowd behavior and predicting dangerous congestion spots would be a boon to authorities such as police forces that are tasked with managing major public events. To this end, Mitsubishi Electric and the University of Tokyo’s Research Center for Advanced Science and Technology (RCAST) have jointly developed what they claim is the world’s first real-time crowd-congestion estimation system.

photograph of Tianhe-2 supercomputer

A Reality Check for the World’s Largest Radio Telescope

Construction of the world’s largest radio telescope, the Square Kilometer Array, or SKA, will begin in 2017. When the project is completed in 2023, its largest part will be an array of thousands of telescope dishes in the Murchison region of Australia, covering an area 200 kilometers wide. The array will produce radio images of the universe at the project’s higher radio frequencies, with an unprecedented angular resolving power corresponding to that of a single dish 200 km wide. A smaller area, 70 km across, will be populated with about 250,000 antennas covering a lower frequency range. A third, mid-frequency antenna park will be located in South Africa, near Cape Town.

Signals from all these antennas will be collected and integrated into a data stream that can then be further processed for use by the scientific community. Software and computing power are among the main challenges of the SKA project, says Andreas Wicenec, head of data-intensive astronomy at the International Centre for Radio Astronomy Research (ICRAR) in Perth, Australia. The computing power required will correspond to what the fastest computers available today can achieve, he adds. Two dedicated computers, one based in Cape Town and one in Perth, each with a speed of 150 petaflops, will harness the data stream of the SKA.

Recently, a part of the software under development ran on the world’s second fastest supercomputer, the Tianhe-2, located in the National Supercomputer Center in Guangzhou, China. “For the time being, we do mostly deployment scaling capability tests, rather than real computing tests. The reason that we are doing this so early is that deployment will demand the highest processing power of the SKA computers,” says Wicenec.

Fingers type on a laptop keyboard.

You’re Being Tracked (and Tracked and Tracked) on the Web

The number of third parties sending information to and receiving data from popular websites each time you visit them has increased dramatically in the past 20 years, which means that visitors to those sites may be more closely watched by major corporations and advertisers than ever before, according to a new analysis of Web tracking.

A team from the University of Washington reviewed two decades of third-party requests by using Internet Archive’s Wayback Machine. They found a four-fold increase in the number of requests logged on the average website from 1996 to 2016, and say that companies may be using these requests to more frequently track the behavior of individual users. They presented their findings at the USENIX Security Conference in Austin, Texas, earlier this month.

The authors—Adam Lerner and Anna Kornfeld Simpson, who are both PhD candidates, along with collaborators Tadayoshi Kohno and Franziska Roesner—found that popular websites made an average of four third-party requests in 2016, up from fewer than one in 1996. However, those figures likely underestimate the prevalence of such requests because of limitations of the data contained within the Wayback Machine. Roesner calls their findings “conservative.”

For comparison, a study by Princeton computer science researcher Arvind Narayanan and colleagues that was released in January looked at one million websites and found that top websites host an average of 25 to 30 third parties. Chris Jay Hoofnagle, a privacy and law scholar at UC Berkeley, says his own research has found that 36 of the 100 most popular sites send more than 150 requests each, with one site logging more than 300. The definition of a tracker or a third-party request, and the methods used to identify them, may also vary between analyses.

“It’s not so much that I would invest a lot of confidence in the idea that there were X number of trackers on any given site,” Hoofnagle says of the University of Washington team’s results. “Rather, it’s the trend that’s important.”

Most third-party tracking is done through cookies, which are snippets of information stored in a user’s browser. Those snippets enable users to log in automatically or add items to a virtual shopping cart, but they can also be recognized by a third party as the user navigates to other sites.

For example, a national news site might send a request to a local realtor to load an advertisement on its home page. Along with the ad, the realtor can send a cookie with a unique identifier for that user, and then read that cookie from the user’s browser when the user navigates to another site where the realtor also advertises.
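That round trip can be illustrated with a toy simulation; the site names and the single "realtor" tracker below are hypothetical stand-ins.

```python
import uuid

# Toy simulation of the example above; the site names and single "realtor"
# tracker are hypothetical.
class ThirdParty:
    """An ad server whose cookie follows one browser across many sites."""
    def __init__(self):
        self.history = {}   # cookie value -> first-party sites where it was seen

    def serve_ad(self, browser_cookies, first_party_site):
        # Reuse the visitor's existing cookie, or set a fresh unique identifier.
        visitor_id = browser_cookies.setdefault("tracker_id", uuid.uuid4().hex)
        self.history.setdefault(visitor_id, []).append(first_party_site)

realtor = ThirdParty()
cookies = {}                                  # one user's browser cookie jar
realtor.serve_ad(cookies, "news.example")     # ad on the news site's home page
realtor.serve_ad(cookies, "weather.example")  # same ad network on another site

# The third party now holds a cross-site browsing history for this visitor.
print(realtor.history[cookies["tracker_id"]])  # ['news.example', 'weather.example']
```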

In addition to following the evolution of third-party requests, the team also revealed the dominance of players such as Google Analytics, which was present on nearly one-third of the sites analyzed in the University of Washington study. In the early 2000s, no third party appeared on more than 10 percent of sites. And back then, only about 5 percent of sites sent five or more third-party requests. Today, nearly 40 percent do. But there’s good news, too: pop-up browser windows seem to have peaked in the mid-2000s.

Narayanan says he has noticed another trend in his own work: consolidation within the tracking industry, with only a few entities such as Facebook or Google’s DoubleClick advertising service appearing across a high percentage of sites. “Maybe the world we’re heading toward is that there’s a relatively small number of trackers that are present on a majority of sites, and then a long tail,” he says.

Many privacy experts consider Web tracking problematic because trackers can monitor a user’s behavior as they move from site to site. Combined with publicly available information from personal websites or social media profiles, this behavior can enable retailers or other entities to create identity profiles without a user’s permission.

“Because we don’t know what companies are doing on the server side with that information, for any entity that your browser talks to that you didn’t specifically ask it to talk to, you should be asking, ‘What are they doing?’” Roesner says.

But while every Web tracker requires a third-party request, not every third-party request is a tracker. Sites that use Google Analytics (including IEEE Spectrum) make third-party requests to monitor how content is being used. Other news sites send requests to Facebook so the social media site can display its “Like” button next to articles and permit users to comment with their accounts. That means it’s hard to tell from this study whether tracking itself has increased, or if the number of third-party requests has simply gone up.

Modern ad blockers can prevent sites from installing cookies and have become popular with users in recent years. Perhaps due in part to this shift, the authors also found that the behaviors third parties exhibit have become more sophisticated and wider in scope. For example, one new tactic avoids the use of cookies by recording a user’s device fingerprint: identifiable characteristics such as the screen size of their smartphone, laptop, or tablet.
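The fingerprinting idea can be sketched as hashing a handful of device characteristics into a single identifier. The attributes below are illustrative; real fingerprinting scripts gather many more characteristics, such as installed fonts and canvas rendering quirks.

```python
import hashlib

# Illustrative attributes; real fingerprinting scripts gather many more
# characteristics (installed fonts, canvas rendering quirks, and so on).
def device_fingerprint(attributes: dict) -> str:
    """Hash a set of device characteristics into one stable identifier."""
    # Sort the keys so the same device always produces the same fingerprint.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {"screen": "1920x1080", "timezone": "UTC-5", "platform": "laptop"}
fp = device_fingerprint(device)
print(fp)                                # same ID on every visit, no cookie needed
print(fp == device_fingerprint(device))  # True: deterministic across requests
```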

When they began their analysis, the University of Washington researchers were pleased to find that the Wayback Machine could be used to track cookies and device fingerprinting because it stores sites’ original JavaScript code, which allowed them to determine which JavaScript APIs were called on each website. A user who peruses the archived version of a site in the Wayback Machine winds up making all the same requests that the site was programmed to make at the time.

The researchers embedded their tool, which they call TrackingExcavator, in a Chrome browser extension and configured it to allow pop-ups and cookies. They instructed the tool to inspect the 500 most popular sites, as ranked by Amazon’s Web analytics subsidiary Alexa, for each year of the analysis. As it browsed the sites, the system recorded third-party requests and cookies, and the use of particular JavaScript APIs known to assist with device fingerprinting. The tool visited each site twice, once to “prime” the site and again to analyze whether requests were sent.

Until now, the team says academic researchers hadn’t found a way to study Web tracking as it existed before 2005. Hoofnagle of UC Berkeley says that using the Wayback Machine was a clever approach and could inspire other scholars to mine archival sites for other reasons. “I wish I had thought of this,” he says. “I’m totally kicking myself.”

Still, there are plenty of holes in the archive that limit its usefulness. For example, some sites prohibit automated bots such as those used by the Wayback Machine from perusing them. 

Ixion docked with ISS

NASA Funds Plan to Turn Used Rocket Fuel Tanks Into Space Habitats

NASA is very good about being on the cutting edge of space exploration, but it's less good about making non-cutting edge space exploration efficient and cost effective. The agency is acutely aware of this, which is why it's been trying to get commercial carriers to handle deliveries of (now) supplies and (soon) astronauts to the ISS.

The next step is for private companies to take over space station construction for (soon) Earth orbit and (eventually) deep space. To that end, NASA has selected six partner companies to develop full-sized ground prototypes and concepts for deep space habitats, with the eventual goal of deploying habitats near the moon as a stepping stone to Mars.

Five of the partners—Bigelow Aerospace, Boeing, Lockheed Martin, Orbital ATK, and Sierra Nevada—will be designing habitats that are built on Earth and launched into space on rockets. It makes sense to do this, because it's how habitats have always been sent into space. The sixth partner, NanoRacks, is teaming up with Space Systems Loral and United Launch Alliance to try something completely different: taking empty fuel tanks from the upper stages of rockets and turning them into space habitats on orbit.

A room-temperature fluorescent protein polariton laser in action

Glowing Protein Warms Up Low-Power Laser

Mother Nature probably wasn’t thinking about lasers when she invented the jellyfish, but it turns out the sea creature evolved a substance well suited to letting a new type of laser work at room temperature.

A woman holds a smartphone and checks apps on its homescreen.

Are Your Apps Sluggish? Blame Summer

During late August here in the United States, it can start to feel like everything is moving a little bit slower. In the case of your apps, that may actually be the case.

Earlier this year, a San Francisco company called Apteligent released a report, based on internal data, suggesting that app performance slows by 15 percent in the summer. The report identifies humidity as the culprit, though the company can't say for sure from its data why the extra delay occurs.

However, it’s a reasonable guess that moisture in the air is the guilty party in light of research showing that radio signals attenuate in humidity as well as in rain, sleet, and snow. As radio waves travel through humid air, water molecules absorb part of their energy and scatter another portion, weakening the signal or causing data packets to be lost altogether, says Milda Tamošiūnaitė of the Center for Physical Sciences and Technology in Vilnius, Lithuania.

This effect is particularly bad at frequencies above 1 gigahertz, which are used for LTE service: those shorter wavelengths have more opportunities to encounter obstacles as they travel, and the obstacles they meet are larger relative to the size of the wave than those encountered by waves at lower frequencies.

Phone calls and data connections are both attenuated by rain, humidity, and other forms of airborne water—though phone calls are sometimes handled at lower frequencies, so they may be slightly less impacted than Web browsing. Previous research has often focused on the effect of rain on radio signals, but the specific role of humidity has been studied less.

The informal study by Apteligent hints at humidity’s potential impact on app performance, though it can’t be considered definitive. The company monitors tens of thousands of apps for clients including Hilton, Groupon, Netflix, and Pokémon Go. Its clients embed a bit of special code into their apps, and the code allows Apteligent to track what users are doing, how much data they are sending and receiving, whether they experience any delays, and what might be causing those delays.

“If it's a smartphone in the U.S. that has apps, the odds are very high that we're embedded in one of those,” says Andrew Levy, Apteligent’s co-founder and chief strategy officer.  

To examine the possible role of humidity in app performance, the company compared the average latency across its entire U.S. network of smartphone apps during the summer of 2015 with its performance the following winter. They found that service was about 15 percent slower in the summer than in the winter. Their theory is that humidity caused the bulk of this impact.

Ultimately, the average delay only worsened by about 60 milliseconds—a period of time that customers aren’t likely to notice. For comparison, Tamošiūnaitė says light rain could attenuate a 2-GHz signal by 15 percent over a distance of about 3,000 kilometers, or over 128 kilometers during heavy rain. (Note that this example assumes rain is the only factor causing signal degradation, which is never the case in real life.)
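Converted to per-kilometer terms, those figures work out as follows; this is a rough check that assumes decibel losses accumulate linearly with distance.

```python
import math

# Rough conversion, assuming decibel losses accumulate linearly with distance.
def fraction_lost_to_db(loss_fraction):
    """A 15% power loss means 85% of the signal survives: -10*log10(0.85)."""
    return -10 * math.log10(1 - loss_fraction)

total_db = fraction_lost_to_db(0.15)
print(round(total_db, 2))      # ~0.71 dB lost in total
print(total_db / 3000)         # light rain: roughly 0.00024 dB per kilometer
print(total_db / 128)          # heavy rain: roughly 0.0055 dB per kilometer
```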

So what does all of this mean for developers? Paulo Michels, VP of engineering for app development company ArcTouch, says it won’t change his approach very much. He and his team of 60 software engineers, who have developed roughly 300 apps, aren’t focused on weather more than any other factor when building a new app. They already use common strategies such as compressing JPEGs, pre-processing videos to allow them to stream at multiple potential qualities based on a user’s network, and caching content on phones in order to avoid delays.

“The network, of course, plays a big effect on overall app performance, but as mobile developers, we're used to considering the network as something unreliable and unpredictable,” he says.

Eric Richardson, senior software engineer for WillowTree, who has worked on more than 35 Android apps, says 60 milliseconds is no more than “the blink of an eye” and designing to account for peculiar weather conditions is not a major priority beyond the measures that developers already take for poor network connections.

But he also says the Apteligent report might mean that developers should start to make an effort to test their apps in humid conditions as well as on dry days. Right now, his company relies primarily on simulated 3G and 4G networks running on Wi-Fi to evaluate their apps, as well as some beta testing in the real world.

“Up until now, I don't think weather has ever been on our minds,” he says. “But now that it is, I guess it kind of brings in the perspective to do more realistic testing as opposed to just sitting in the office connected to Wi-Fi.”

Image: Craig Mayhew and Robert Simmon/GSFC/NASA

Fighting Poverty With Satellite Images and Machine-Learning Wizardry

Governments and NGOs need economic data to decide how best to aid the poor. But reliable, up-to-date data on poverty levels is hard to come by in the world’s poorest countries.

Scientists have now devised an inexpensive technique that combines satellite images and machine learning to accurately predict poverty levels at the village level. Such a fine-grained gauge of poverty could help aid programs target those with the greatest needs. It could also be a valuable tool for researchers and policymakers to gather national statistics and set development goals.

Governments typically conduct surveys of income and consumption to measure poverty levels. These surveys cost hundreds of millions of dollars and are impossible to conduct in areas of conflict. World Bank data show that between 2000 and 2010, 39 out of 59 African countries conducted fewer than two surveys that were extensive enough to measure poverty.

Researchers have recently tried to estimate poverty levels by analyzing mobile phone usage data and satellite photos showing nighttime lighting. But mobile phone data are typically not publicly available. Nighttime lights, meanwhile, indicate wealthier regions, but they cannot differentiate among economic levels in the most impoverished regions. “In the poorest areas in Africa, the ones we care the most about, it’s almost uniformly dark at night,” says Neal Jean, an electrical engineering and computer science Ph.D. student at Stanford University.

Jean, earth system science professor Marshall Burke, and their colleagues came up with a clever machine-learning method that combines nighttime light intensity data with daytime satellite imagery. The technique, reported in the journal Science, is general and could be applied to any developing country, Jean says. 

In machine learning, a computer model is fed labeled data sets—say, thousands of images labeled “dog” or “cat.” Much like humans learn by inference after seeing enough examples, the model analyzes certain features in the images and figures out how to classify an animal in a picture as a dog or cat.

The researchers trained their machine-learning algorithm with millions of daytime satellite images, each labeled with a number that corresponded to how bright the area was at night. Daytime images, which contain features that indicate livelihoods, such as paved roads, metal roofs, and farmland, can help distinguish poor regions from ultrapoor ones. “The model looks for visual cues and automatically learns to find features in daytime imagery that correspond to nighttime light values,” Jean says.
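The training setup can be caricatured with synthetic data: stand-in "daytime features" are labeled with nighttime light values, and a model learns the mapping. The random data and linear fit below are illustrative only; the actual study used convolutional neural networks on real satellite imagery.

```python
import numpy as np

# Synthetic stand-in data: real inputs would be daytime satellite image
# features, and the model a convolutional network, not a linear fit.
rng = np.random.default_rng(0)

n_samples, n_features = 500, 8
true_weights = rng.normal(size=n_features)    # hidden "ground truth" relationship
X = rng.normal(size=(n_samples, n_features))  # stand-in daytime image features
nightlights = X @ true_weights + 0.1 * rng.normal(size=n_samples)

# Least-squares fit: learn to predict nighttime light values from daytime features.
learned, *_ = np.linalg.lstsq(X, nightlights, rcond=None)

# With modest noise, the learned weights recover the hidden relationship.
print(np.allclose(learned, true_weights, atol=0.05))  # True
```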


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.

