Tech Talk

5 Things You Missed: Data Storage Obsolescence, X-Rays Map Out Chips, and More

1. The Lost Picture Show: Hollywood Archivists Can’t Outpace Obsolescence

These days, the major studios and film archives largely rely on a magnetic tape storage technology known as LTO, or linear tape-open, to preserve motion pictures. The bad news: Because an older generation of LTO becomes obsolete when a new generation is introduced, film archivists with perhaps tens of thousands of LTO tapes on hand must invest millions of dollars in the latest format of tapes and drives every few years, and then migrate all the data on their older tapes—or risk losing access to the information altogether.


2. 3D X-ray Tech for Easy Reverse Engineering of ICs

A team of researchers based in Switzerland is on the way to laying bare much of the secret technology inside commercial processors. They pointed a beam of X-rays at a piece of an Intel processor and were able to reconstruct the chip’s warren of transistors and wiring in three dimensions. In the future, the team says, this imaging technique could be extended to create high-resolution, large-scale images of the interiors of chips.


3. DARPA to Use Electrical Stimulation to Enhance Military Training

The U.S. Department of Defense (DoD) wants to shorten the time it takes to train people in skills including speaking foreign languages, analyzing surveillance images, and marksmanship. It plans to use electrical stimulation to enhance the brain’s ability to learn. The Defense Department’s research arm, the Defense Advanced Research Projects Agency, or DARPA, wants to see a 30 percent improvement in learning rates by the end of the four-year program. 


4. How to Build Your Own Amazon Echo

Amazon has released programming interfaces for Alexa, the company’s intelligent personal assistant, and uploaded free source code and tutorials to GitHub. Now, anyone can use these to make their own homebrew version of Echo, the smart speaker that was the hot holiday gift of 2016.


5. Robotic Construction Platform Creates Large Buildings on Demand

The MIT Media Lab’s paper introduces the Digital Construction Platform (DCP), which is “an automated construction system capable of customized on-site fabrication of architectural-scale structures.” In other words, it’s a robot arm that uses additive construction techniques to build large structures safely, quickly, and even (in some cases) renewably.

Will the reality of 5G live up to the hype?

5G Progress, Realities Set in at Brooklyn 5G Summit

5G technologies are early in their development, and the business cases for them are a bit fuzzy, but wireless researchers and executives still had plenty to celebrate this week at the annual Brooklyn 5G Summit. They’ve made steady progress on defining future 5G networks, and have sped up the schedule for the first phase of standards-based 5G deployments.

Now, the world is just three years away (or two, depending on who you ask) from its first 5G commercial service. Amid the jubilance, reality is also starting to set in.

While attendees can agree that 5G networks will incorporate many new technologies—including millimeter waves, massive MIMO, small cells, and beamforming—no one knows how all of it will work together, or what customers will do with the resulting flood of data.


Four Ways to Tackle H-1B Visa Reform

Update 19 April 2017: Yesterday, U.S. President Donald Trump signed an executive order instructing government agencies to suggest reforms to the H-1B visa program. Analysts say that real reform will require Congressional action. In February, IEEE Spectrum interviewed experts about what Congress could do. The original article follows:

U.S. tech companies love the H-1B visa program. The temporary visa is meant to allow them to bring high-skill foreign workers to fill jobs for which there aren’t enough skilled American workers.

But the program isn’t working. Originally intended to bring the best global talent to fill U.S. labor shortages, it has become a pipeline for a few big companies to hire cheap labor.

Giants like Amazon, Apple, Google, Intel, and Microsoft were all among the top 20 H-1B employers in 2014, according to Ron Hira, a professor of political science at Howard University who has testified before Congress on high-skill immigration. The other fifteen—which include IBM but also consulting firms such as Tata Consultancy Services, Wipro, and Infosys—used the visa program mainly for outsourcing jobs.

Typically, U.S. companies like Disney, FedEx, and Cisco will contract with consulting firms. American workers end up training their foreign counterparts, only to have the U.S. firm replace the American trainers with the H-1B-visa-holding trainees—who’ll work for below-market wages.

Problems with this setup abound. First, talk of a tech labor shortage in the United States might be overblown. Then there’s the issue of quality: More than half of the H-1B workers at the vast majority of the top H-1B employers have bachelor’s degrees but not advanced degrees. Hira argues that in many cases, such as those of Disney and Northeast Utilities, the jettisoned American workers were obviously more skilled and knowledgeable than the people who filled those positions, given that they trained their H-1B replacements.

Plus, the H-1B is a guest-worker program in which the employer holds the visa and isn’t required to sponsor the worker for legal permanent residency in the United States. So if the worker loses the job, he or she is legally bound to return to the country of origin. This gives the employer tremendous leverage, and can lead to abuse.

“It’s a lose-lose right now for the country and H-1B workers,” says Vivek Wadhwa, distinguished fellow and professor at Carnegie Mellon University Engineering at Silicon Valley.

The Holoplot audio system, a large array of black speakers with hundreds of depressions of different sizes, on display at CeBIT, an annual trade show in Hanover, Germany.

Berlin Startup Holoplot Tests Steerable Sound in German Train Stations

A Berlin startup named Holoplot has built a premium audio system that it says can send one song or announcement to one corner of a room, and an entirely different message or tune to another area of the same room—without any interference between the two.

Holoplot is testing its technology in major train stations throughout Germany, where it says the system can send up to 16 messages to separate gates at once, all at the same frequencies. It ran its first pilot at Frankfurt Hauptbahnhof, Germany’s largest train station, in December.
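Holoplot hasn’t published the details of how its system steers sound, but directing audio from a speaker array toward one zone of a room is classically done with beamforming. The sketch below shows the simplest variant, delay-and-sum over a linear array; the array geometry and parameters are illustrative assumptions, not Holoplot’s design.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def delay_and_sum_delays(num_speakers, spacing_m, steer_angle_deg):
    """Per-speaker time delays (seconds) that steer a linear array's
    main lobe toward steer_angle_deg off broadside. Each successive
    speaker fires slightly later so the wavefronts add coherently
    only in the target direction."""
    angle = math.radians(steer_angle_deg)
    raw = [i * spacing_m * math.sin(angle) / SPEED_OF_SOUND
           for i in range(num_speakers)]
    base = min(raw)  # shift so the smallest delay is zero
    return [d - base for d in raw]


# Hypothetical array: 8 speakers, 10 cm apart, steered 30 degrees off axis.
delays = delay_and_sum_delays(8, 0.10, 30.0)
```

With many independently driven elements, as in Holoplot’s wall of speakers, several such beams at different angles can be superimposed, which is one way an array could deliver different messages to different gates on the same frequencies.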


AI Learns Gender and Racial Biases From Language

Artificial intelligence does not automatically rise above human biases regarding gender and race. On the contrary, machine learning algorithms that represent the cutting edge of AI in many online services and apps may readily mimic the biases encoded in their training datasets. A new study has shown how AI learning from existing English language texts will exhibit the same human biases found in those texts.

The results have huge implications given machine learning AI's popularity among Silicon Valley tech giants and many companies worldwide. Psychologists previously showed how unconscious biases can emerge during word association experiments known as implicit association tests. In the new study, computer scientists replicated many of those biases while training an off-the-shelf machine learning AI on a "Common Crawl" body of text—2.2 million different words—collected from the Internet.
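The core measurement behind such studies is straightforward: compare a word vector’s average cosine similarity to one set of attribute words against its similarity to a contrasting set. Here is a toy sketch of that association score, with made-up two-dimensional vectors standing in for real embeddings (which have hundreds of dimensions); the vectors and word groupings are hypothetical.

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def association(word_vec, attr_a, attr_b):
    """Mean cosine similarity to attribute set A minus mean similarity
    to attribute set B, in the spirit of the implicit association test."""
    mean_a = sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b


# Toy "embeddings": pleasant words cluster on one side, unpleasant on the other.
pleasant = [[1.0, 0.1], [0.9, 0.2]]
unpleasant = [[-1.0, 0.1], [-0.9, 0.2]]
flower = [0.8, 0.3]   # hypothetical vector near the pleasant cluster
insect = [-0.7, 0.4]  # hypothetical vector near the unpleasant cluster

print(association(flower, pleasant, unpleasant) > 0)  # True
print(association(insect, pleasant, unpleasant) < 0)  # True
```

In the real study, a word like a name or occupation showing a consistently higher association with one gender’s or race’s attribute words is the bias signal.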


We Know What You're Watching (Even If It's Encrypted)

I stand firm in the opinion that it’s my basic, human right to binge-watch six hours of trashy detective shows on a Friday night with a silent phone in my lap and a glass of wine in my hand. I would also argue it’s my right to do so shamefully and in private, divulging the secret of my wasted weekends to no one but Netflix.

Netflix, it seems, would agree with me. The company has been protecting video streams with HTTPS encryption since the summer of 2016. But new research indicates that this strategy is not sufficient to keep third-party service providers and motivated attackers from getting a peek at what I’m watching.

Two recent papers, one from the U.S. Military Academy at West Point, and one by a collection of authors at Tel Aviv University and Cornell Tech, lay out methods for identifying videos by performing straightforward traffic analysis on encrypted data streams. One approach opens the door for snooping by any party that has direct access to the network on which a user is watching videos, such as an ISP or a VPN provider. The other could be used by any attacker who is able to deliver malicious JavaScript code to the user’s browser. Both approaches inspect the size of data bursts being transferred across the user’s network in order to fingerprint individual videos and compare them to a database of known, previously characterized content.
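The papers use more sophisticated matching than this, but the underlying idea can be sketched as a nearest-neighbor search over burst-size traces. Encryption hides the content of each segment but not its size, so a sequence of observed burst sizes acts as a fingerprint. The titles, trace values, and L1 distance metric below are all illustrative assumptions.

```python
def l1_distance(a, b):
    """Sum of absolute differences between two equal-length burst traces."""
    return sum(abs(x - y) for x, y in zip(a, b))


def identify(observed, database):
    """Return the title whose stored burst-size trace (one value per
    video segment) is closest to the observed trace."""
    return min(database, key=lambda title: l1_distance(observed, database[title]))


# Hypothetical per-segment burst sizes, in kilobytes. Variable-bitrate
# encoding makes each video's sequence of segment sizes distinctive.
database = {
    "show_a": [512, 480, 650, 300, 720],
    "show_b": [300, 310, 295, 305, 290],
}
observed = [510, 478, 655, 298, 722]  # noisy observation of show_a
print(identify(observed, database))  # show_a
```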


Open-Source Clues to Google's Mysterious Fuchsia OS

This is a guest post. The views expressed in this article are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

It’s not often that one of the world’s leading software companies decides to develop a major new operating system. Yet in February 2016, Google began publishing code for a mysterious new platform, known as Fuchsia.

Google has officially said very little about Fuchsia, and the company did not respond to my request for comment. But since it’s being developed as an open-source project, its source code is entirely in the open for anyone to view. Indeed, anyone can download Fuchsia right now and try to run it.

Many people wrote about Fuchsia when it was first spotted last year. They raised the obvious question of whether it meant that Google would be moving away from Linux within Android.

Since then, I have been periodically looking at the source code for signs of the company’s plans for its new operating system. According to Google, Fuchsia is designed to scale from small Internet of Things devices to modern smartphones and PCs.

Google, of course, already has two consumer operating systems. Within the tech industry, there is a well-known conflict between Google’s Android and Chrome OS. Android, which is primarily used in smartphones and tablets, is the most popular operating system in the world by device shipments and Internet usage and has a thriving native app ecosystem. Meanwhile, Chrome OS, which was designed for PCs, is much more secure than Android and provides a simplified computing environment that’s well suited for the education market.

While Google executives have denied that the two platforms would ever merge, there has been much internal debate over the years about how best to unify Google’s software efforts. Meanwhile, many consumers want Android as a PC platform, due to its greater capabilities and ample software offerings compared with those of Chrome OS.

In my eyes, Fuchsia is Google’s attempt to build a new operating system that advances the state of the art for consumer platforms and corrects many of the long-standing shortcomings of Android. The engineering goals of the project appear to include a more secure design, better performance, enabling timely updates, and a friendlier and more flexible developer API (application programming interface).

Google's Tensor Processing Unit board

Google Details Tensor Chip Powers

In January’s special Top Tech 2017 issue, I wrote about various efforts to produce custom hardware tailored for performing deep-learning calculations. Prime among those is Google’s Tensor Processing Unit, or TPU, which Google has deployed in its data centers since early in 2015.

In that article, I speculated that the TPU was likely designed for performing what are called “inference” calculations. That is, it’s designed to quickly and efficiently calculate whatever it is that the neural network it’s running was created to do. But that neural network would also have to be “trained,” meaning that its many parameters would be tuned to carry out the desired task. Training a neural network normally takes a different set of computational skills: In particular, training often requires the use of higher-precision arithmetic than does inference.

Yesterday, Google released a fairly detailed description of the TPU and its performance relative to CPUs and GPUs. I was happy to see that the surmise I had made in January was correct: The TPU is built for doing inference, having hardware that operates on 8-bit integers rather than higher-precision floating-point numbers.
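Running a network trained in floating point on 8-bit hardware means quantizing its weights. One common approach, shown in this minimal sketch, is symmetric linear quantization: scale the weights so the largest magnitude maps to ±127, then round to integers. This illustrates the general technique, not necessarily the TPU’s exact scheme.

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.
    Returns (quantized values, scale) such that w ≈ q * scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0  # map the largest magnitude to +/-127
    q = [round(w / scale) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [x * scale for x in q]


weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

The payoff is that 8-bit multiply-accumulate units are far smaller and more power-efficient than floating-point ones, which is much of why an inference-only chip can beat general-purpose CPUs and GPUs on performance per watt.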

Yesterday afternoon, David Patterson, an emeritus professor of computer science at the University of California, Berkeley, and one of the co-authors of the report, presented these findings at a regional seminar of the National Academy of Engineering, held at the Computer History Museum in Mountain View, Calif. The abstract for his talk summed up the main point nicely. It reads in part: “The TPU is an order of magnitude faster than contemporary CPUs and GPUs and its relative performance per watt is even larger.”

Google’s blog post about the release of the report shows how much of a difference in relative performance there can be, particularly in regard to energy efficiency. For example, compared with a contemporary GPU, the TPU is said to offer 83 times the performance per watt. That might be something of an exaggeration, because the report itself claims only that there’s a range of between 41 times and 83 times. And that’s for a quantity the authors call incremental performance. The range of improvement for total performance is less: from 14 to 16 times better for the TPU compared with that of a GPU.

The benchmark tests used to reach these conclusions are based on a half dozen of the actual kinds of neural-network programs that people are running at Google data centers. So it’s unlikely that anyone would critique these results on the basis of the tests not reflecting real-world circumstances. But it struck me that a different critique might well be in order.

The problem is this: These researchers are comparing their 8-bit TPU with higher-precision GPUs and CPUs, which are just not well suited to inference calculations. The GPU exemplar Google used in its report is Nvidia’s K80 board, which performs both single-precision (32-bit) and double-precision (64-bit) calculations. While they’re often important for training neural networks, such levels of precision aren’t typically needed for inference.

In my January story, I noted that Nvidia’s newer Pascal family of GPUs can perform “half-precision” (16-bit) operations and speculated that the company may soon produce units fully capable of 8-bit operations, in which case they might be much more efficient when carrying out inference calculations for neural-network programs.

The report’s authors anticipated such a criticism in the final section of their paper; there they considered the assertion (which they label a fallacy) that “CPU and GPU results would be comparable to the TPU if we used them more efficiently or compared to newer versions.” In discussing this point, they say they had tested only one CPU that could support 8-bit calculations, and the TPU was 3.5 times better. But they don’t really address the question of how GPUs tailored for 8-bit calculations would fare—an important question if such GPUs soon become widely available.

Should that come to pass, I hope that these Googlers will re-run their benchmarks and let us know how TPUs and 8-bit-capable GPUs compare.

Fidelis Cybersecurity uncovered the Scanbox script in advance of Chinese President Xi’s visit to the U.S.

Suspected Chinese Malware Found On U.S. Trade Group Website

A U.S. cybersecurity company has uncovered a malicious script on the website of the National Foreign Trade Council, a public policy and lobbying organization devoted to U.S. trade policy. And John Bambenek, threat intelligence manager for Fidelis Cybersecurity, whose team found the script, says he is “highly confident” the script was placed there by Chinese state-sponsored actors.

The script is a tool known as Scanbox. It has, to date, been used only by groups widely known to be affiliated with the Chinese government. “There's no evidence that anybody else has commandeered or used [Scanbox],” Bambenek says.

The script provides information about a victim's operating system, IP address, and software programs, which attackers can later use in targeted phishing campaigns. For example, if attackers learn that someone is using a browser with known software holes, they may target that person with an exploit that the hackers know will work for the user’s particular version.

Fidelis believes this particular operation, which was observed between 27 February and 1 March, was conducted as espionage in preparation for Chinese President Xi Jinping's meeting with U.S. President Trump today and Friday. Bambenek believes the tool was being used to collect intelligence about trade policy rather than to steal trade secrets from U.S. companies.

Hidden within the National Foreign Trade Council’s site, the Scanbox script ran whenever a visitor navigated to a page with a registration form for an upcoming Board of Directors meeting. That means the script, which has been removed, likely targeted board members, many of whom are also from major U.S. companies.

Bambenek calls Scanbox “a fairly lightweight tool” that is primarily used for gathering information. Chinese groups have relied on it for reconnaissance since at least 2014. Once a victim closes the tab or browser in which Scanbox is operating, they are no longer affected.

Fidelis was alerted to the script when cybersecurity programs it had developed were automatically triggered by software that appeared to be Scanbox. Fidelis says it has shared the information about Scanbox with the Federal Bureau of Investigation.   

Mike Buratowski, vice president of cybersecurity services with Fidelis, says nonprofits and think tanks are increasingly targeted by state-sponsored attackers because they have access to privileged information and are in touch with government agencies.

“The reality is that almost every government in the world has think tanks and policy organizations, and all of these are really the soft targets of government,” Bambenek says.

A user holds up a new Samsung Galaxy S8 to demonstrate the smartphone's iris-scanning preview screen, which shows two circles that users can rely on as a guide to position their eyes.

The Company Behind the Samsung Galaxy S8 Iris Scanner

Last week, Samsung revealed its new smartphone, the Samsung Galaxy S8, which users can unlock with a quick glance. Since the big debut, we’ve learned that the iris scanner in the S8 comes from a little-known biometric security company in New Jersey called Princeton Identity.

CEO Mark Clifton says the company’s technology can produce an accurate scan in varying light conditions from arm’s length, even if the user isn’t standing completely still. Those features persuaded Samsung that iris scanners, which are already common in building security systems, were ready to be integrated into its popular line of smartphones.

“They became convinced that we were the real deal when we were able to show them iris recognition working outdoors in a sunny parking lot, when none of the other competitors could do that,” Clifton says.

Adding an iris scanner to a smartphone is a big decision, because it requires extra hardware and modifications to the body of the phone. Clifton estimates the total cost of adding this form of biometric security works out to be less than $5 per handset. That’s still a lot of money for an industry in which any manufacturer can build a smartphone, but few can do it profitably.  

If you look closely at the S8, there are three dots and one long dash right above the screen. The middle dot is the selfie camera and the thin slit is the proximity sensor, neither of which plays a role in iris scanning.

The dot on the far left, however, is an LED that produces near-infrared light. And the dot on the far right is a camera equipped with a special filter that blocks most visible light but allows infrared waves to pass through.

To produce a scan, the LED emits infrared waves that penetrate just below the surface layer of the iris (the colored part of the eye) and reflect back to the infrared camera. This camera can then produce a high-contrast scan of the iris based on those reflections of infrared light from the eye. The proprietary piece of Princeton’s technology is the pattern of the pulse, or strobe, of the LED that produces the infrared light, and the design of the filter that blocks out visible light and yields the high-contrast scan.  

A user’s first scan captures about 250 points of reference from the iris, the part of the eye that includes a pair of muscles that dilate and constrict the pupil to let more or less light in. This compares favorably with the 20 to 70 points that a fingerprint sensor gathers. An iris scan may show the contours of muscles, the patterns of blood vessels, or other artifacts, such as strands or folds of tissue, within the iris. 

All of the information about those reference points is stored in a template in the phone’s “trust zone,” a specialized area of hardware where sensitive data is encrypted. When a user wants to unlock their phone, software compares the iris pattern in the latest scan to the pattern in the original template.   
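Princeton Identity’s matching algorithm is proprietary, but iris systems described in the research literature typically encode the iris as a bit string (an “iris code”) and compare two scans by the fraction of bits that differ. The sketch below works under that assumption; the ten-bit codes and the 0.32 threshold, a commonly cited operating point in the iris-recognition literature, are illustrative.

```python
def hamming_fraction(template, probe):
    """Fraction of bits that differ between two equal-length iris codes."""
    assert len(template) == len(probe)
    diffs = sum(t != p for t, p in zip(template, probe))
    return diffs / len(template)


def matches(template, probe, threshold=0.32):
    """Accept if the probe code differs from the enrolled template in
    fewer than `threshold` of its bits."""
    return hamming_fraction(template, probe) < threshold


enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stored at enrollment
same_eye = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # fresh scan, one noisy bit
other_eye = [0, 1, 0, 1, 1, 0, 0, 1, 0, 0]  # a different person's eye

print(matches(enrolled, same_eye))   # True  (0.1 of bits differ)
print(matches(enrolled, other_eye))  # False (0.8 of bits differ)
```

Real iris codes run to thousands of bits, which is what makes the statistical separation between same-eye and different-eye comparisons so sharp.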

Many of the elements within the iris are shaped during early development as well as by genetics, so even identical twins would have unique templates. Princeton recommends that people who wear glasses take them off for their original scan, but Clifton says the scanner should generally work even with glasses on.

Dr. Kevin Miller, a corneal surgeon who performs artificial iris transplants at the UCLA Stein Eye Institute, points out that the muscle contours of the iris change considerably based on lighting conditions and pupil dilation. And there are other factors that could produce errors in an iris scan over the course of a person’s lifetime.

“What happens if you're scanning somebody with diabetes and they have a little hemorrhage in the eye? Now that hemorrhage shows up on the scan and it's not going to recognize them,” he says. “There's issues like that with all these biometric methods.”

A user can create a new scan of their iris at any time. And the template that’s stored in the trust zone is a digital representation of the contrast points on their iris, rather than an actual image of the iris. Storing the image itself would create another security problem because, unlike passwords or credit card numbers, a person’s iris pattern can’t be revoked or updated.

Clifton says that with the company's technology, the chances of producing a false positive are about 1 in 1.1 million for a scan of a single eye and 1 in 1.4 trillion for a scan of both eyes. "You do approach DNA-level type of accuracies with a dual-eye recognition,” Clifton says.
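Those two figures are roughly consistent with treating the two eyes as statistically independent: a dual-eye false positive would then require both single-eye checks to fail at once, so the rates multiply. A quick back-of-the-envelope check:

```python
single_eye_fp = 1 / 1.1e6  # quoted false-positive rate for one eye

# Assuming the two eyes are independent, a dual-eye false positive
# requires both scans to misidentify simultaneously.
dual_eye_fp = single_eye_fp ** 2

print(f"1 in {1 / dual_eye_fp:.2e}")  # 1 in 1.21e+12
```

That independence assumption gives about 1 in 1.2 trillion, close to the company's quoted 1 in 1.4 trillion.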

The company says it has also incorporated “liveness detection” into the scanner so that it can’t be fooled by a photograph—a common problem for facial recognition technology—though Clifton wouldn’t say much about how this feature works.

Samsung actually debuted Princeton’s iris scanners in the Galaxy Note7, which had a brief run of sales in 2016 before a mass recall. The only change to the technology for the S8 appears to be cosmetic—this time, Samsung implemented a full color live preview mode with two circles on the screen to help users position their eyes. The ill-fated Note7 preview was in black and white. “Hopefully this will go much smoother,” Clifton says.
