Tech Talk


Scientists Figure Out Possible New Threat to Spacecraft

Given how hard it is to diagnose failures from thousands of kilometers away, perhaps it shouldn’t be much of a surprise that more than half of satellite electrical failures remain unexplained. According to scientists and engineers at Stanford University and Boston University, one culprit could be dust-size particles streaking through space at tens of kilometers per second. These micrometeoroids don’t pack enough punch to get through a spacecraft’s hull. But according to new simulations reported this week in the journal Physics of Plasmas, when these micrometeoroids hit, they vaporize into a plasma that generates a potentially crippling pulse of radio-frequency radiation.
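
For a rough sense of the energies involved, here is a back-of-the-envelope estimate in Python. The grain size, density, and speed below are illustrative assumptions of mine, not figures from the paper.

```python
# Rough, back-of-envelope estimate of a micrometeoroid's kinetic energy.
# The particle size, density, and speed are illustrative assumptions,
# not values taken from the Physics of Plasmas study.
import math

diameter_m = 10e-6       # a 10-micrometer, "dust-size" grain
density_kg_m3 = 3000.0   # typical rocky-grain density
speed_m_s = 50e3         # "tens of kilometers per second"

radius_m = diameter_m / 2
mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m**3
kinetic_energy_j = 0.5 * mass_kg * speed_m_s**2

print(f"mass ~ {mass_kg:.2e} kg")                    # ~1.6e-12 kg
print(f"kinetic energy ~ {kinetic_energy_j:.2e} J")  # ~2e-03 J, a couple of millijoules
# Far too little energy to breach a hull, but once the grain vaporizes and
# ionizes, that energy can drive a brief plasma discharge.
```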

Hyperledger Indy will incubate Sovrin, a permissioned open ledger built for identity management

Can a Bitcoin Blockchain–Inspired Ledger Give Individuals Control Over Their Online Identities?

The Sovrin Foundation, a non-profit organization building online identity management tools with blockchain-inspired technologies, announced today that it will be taken on for incubation by Hyperledger Indy, a project run by the Linux Foundation.

The problem of identity has attracted a whole flock of developers in the blockchain and distributed ledger space who see these technologies as a way to scoop up all the scraps of an individual’s online identity, consolidate them and put them under the individual’s control.

Today, for the most part, we lack that control. We have surrendered ourselves to the likes of Facebook, Google, Twitter, and Amazon, whose profiles on their customers are so extensive that they are now, themselves, used as standard identity verifiers across most Internet domains. Want to leave a comment? Just sign in with Facebook. Trying to get into your Medium account? Just log in with Twitter.

And if those companies suddenly disappear, so too does your online identity.

Meanwhile, asserting more important things about yourself online is just as difficult as ever. You can e-file your taxes, but first you’ll need that PIN from the IRS that you set up a bajillion years ago that somehow proves you are who you say you are.

It’s a terrible mess. And according to Phil Windley, the chair of the Sovrin Foundation, the best way to fix it is to use distributed ledger technology to make something that looks more like what we have offline. 

“In the physical world I go to my pharmacy and they ask for my driver’s license to prove I’m over 18 and I supply it to them. They don’t have to have a direct connection to the Department of Motor Vehicles. They don’t have to have any kind of API integration to make that work. Because I am the conveyer of this verifiable claim called a driver’s license. That hasn’t been possible on the Internet and Sovrin makes that possible,” says Windley.

In this alternate view, it is the individual who possesses all the pieces of their identity, which range from mundane testimonials, like your favorite movie, to critical information, like your age and date of birth.

In Sovrin, these facts about you (or pointers for where to find these facts) would all reside on a distributed public ledger that you alone have the authority to access and share. Other entities, however, could add weight and credibility to the pieces of your identity by signing off on your claims with a cryptographic key. For example, you might have an identity on the Sovrin network that specifies your driver’s license number, and that information might be signed by your state’s DMV.
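
To make the idea of a signed claim concrete, here is a minimal sketch using Ed25519 signatures from Python’s third-party cryptography package. The claim layout, the DID-style identifier, and the key handling are invented for illustration; Sovrin’s actual credential formats and signature schemes are more involved.

```python
# Minimal sketch of issuing and verifying a signed identity claim.
# Requires the third-party "cryptography" package (pip install cryptography).
# The claim format and identifiers below are illustrative only.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# An issuer (say, a DMV) holds a signing key pair.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

# The claim is just structured data about the identity holder.
claim = {"holder": "did:example:alice", "attribute": "over_18", "value": True}
claim_bytes = json.dumps(claim, sort_keys=True).encode()

# The issuer signs the claim; the holder stores claim + signature and can
# present both to any verifier, with no live connection back to the issuer.
signature = issuer_key.sign(claim_bytes)

# A verifier (say, a pharmacy) checks the signature against the issuer's
# public key, which it could look up on the ledger.
try:
    issuer_public.verify(signature, claim_bytes)
    print("claim verified")
except InvalidSignature:
    print("claim rejected")
```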

The technology has a slight whiff of blockchain, but doesn’t really have a blockchain. Rather, it is a ledger replicated over multiple nodes that coordinate to make updates and police the system; together, these nodes make up the Sovrin network. The nodes are invite-only, meaning that the ledger is public but permissioned. As a result, Sovrin functions without the participation of miners, which makes it less expensive and less energy hungry than your typical open blockchain.

Windley says that he envisions the first applications coming from the financial sector. Banks could participate as node operators to maintain the ledger and provide it as the repository for their customers’ identities. If given permission by the customer, multiple banks could access this information in a single place in order to comply with Know-Your-Customer (KYC) regulations. 

In joining Hyperledger Indy, Sovrin is donating all of its code and getting back developer power in return.

There are currently many other groups working in the blockchain and distributed ledger space to build self-sovereign identity systems. Bitnation began using the blockchain to issue its own nation state-independent version of a passport in 2014. That project now resides on the Ethereum network which also supports another identity management tool called uPort. And Civic is building out a similar project on Bitcoin.

Windley doesn’t necessarily see them as competition. “I believe that there won’t be a single identity solution; there’s going to be multiples,” he says. “We’re going to live in a world with multiple identity systems because they have different properties and [meet] different needs.”

A digital scene of a plaza with bubbles representing data shooting out from several black lamp posts toward people holding smartphones.

Intel, Nokia, Qualcomm Bet on MulteFire to Blend LTE and Wi‑Fi

A wireless industry consortium is developing a new technology called MulteFire that it says delivers the high performance of 4G LTE cellular networks while being as easy to deploy as Wi-Fi routers.

Rather than relying on the licensed spectrum purchased for today’s LTE service, MulteFire operates entirely in the unlicensed 5 gigahertz band. And to set it up, users would simply need to install MulteFire access points, similar to Wi-Fi access points, at any facility served by optical fiber or wireless backhaul.

Once installed, MulteFire would provide greater capacity, range, and coverage than Wi-Fi, because it’s based on advanced LTE standards. But by operating in unlicensed spectrum, MulteFire could conserve resources for companies struggling to meet customers’ data demands.

Depending on how MulteFire is used, it could let cellular companies offload traffic to unlicensed spectrum, or allow factory owners to set up private MulteFire networks to serve equipment, robots, and Internet-of-Things devices. The technology is being developed by the MulteFire Alliance, founded by Nokia, Qualcomm, Ericsson, and Intel.

Marcus Weldon, president of Nokia Bell Labs and chief technology officer of Nokia, laid out his vision for MulteFire during a meeting at Nokia Bell Labs in New Jersey last week. As Weldon sees it, managers of industrial facilities will be the primary customers for MulteFire and will want to use it to connect millions of devices for oil and gas drilling, power transmission, and manufacturing.

“No consumers are saying, ‘Damnit, give me MulteFire!’” he says. “Or at least, I haven’t found one yet. But some industries are.”

Will the reality of 5G live up to the hype?

5G Progress, Realities Set in at Brooklyn 5G Summit

5G technologies are early in their development, and the business cases for them are a bit fuzzy, but wireless researchers and executives still had plenty to celebrate this week at the annual Brooklyn 5G Summit. They’ve made steady progress on defining future 5G networks, and have sped up the schedule for the first phase of standards-based 5G deployments.

Now, the world is just three years away (or two, depending on who you ask) from its first 5G commercial service. Amid the jubilance, reality is also starting to set in.

While attendees can agree that 5G networks will incorporate many new technologies—including millimeter waves, massive MIMO, small cells, and beamforming—no one knows how all of it will work together, or what customers will do with the resulting flood of data. The video below provides a primer on these technologies, and a hint of what we can expect.


Four Ways to Tackle H-1B Visa Reform

Update 19 April 2017: Yesterday, U.S. President Donald Trump signed an executive order instructing government agencies to suggest reforms to the H-1B visa program. Analysts say that real reform will require Congressional action. In February, IEEE Spectrum interviewed experts about what Congress could do. The original article follows:

U.S. tech companies love the H-1B visa program. The temporary visa is meant to allow them to bring high-skill foreign workers to fill jobs for which there aren’t enough skilled American workers.

But the program isn’t working. Originally intended to bring the best global talent to fill U.S. labor shortages, it has become a pipeline for a few big companies to hire cheap labor.

Giants like Amazon, Apple, Google, Intel, and Microsoft were all among the top 20 H-1B employers in 2014, according to Ron Hira, a professor of political science at Howard University who has testified before Congress on high-skill immigration. The other fifteen—which include IBM as well as consulting firms such as Tata Consultancy Services, Wipro, and Infosys—used the visa program mainly for outsourcing jobs.

Typically, U.S. companies like Disney, FedEx, and Cisco will contract with consulting firms. American workers end up training their foreign counterparts, only to have the U.S. firm replace the American trainers with the H-1B-visa-holding trainees—who’ll work for below-market wages.

Problems with this setup abound. First, talk of a tech labor shortage in the U.S. might be overblown. Then there’s the issue of quality: More than half of the H-1Bs at the vast majority of the top H-1B employers have bachelor’s degrees, but not advanced degrees. Hira argues that in many cases, such as Disney and Northeast Utilities, the jettisoned American workers were obviously more skilled and knowledgeable than the people who filled those positions, given that they trained their H-1B replacements.

Plus, the H-1B is a guest-worker program in which the employer holds the visa and isn’t required to sponsor the worker for legal permanent residency in the United States. So workers who lose their jobs are legally bound to return to their countries of origin. This gives the employer tremendous leverage, and can lead to abuse.

“It’s a lose-lose right now for the country and H-1B workers,” says Vivek Wadhwa, distinguished fellow and professor at Carnegie Mellon University Engineering at Silicon Valley.

The Holoplot audio system, a large array of black speakers, looks like a large black rectangle with hundreds of depressions of different sizes within. It is displayed at CeBIT, an annual trade show in Hanover, Germany.

Berlin Startup Holoplot Tests Steerable Sound in German Train Stations

A Berlin startup named Holoplot has built a premium audio system that it says can send one song or announcement to one corner of a room, and an entirely different message or tune to another area of the same room—without any interference between the two.

Holoplot is testing its technology in major train stations throughout Germany, where it says the system can send up to 16 messages to separate gates at once, all at the same frequencies. It ran its first pilot at Frankfurt Hauptbahnhof, Germany’s largest train station, in December.
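
The article doesn’t spell out Holoplot’s signal processing, which relies on driving a large array of speakers so that their wavefronts reinforce in some places and cancel in others. The toy delay-and-sum sketch below illustrates the basic principle of steering sound from an array toward one listening spot; it is not Holoplot’s actual wave-field-synthesis algorithm.

```python
# Toy delay-and-sum illustration of steering sound from a speaker array
# toward one listening spot. A simplified sketch of the general beamforming
# idea, not Holoplot's actual processing.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(speaker_positions, target):
    """Per-speaker delays (s) so all wavefronts arrive at `target` together."""
    distances = np.linalg.norm(speaker_positions - target, axis=1)
    travel_times = distances / SPEED_OF_SOUND
    # Delay the closer speakers so they "wait" for the farther ones.
    return travel_times.max() - travel_times

# A 16-element horizontal line array with 10 cm spacing.
xs = np.arange(16) * 0.10
speakers = np.stack([xs, np.zeros(16), np.zeros(16)], axis=1)

# Aim the array at a listener 8 m away and 3 m off to one side.
listener = np.array([3.0, 8.0, 0.0])
delays = steering_delays(speakers, listener)
print(np.round(delays * 1e3, 3))  # per-speaker delays in milliseconds
```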


AI Learns Gender and Racial Biases From Language

Artificial intelligence does not automatically rise above human biases regarding gender and race. On the contrary, machine learning algorithms that represent the cutting edge of AI in many online services and apps may readily mimic the biases encoded in their training datasets. A new study has shown how AI learning from existing English language texts will exhibit the same human biases found in those texts.

The results have huge implications given machine learning AI's popularity among Silicon Valley tech giants and many companies worldwide. Psychologists previously showed how unconscious biases can emerge during word association experiments known as implicit association tests. In the new study, computer scientists replicated many of those biases while training an off-the-shelf machine learning AI on a "Common Crawl" body of text—2.2 million different words—collected from the Internet.
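
The measurement at the heart of the study resembles the implicit association test rebuilt on word embeddings: compare how close a target word sits to one attribute set versus another in vector space. Below is a simplified sketch of that kind of WEAT-style score using tiny hand-made vectors; the real study computed its full statistic over embeddings trained on the Common Crawl corpus.

```python
# Simplified sketch of a word-embedding association score (WEAT-style).
# The tiny hand-made 3-d vectors are stand-ins; the study used large
# embeddings trained on Common Crawl text.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Stand-in "embeddings"; in practice these would be hundreds of dimensions.
emb = {
    "programmer": np.array([0.9, 0.1, 0.2]),
    "nurse":      np.array([0.1, 0.9, 0.3]),
    "he":         np.array([1.0, 0.0, 0.1]),
    "she":        np.array([0.0, 1.0, 0.1]),
}

male_attrs = [emb["he"]]
female_attrs = [emb["she"]]
for word in ("programmer", "nurse"):
    score = association(emb[word], male_attrs, female_attrs)
    print(f"{word}: {score:+.3f}")  # positive = closer to the male attribute set
```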


We Know What You're Watching (Even If It's Encrypted)

I stand firm in the opinion that it’s my basic human right to binge-watch six hours of trashy detective shows on a Friday night with a silent phone in my lap and a glass of wine in my hand. I would also argue it’s my right to do so shamefully and in private, divulging the secret of my wasted weekends to no one but Netflix.

Netflix, it seems, would agree with me. The company has been protecting video streams with HTTPS encryption since the summer of 2016. But new research indicates that this strategy is not sufficient to keep third-party service providers and motivated attackers from getting a peek at what I’m watching.

Two recent papers, one from the United States Military Academy at West Point and one by a collection of authors at Tel Aviv University and Cornell Tech, lay out methods for identifying videos by performing straightforward traffic analysis on encrypted data streams. One approach opens the door for snooping by any party that has direct access to the network on which a user is watching videos, such as an ISP or a VPN provider. The other could be used by any attacker who is able to deliver malicious JavaScript code to the user’s browser. But both inspect the size of data bursts being transferred across the user’s network in order to fingerprint individual videos and compare them to a database of known, previously characterized content.
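
As a toy illustration of the general fingerprint-and-match idea (not the feature extraction or classifiers used in either paper), the sketch below reduces a sequence of observed burst sizes to a normalized vector and finds the closest entry in a database of pre-characterized titles.

```python
# Toy illustration of fingerprinting an encrypted video stream from the
# sizes of its data bursts and matching it against known titles. The burst
# values and title names are hypothetical; the real attacks use more
# sophisticated features and classifiers.
import numpy as np

def fingerprint(burst_sizes_bytes, num_bins=20):
    """Normalize a sequence of burst sizes into a fixed-length vector."""
    bursts = np.asarray(burst_sizes_bytes, dtype=float)
    # Resample to a fixed length, then scale out absolute throughput.
    idx = np.linspace(0, len(bursts) - 1, num_bins)
    resampled = np.interp(idx, np.arange(len(bursts)), bursts)
    return resampled / resampled.sum()

def best_match(observed_bursts, database):
    """Return the known title whose fingerprint is closest (L1 distance)."""
    obs = fingerprint(observed_bursts)
    return min(database, key=lambda title: np.abs(obs - database[title]).sum())

# Hypothetical pre-characterized titles (burst sizes in bytes per segment).
database = {
    "detective_show_s01e01": fingerprint([1.2e6, 0.8e6, 2.1e6, 1.9e6, 0.7e6]),
    "nature_documentary":    fingerprint([2.5e6, 2.4e6, 2.6e6, 2.5e6, 2.4e6]),
}

# Bursts observed on the wire (sizes only; the payload stays encrypted).
observed = [1.1e6, 0.9e6, 2.0e6, 2.0e6, 0.8e6]
print(best_match(observed, database))  # -> "detective_show_s01e01"
```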


Open-Source Clues to Google's Mysterious Fuchsia OS

This is a guest post. The views expressed in this article are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

It’s not often that one of the world’s leading software companies decides to develop a major new operating system. Yet in February 2016, Google began publishing code for a mysterious new platform, known as Fuchsia.

Google has officially said very little about Fuchsia, and the company did not respond to my request for comment. But since it’s being developed as an open-source project, its source code is entirely in the open for anyone to view. Indeed, anyone can download Fuchsia right now and try to run it.

Many people wrote about Fuchsia when it was first spotted last year. They raised the obvious question of whether it meant that Google would be moving away from Linux within Android.

Since then, I have been periodically looking at the source code for signs of the company’s plans for its new operating system. According to Google, Fuchsia is designed to scale from small Internet of Things devices to modern smartphones and PCs.

Google, of course, already has two consumer operating systems. Within the tech industry, there is a well-known conflict between Google’s Android and Chrome OS. Android, which is primarily used in smartphones and tablets, is the most popular operating system in the world by device shipments and Internet usage and has a thriving native app ecosystem. Meanwhile, Chrome OS, which was designed for PCs, is much more secure than Android and provides a simplified computing environment that’s well suited for the education market.

While Google executives have denied that the two platforms would ever merge, there has been much internal debate over the years about how best to unify Google’s software efforts. Meanwhile, many consumers want Android as a PC platform, due to its greater capabilities and ample software offerings compared with those of Chrome OS.

In my eyes, Fuchsia is Google’s attempt to build a new operating system that advances the state of the art for consumer platforms and corrects many of the long-standing shortcomings of Android. The engineering goals of the project appear to include a more secure design, better performance, enabling timely updates, and a friendlier and more flexible developer API (application programming interface).

Google's Tensor Processing Unit board

Google Details Tensor Chip Powers

In January’s special Top Tech 2017 issue, I wrote about various efforts to produce custom hardware tailored for performing deep-learning calculations. Prime among those is Google’s Tensor Processing Unit, or TPU, which Google has deployed in its data centers since early in 2015.

In that article, I speculated that the TPU was likely designed for performing what are called “inference” calculations. That is, it’s designed to quickly and efficiently calculate whatever it is that the neural network it’s running was created to do. But that neural network would also have to be “trained,” meaning that its many parameters would be tuned to carry out the desired task. Training a neural network normally takes a different set of computational skills: In particular, training often requires the use of higher-precision arithmetic than does inference.

Yesterday, Google released a fairly detailed description of the TPU and its performance relative to CPUs and GPUs. I was happy to see that the surmise I had made in January was correct: The TPU is built for doing inference, having hardware that operates on 8-bit integers rather than higher-precision floating-point numbers.
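
As a simplified sketch of why 8-bit integers can be enough for inference, the snippet below quantizes a trained floating-point weight matrix and an input vector to int8, multiplies them in integer arithmetic, and rescales the result. This is a generic post-training quantization illustration, not the TPU’s actual scheme.

```python
# Simplified sketch of 8-bit-integer inference: quantize float weights and
# activations to int8, multiply in integer arithmetic, then rescale. A
# generic illustration, not the TPU's actual quantization scheme.
import numpy as np

def quantize(x):
    """Map a float array onto int8 with a single global scale factor."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 8)).astype(np.float32)   # trained in float
activations = rng.standard_normal((8,)).astype(np.float32)

w_q, w_scale = quantize(weights)
a_q, a_scale = quantize(activations)

# Accumulate in int32 (as 8-bit hardware does), then rescale back to float.
int_result = w_q.astype(np.int32) @ a_q.astype(np.int32)
approx = int_result * (w_scale * a_scale)
exact = weights @ activations

print(np.max(np.abs(approx - exact)))  # small quantization error vs. float
```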

Yesterday afternoon, David Patterson, an emeritus professor of computer science at the University of California, Berkeley, and one of the co-authors of the report, presented these findings at a regional seminar of the National Academy of Engineering, held at the Computer History Museum in Mountain View, Calif. The abstract for his talk summed up the main point nicely. It reads in part: “The TPU is an order of magnitude faster than contemporary CPUs and GPUs and its relative performance per watt is even larger.”

Google’s blog post about the release of the report shows how much of a difference in relative performance there can be, particularly in regard to energy efficiency. For example, compared with a contemporary GPU, the TPU is said to offer 83 times the performance per watt. That might be something of an exaggeration, because the report itself claims only a range of between 41 and 83 times. And that’s for a quantity the authors call incremental performance. The range of improvement for total performance is less: from 14 to 16 times better for the TPU compared with a GPU.

The benchmark tests used to reach these conclusions are based on a half dozen of the actual kinds of neural-network programs that people are running at Google data centers. So it’s unlikely that anyone would critique these results on the basis of the tests not reflecting real-world circumstances. But it struck me that a different critique might well be in order.

The problem is this: These researchers are comparing their 8-bit TPU with higher-precision GPUs and CPUs, which are just not well suited to inference calculations. The GPU exemplar Google used in its report is Nvidia’s K80 board, which performs both single-precision (32-bit) and double-precision (64-bit) calculations. While they’re often important for training neural networks, such levels of precision aren’t typically needed for inference.

In my January story, I noted that Nvidia’s newer Pascal family of GPUs can perform “half-precision” (16-bit) operations and speculated that the company may soon produce units fully capable of 8-bit operations, in which case they might be much more efficient when carrying out inference calculations for neural-network programs.

The report’s authors anticipated such a criticism in the final section of their paper; there they considered the assertion (which they label a fallacy) that “CPU and GPU results would be comparable to the TPU if we used them more efficiently or compared to newer versions.” In discussing this point, they say they had tested only one CPU that could support 8-bit calculations, and the TPU was 3.5 times better. But they don’t really address the question of how GPUs tailored for 8-bit calculations would fare—an important question if such GPUs soon become widely available.

Should that come to pass, I hope that these Googlers will re-run their benchmarks and let us know how TPUs and 8-bit-capable GPUs compare.


