Tech Talk


What Is a Distributed Denial-of-Service Attack and How Did It Break Twitter?

On Friday, multiple distributed denial-of-service, or DDoS, attacks hit the Internet services company Dyn. The cyberattack prevented many users on the U.S. East Coast from navigating to the most popular websites of Dyn customers, which include Twitter, Reddit, and Netflix.

Dyn detected the first attack at 7:10 a.m. Eastern time on Friday, and restored normal service about two hours later. Then at 11:52 a.m. ET, Dyn began investigating a second attack. By 2:00 p.m., the company said it was still working to resolve “several attacks” at once.

The interruptions inconvenienced many Internet users and disrupted the daily operations of Internet giants in entertainment, e-commerce, and social media. There still aren’t many details available about Dyn’s predicament, and the company did not immediately respond to an interview request. But we do know from Dyn’s posts that the first two assaults on its network were DDoS attacks. Its customers’ outages show once again that major Internet companies remain vulnerable to this common hacker scheme—one that has plagued networks since 2000.

A denial-of-service attack aims to slow or stop users from accessing content or services by impeding the ability of a network or server to respond to their requests. The word “distributed” means that hackers executed the Dyn attacks by infecting and controlling a large network of computers called a botnet, rather than running them from a single machine that they own.

Hackers can assemble a botnet by spreading malware, which is often done by prompting unsuspecting users to click a link or download a file. That malware can be programmed to periodically check with a host computer owned by hackers for further instructions. To launch an attack, the hackers, or bot-herders, send a message through this “command and control” channel, prompting infected computers to send many requests for a particular website, server, or service all at once. Some of the biggest botnets in history have boasted 2 million computers, capable of sending up to 74 billion spam emails a day.

The sudden onslaught of requests quickly gobbles up all the network's bandwidth, disk space, or processing power. That means real users can’t get their requests through because the system is too busy trying to respond to all the bots. In the worst cases, a DDoS can crash a system, taking it completely offline.
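To get a feel for the arithmetic, here is a toy simulation of a saturated server; the capacity and traffic figures are invented for illustration, not Dyn’s real numbers. Once bot requests dwarf capacity, a legitimate request’s odds of being answered collapse.

```python
# Toy model of request flooding (all numbers invented for illustration):
# a server answers a fixed number of requests per second; any excess is
# dropped, bots and legitimate users alike.
CAPACITY = 1_000    # requests the server can answer per second
LEGIT = 500         # legitimate requests arriving per second
BOTS = 50_000       # bot requests arriving per second during an attack

def served_fraction(legit: int, bots: int, capacity: int) -> float:
    """Chance a legitimate request is answered if the server picks
    requests at random from the combined incoming traffic."""
    total = legit + bots
    return 1.0 if total <= capacity else capacity / total

print(f"normal day:   {served_fraction(LEGIT, 0, CAPACITY):.1%}")    # 100.0%
print(f"under attack: {served_fraction(LEGIT, BOTS, CAPACITY):.1%}") # ~2.0%
```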

Both of Friday’s attacks targeted Dyn’s Managed Domain Name System. Through this system, Dyn provides a lookup service that translates the Web addresses users type into a browser into the numeric IP addresses machines use to find one another. Users who type in a Web address are first sent to a Dyn server that looks up the IP address of a server hosting the content the user is trying to reach. The Dyn server passes this information on to the user’s browser.
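That translation is the ordinary DNS lookup that happens before any connection is made. Here is a minimal sketch using Python’s standard library; it queries whatever resolver your system is configured with, which may or may not be a managed provider like Dyn.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the IP addresses a DNS lookup yields for a hostname."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})  # deduplicate addresses

# A browser performs this step, via its resolver, before it can fetch a page.
print(resolve("twitter.com"))  # answers vary by resolver and over time
```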

To disrupt this process, says Sanjay Goel, a professor of information technology at New York’s University at Albany, SUNY, the bot-herders probably sent huge volumes of translation requests directly to Dyn’s servers by looking up the servers’ IP addresses. They could also have simply directed the bots to request its customers’ websites to cause similar issues. Attacking a DNS or content delivery provider such as Dyn or Akamai in this manner lets hackers interrupt many more companies than they could by directly attacking corporate servers, because several companies share Dyn’s network.

In Dyn’s case, the company has built its Managed DNS on an architecture called Anycast, in which any particular IP address for a server in its system can actually be routed through servers in more than a dozen data centers. So if the IP address of one server is targeted, ten others may still be able to handle the normal traffic while that one is besieged with bot requests. Art Manion, a technical manager at Carnegie Mellon University’s Software Engineering Institute, says this system should make Dyn more resilient to DDoS attacks, and the company has touted it as highly secure.
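A simplified sketch of the failover idea follows; real anycast steering happens in Internet routing, not application code, and the sites and latencies below are made up.

```python
# Anycast in miniature: one advertised IP address is served from many
# data centers, and traffic flows to the nearest site that is still up.
SITES = {
    "Newark":  {"latency_ms": 5,  "healthy": False},  # besieged by bots
    "Chicago": {"latency_ms": 20, "healthy": True},
    "Dallas":  {"latency_ms": 35, "healthy": True},
}

def route(sites: dict) -> str:
    healthy = {name: s for name, s in sites.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("every site is down; the service is unreachable")
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(route(SITES))  # 'Chicago': users are steered away from the attacked site
```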

Dyn said on Friday in an update to its website that the first attack mainly impacted services in the “US East.” The Anycast network includes data centers in Washington, D.C., Miami, and Newark, N.J., as well as in Dallas and Chicago—though it’s not clear whether these locations were specifically targeted.    

Even in the affected region, only certain users experienced issues. One reason could be that other users’ browsers had previously used Dyn to locate the specific server they needed to reach a given site, such as Twitter. Because that information is now cached in their browsers, those users can bypass Dyn to fetch the desired content, so long as the servers that store Twitter’s website are still functioning.

Another reason for the inconsistent impact could be a common mechanism for handling DDoS attacks: simply dropping every fifth request from the queue to relieve the network of traffic. The result: Some requests from legitimate users wind up being dropped along with those from bots.
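In code, that blunt tactic is a one-liner. This is a sketch of the mechanism the article describes; production mitigations try to be more selective about which requests to sacrifice.

```python
def shed_load(queue: list, drop_every: int = 5) -> list:
    """Drop every Nth request, with no regard for who sent it."""
    return [req for i, req in enumerate(queue, start=1) if i % drop_every]

requests = [f"req-{i}" for i in range(1, 11)]
print(shed_load(requests))  # req-5 and req-10 vanish, bot or not
```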

Once an attack begins, companies can bring backup servers online to manage the blizzard of requests. Victims can also work with Internet service providers to block the IP addresses of the devices generating the most traffic, on the assumption that those devices are part of the botnet. "You start blocking the different addresses where it's coming from, so depending on how massive the botnet is, it may take some time," says SUNY Albany's Goel.

Increasingly, bot-herders have recruited Internet of Things devices, which often have poor security, to their ranks. This allows them to launch ever more powerful attacks because of the sheer numbers of such devices. Two of the largest DDoS attacks on record have occurred within the past two months: first, a 620-gigabit-per-second attack targeting independent security reporter Brian Krebs of KrebsOnSecurity; and then an 1,100-Gbps siege on the French hosting company OVH.

Even with state-of-the-art protections and mitigation strategies, companies are limited by the amount of bandwidth they have to handle such sudden onslaughts. “Ultimately, Akamai has total X amount of bandwidth and if the attacker is sending X-plus-10 traffic, the attacker still wins,” says Carnegie Mellon's Manion. “It mathematically favors whoever has more bandwidth or more traffic, and the attackers today can have more traffic.”

Dyn’s global network manages over 500 billion queries a month, so the culprits would have had to send many millions or even billions of requests simultaneously in order to stall it. Manion says that to prevent DDoS attacks, companies must address root causes such as poor IoT security, rather than scrambling to stop them once they’ve begun.

Stanford University Ising Machine

New Computer Combines Electronic Circuits with Light Pulses

Modern computers still lack the capability to find the best solution for the classic “traveling salesman” problem. Even finding approximate solutions is challenging. But finding the shortest traveling salesman route among many different cities is more than just an academic exercise. This class of problems lies at the heart of many real-world business challenges such as scheduling delivery truck routes or discovering new pharmaceutical drugs. 
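A quick sketch shows why: the number of possible tours grows factorially with the number of cities, so exhaustive search stops being feasible after a couple dozen stops. The coordinates below are invented for illustration.

```python
import itertools
import math

# Brute-force traveling salesman on made-up coordinates. Five cities
# means 4!/2 = 12 distinct round trips; at 20 cities there are already
# about 6 * 10^16, far beyond exhaustive search.
CITIES = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6), "E": (3, 3)}

def tour_length(order: tuple) -> float:
    legs = zip(order, order[1:] + order[:1])  # close the loop
    return sum(math.dist(CITIES[a], CITIES[b]) for a, b in legs)

best = min(itertools.permutations(CITIES), key=tour_length)
print(best, round(tour_length(best), 2))
```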

Stanford computer science professor calls for greater ethnic and gender diversity in artificial intelligence

Computer Vision Leader Fei-Fei Li on Why AI Needs Diversity

As Fei-Fei Li sees it, this is a historical moment for civilization fueled by an artificial intelligence revolution. “I call everything leading up to the second decade of the twenty-first century AI in-vitro,” the Stanford computer science professor told the audience at last week’s White House Frontiers Conference. Heretofore, the technology was being fundamentally understood, formulated, and tested in labs. “At this point we’re going AI in-vivo,” she said. “AI is going to be deployed in society on every aspect of industrial and personal needs.”

It’s already around us in the form of Google searches, voice recognition, and autonomous vehicles. That makes this a critical time to talk about diversity.

The lack of diversity in AI is representative of the state of computer science and the tech industry in general. In the United States, for example, women and ethnic minorities such as African-Americans and Latinos are especially underrepresented. Just 18 percent of computer science grads today are women, down from a peak of 37 percent in 1984, according to the American Association of University Women. The problem is worse in AI. At the Recode conference this summer, Margaret Mitchell, the only female researcher in Microsoft’s cognition group, called it “a sea of dudes.”

But the need for diversity in AI is more than just a moral issue. There are three reasons why we should think deeply about increasing diversity in AI, Stanford’s Li says.

The first is simply practical economics. The current technical labor force is not large enough to handle the work that needs to be done in the fields of computing and AI. There aren’t many specific numbers on diversity in AI, but anecdotal evidence suggests they would be dismal. Take, for instance, Stanford’s computer science department. AI has the smallest percentage of women undergrads, at least as compared to tracks like graphics or human-computer interaction, Li points out. Worldwide, GDP is expected to rise as automation and machine learning spread. So it’s really important that more people study AI, and that they come from diverse backgrounds. “No matter what data we look at today, whether it’s from universities or companies, we lack diversity,” she says.

Another reason diversity should be emphasized is its impact on innovation and creativity. Research repeatedly shows that when people work in diverse groups, they come up with more ingenious solutions. AI will impact many of our most critical problems, from urban sustainability and energy to healthcare and the needs of aging populations. “We need a diverse group of people to think about this,” she says.

Last, but certainly not the least, is justice and fairness. To teach computers how to identify images or recognize voices, you need massive data sets. Those data sets are made by computer scientists. And if you only have seas of (mostly white) dudes making those data sets, biases and unfairness inadvertently creep in. “Just type the word grandma in your favorite search engine and you’ll see the bias in pictures returned,” Li says. “You’ll see the race bias. If we’re not aware of the bias of data, we’re going to start creating really problematic issues.”

What can we do about this? Bring a humanistic mission statement to the field of AI, Li says. “AI is fundamentally an applied technology that’s going to serve our society,” she says. “Humanistic AI not only raises the awareness of the importance of the technology, it’s a really important way to attract diverse students, technologists and innovators to participate.”

How is artificial intelligence going to affect the world of work?

What AI Experts Say Smart Machines Will Do to Human Jobs

For centuries, technological innovation has created jobs and improved standards of living. Artificial intelligence might change that. For starters, AI-driven automation is not going to treat workers equally. A recent White House report called Preparing for the Future of Artificial Intelligence acknowledges that AI could make low- and medium-skill jobs unnecessary, and widen the wage gap between lower- and higher-educated workers.

The good news is that policymakers and technology experts are thinking about this, and instituting plans aimed at avoiding the “Robots are going to take all of our jobs!” doomsday scenario. Academics and industry practitioners discussed AI’s job impact at the White House Frontiers Conference last week. And they were confident and optimistic about our ability to adapt.

“The best solutions are always going to come from minds and machines working together,” said Andrew McAfee, co-director of the MIT Initiative on the Digital Economy, and author of “The Second Machine Age.” But that balance of minds and machines won’t always be the same. In five years, that balance will be totally different in, say, customer service and driving.

The good news is that the U.S. economy is really good at creating new jobs once old ones get automated. As an example, McAfee pointed out that the year of peak manufacturing employment in the United States was 1979. Every year since, the number of people working in the industry has gone down even though output goes up. “Those people didn’t become unemployed and their families didn’t starve,” he said.

Panel on Best Practices in AI

Facebook, Microsoft, and IBM Leaders on Challenges for AI and Their AI Partnership

Late last month, Amazon, Facebook, Google, IBM, and Microsoft announced that they will create a non-profit organization called Partnership on Artificial Intelligence. At the White House Frontiers Conference held at Carnegie Mellon University today, thought leaders from these companies explained why AI has finally arrived and what challenges lie ahead. (Also read the White House’s report on the future of AI released yesterday.)

While AI research has been going on for more than 60 years, the technology is now at an inflection point, the panelists agreed. That has happened because of three things: faster, more powerful computers; critical computer science advances, mainly statistical machine learning and deep learning techniques; and the massive information available due to sensors and the Internet of Things.

Google's DeepMind lab gave deep learning software external memory to perform tasks such as navigating the London Underground map

Google's DeepMind Gives AI a Memory Boost That Lets It Navigate London's Underground

Google’s DeepMind artificial intelligence lab does more than just develop computer programs capable of beating the world’s best human players in the ancient game of Go. The DeepMind unit has also been working on the next generation of deep learning software that combines the ability to recognize data patterns with the memory required to decipher more complex relationships within the data.

The announcement that the 2016 Nobel Prize in Physics will go to Michael Kosterlitz, David J. Thouless, and Duncan Haldane.

Physics Nobel Work Is Leading to Improved Superconductors

This year’s Nobel Prize in Physics has been awarded to three physicists, “for theoretical discoveries of topological phase transitions and topological phases of matter.” Two of the scientists uncovered why the spins of atoms inside particular kinds of magnets form messy patterns at low temperatures. This theoretical work, performed in the 1970s, is still leading engineers to develop better and more efficient superconductors.

Every atom in a magnet acts like a mini bar magnet: its spin—a quantum mechanical property—points in a certain direction. If every spin in a lump of material points in the same direction, it’s like you have one big bar magnet.

Michael Kosterlitz, now at Brown University, and David J. Thouless, now at the University of Washington, modeled 2-D layers of ferromagnets—the kind of magnets that stick to the fridge—at low temperature. Their thought experiments indicated that the atomic spins were not fully aligning over a long distance. In other words, the spins did not come together to form one big bar magnet.
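The standard way to write down such a layer is the two-dimensional XY model, a textbook formulation rather than a quote from the laureates' papers: every lattice site i carries a spin angle, and neighboring spins prefer to align.

```latex
% Energy of the 2-D XY model; the sum runs over nearest-neighbor pairs
% of lattice sites, and J > 0 sets the strength of the alignment preference.
E = -J \sum_{\langle i,j \rangle} \cos\left(\theta_i - \theta_j\right)
```

Low energy favors aligned neighbors, yet in two dimensions thermal fluctuations keep that local preference from propagating into a single global direction.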

They used the concept of vortices—pockets of atoms inside magnets whose spins are oriented in a way that makes the pocket resemble the eye of a hurricane—to explain the effect. These vortices change the spins of nearby atoms.

The Nobel Prize winners “were really the first to use vortices to explain something that’s very profound in condensed matter physics,” says Michael Lawler, a theoretical physicist at Binghamton University in New York who studies magnetism and superconductivity.

At a press conference Tuesday, Kosterlitz said of his Nobel work: “There aren’t real practical applications and it’s not going to lead to any fancy new devices” because most devices are not two-dimensional.

Yet Lawler says that after the discovery, physicists started looking at other special materials where organization becomes disrupted. In particular, they looked at superconductors—materials that don’t resist the flow of electricity and allow large currents to pass on a relatively small wire.

Promising high-temperature superconductors are made of layers of 2-D material, he says. Inside superconductors, vortices take the form of whirlpools of electrons and have a disorder-inducing effect. 

Understanding the vortices mechanism is useful, Lawler says, in part because it helps researchers figure out how vortices introduce resistance in a superconductor.

Removing the vortices allows engineers to optimize superconductors’ performance, he says, so cables could someday deliver more power to more people. As an example, research in 2008 revealed that tightly coupling the layers of high-temperature superconducting material generates 3-D vortices, which don’t move around as much as 2-D vortices. The result: They don’t introduce as much resistance.

Besides Kosterlitz and Thouless, who also studied conductance in electrically conducting layers, Duncan Haldane was recognized for his studies of small chains of magnets. The prize was awarded to the researchers for their use of topology: mathematics that describes global relationships that stay the same when local relationships between elements change.

Those nice, tidy algorithms and predictive models might just be Weapons of Math Destruction.

Is Your Big Data Project a “Weapon of Math Destruction”?

For those of us who make a living solving problems, the current deluge of big data might seem like a wonderland. Data scientists and programmers can now draw on reams of human data—and apply them—in ways that would have been unthinkable only a decade ago.

But amid all the excitement, we’re beginning to see hints that our nice, tidy algorithms and predictive models might be prone to the same shortcomings that the humans who create them are. Take, for example, the revelation that Google disproportionately served ads for high-paying jobs to men rather than women. And there’s the troubling recent discovery that a criminal risk assessment score disproportionately flagged many African Americans as higher risk, sometimes resulting in longer prison sentences.

Mathematician and data scientist Cathy O’Neil has a name for these wide-reaching and discriminatory models: Weapons of Math Destruction. In her new book by the same name, she details the ways that algorithms often perpetuate or even worsen inequality and injustice.

We spoke to O’Neil last week during a Facebook Live session to find out how programmers and data scientists can ensure that their models do more good than harm.

Here are a few key takeaways:


1. Recognize the Signs of a “WMD”


They’re Important

A signature of a Weapon of Math Destruction is that it’s used to determine some critical element in the lives of many people. We’re already using algorithms to sort resumes for job openings, automatically schedule shifts for service industry workers, decide the price of insurance or interest rates on a loan, or even to help determine how long a person will spend in jail when convicted of a crime. Because these algorithms affect crucial outcomes for millions of people, they have the potential to do widespread damage.


They’re Secret or Unaccountable

The people most affected by WMDs often don’t understand the rubric by which they’re being scored, or even that they’re being scored in the first place. The methodology behind them is often a “trade secret,” protecting it from public scrutiny. While many companies argue that this keeps people from learning the rules and gaming the system, the lack of transparency also means there’s no way to check whether the score is actually fair. Machine learning algorithms take this one step further; while they’re powerful tools for finding correlations, they’re also often black boxes, even to the people who create them.

They’re Destructive

Weapons of Math Destruction have a way of creating their own reality and then using that reality to justify their model, says O’Neil. An algorithm that, say, targets financially vulnerable people for predatory loans creates a feedback loop, making it even harder for them to get out of debt. Similarly, a model that labels a first-time drug offender as higher-risk because he comes from a high-crime neighborhood potentially makes that problem even worse. If his high risk score results in a longer jail sentence, he’ll have fewer connections to his community and fewer job prospects once he’s released. His score becomes a self-fulfilling prophecy, actually putting him at a greater risk of reoffending.


2. Realize There Is No Such Thing as an “Objective Algorithm”

One of the things that makes big data so attractive is the assumption that it’s eliminating human subjectivity and bias. After all, you’re basing everything on hard numbers from the real world, right? Wrong. Predictive models and algorithms, says O’Neil, are really just “opinions embedded in math.” Algorithms are written by human beings with an agenda. The very act of defining what a successful algorithm looks like is a value judgment, and what counts as success for the builders of the algorithm (frequently profit, savings, or efficiency) is not always good for society at large. Because of this, O’Neil says, it’s important for data scientists to look at the bigger picture. Who are the winners in my algorithm—and even more importantly—what happens to the losers?


3. Pay Attention to the Data You’re Using

There’s another reason that algorithms aren’t as trustworthy as we might think: The data they draw on often comes from a world that’s deeply prejudiced and unequal. Crime statistics might seem objective—that is, until you realize that, for example, the mechanisms of the U.S. criminal justice system have been applied unfairly to target minorities throughout its entire history. That bias shows up in crime data. Researchers know that black and white people use marijuana at almost identical rates, but black teenagers are much more likely to be arrested for marijuana possession. The disparity in the numbers has much more to do with systemic racial profiling and a ramped up police presence in historically black neighborhoods than it does with actual levels of criminality.

We’ve made the decision as a society to stamp out discrimination based on race, gender, sexual orientation, or disability status—and fortunately, most data scientists know to be very careful when using these attributes to categorize people or model behavior. But data from the real world is often fraught with less-obvious proxy variables that are essentially stand-ins for those characteristics. Zip codes, for example, are an easy proxy for race, thanks to decades of the discriminatory housing practice called redlining.
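Here is a deliberately simplified sketch of how a proxy smuggles bias back in; the data is synthetic and invented for illustration. Strip the protected attribute out of the training data, keep zip code, and a model fit to biased historical outcomes reproduces the bias anyway.

```python
from collections import defaultdict

# Synthetic example: past lending decisions, with race removed but zip
# code retained. Because zip code correlates with race, a rule learned
# from these outcomes still encodes the original discrimination.
HISTORY = [
    ("10001", True), ("10001", True), ("10001", True),
    ("60629", False), ("60629", False), ("60629", True),
]

approvals = defaultdict(list)
for zip_code, approved in HISTORY:
    approvals[zip_code].append(approved)

def model(zip_code: str) -> bool:
    past = approvals[zip_code]
    return sum(past) / len(past) >= 0.5  # approve if most past cases were

print(model("10001"), model("60629"))  # True False: the proxy did the work
```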


4. Get Honest About What You’re Really Modeling

Human behavior is messy, which often means that direct measurements of the attributes we’re trying to model (like criminality, trustworthiness, or fitness for a job) don’t actually exist. Because of this, data scientists often rely on other variables they believe might correlate with what they’re trying to measure.

Car insurance companies, for example, use credit scores as a way to determine how reliable a driver is. At first glance it sounds reasonable to assume that a person who regularly pays her bills on time might be more conscientious or responsible. But strangely, Consumer Reports recently discovered that people with low credit scores and clean driving records were being charged much more for car insurance than people with high credit scores and DUIs on their driving records.

This, of course, is nonsense. Having a previous DUI is a much better indicator of a driver’s likelihood of getting into an accident. But O’Neil asserts that there might be a hidden reason the insurance companies continue to incorporate credit score into their models: it’s a direct measurement of financial vulnerability. Drivers with low credit scores don’t have as much leverage to shop around for lower rates, and a person who’s desperate for insurance is often willing to pay much more to get it.


5. Examine and Systematically Test Your Assumptions

Even well-intentioned algorithms can have flawed assumptions built in. For example, the recidivism risk score mentioned earlier is an attempt to make communities safer by locking up potentially violent repeat offenders and releasing those who are deemed a lower risk. Other intended benefits would be reducing the prison population and making the justice system more fair. But once we lock people away, says O’Neil, we treat prisons as a black box and stop asking questions.

Online giants like Amazon take the opposite approach; learning and experimentation are built into their business model. Amazon has a dedicated data laboratory where researchers constantly reexamine every aspect of the consumer experience, finding places along the pipeline where customers get confused or frustrated, or can’t find what they need. This feedback allows Amazon to continuously learn and tweak its online environment to maximize profit.

If we truly wanted to optimize our criminal justice system for community safety, says O’Neil, we’d continuously be running controlled experiments: Does putting someone behind bars with other criminals make them more or less likely to commit a crime upon release? How beneficial are general-equivalency (alternative high school) diploma programs? What is the effect of solitary confinement? Of sexual abuse? How much does it cost to treat someone for a mental disorder, versus repeatedly locking him away?

6. Take the Modelers’ Hippocratic Oath

Eventually we’ll need laws and industry standards that can keep pace with this technology and require a level of transparency from companies about how they’re using data. It might even require mandatory fairness audits of important algorithms. But in the meantime, a disproportionate amount of the responsibility falls to programmers. Awareness of the issue is a crucial first step. A good way to start is by taking this pledge, originally written by Emanuel Derman and Paul Wilmott in the wake of the 2008 financial crisis:


∼ I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

∼ Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

∼ I will never sacrifice reality for elegance without explaining why I have done so.

∼ Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

∼ I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

A researcher holds their thumb to the fingerprint sensor of a smartphone and touches a door handle at the same time.

Sending Passwords Through Your Body Could Be More Secure Than Transmitting Them Over The Air

Another day, another cybersecurity threat to worry about. Earlier this week, Johnson and Johnson told patients that it had discovered a security flaw in its insulin pumps, which left the pumps vulnerable to hacking—though the company said the risk of such a hack actually occurring is “extremely low.”

Meanwhile, a group of researchers at the University of Washington in Seattle is offering an alternative to wireless data transmission that could make medical devices and wearables more secure: transmitting the data through our bodies rather than broadcasting them over the air. Their premise is that it’s much harder to surveil a human body without someone noticing than it is to surreptitiously pluck a password from wireless signals in the air.

In tests with 10 subjects, the group showed that it’s possible to transmit passwords at speeds of up to 50 bits per second (bps) through the human body, using off-the-shelf products such as iPhone fingerprint sensors and touchpads. For comparison, a standard Internet package in the U.S. offers download speeds of 15 megabits per second, or 15 million bits per second.

“You can hold a phone in your hand and you can have a receiver on your leg, and you can actually receive signals very strongly,” says Shyamnath Gollakota, a wireless researcher at the University of Washington and collaborator on the project.

The experiments were led by graduate students Mehrdad Hessar and Vikram Iyer with the guidance of Gollakota. The group recently presented its work at the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing in Germany.

If the technique were ever to catch on, it would be limited to applications such as wearables, medical implants, and digital door locks because it requires users to simultaneously touch both the device that is sending the password and the one that is receiving it.

And the low bit rate means it would work best for transferring short strings of numbers, such as a passcode, rather than full sentences or high-definition films. As an example, the group says sending a four-digit numerical code to a digital door lock would require fewer than 16 bits, which could be transmitted through the body in less than a second. A 256-bit serial number could be sent to a medical device in under 15 seconds.
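The arithmetic checks out; here is a quick back-of-the-envelope using the article's own figures.

```python
import math

def bits_needed(distinct_codes: int) -> int:
    """Smallest number of bits that can represent this many codes."""
    return math.ceil(math.log2(distinct_codes))

pin_bits = bits_needed(10 ** 4)       # a four-digit decimal PIN
print(pin_bits)                       # 14 bits -- indeed fewer than 16
print(pin_bits / 50, "seconds")       # 0.28 s to send at 50 bps
print(256 / 25, "seconds")            # 10.24 s for a 256-bit serial at 25 bps
```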

Jeffrey Walling, an assistant professor at the University of Utah who has studied capacitive touch, says even this method of on-body password transferral wouldn’t be hackproof. “Certainly, any time you’re transmitting any type of signal, you can't make it 100 percent secure,” he says. But it could be an improvement over the wireless channels used today. 

In the past, other researchers have successfully demonstrated on-body communications but those projects often required users to add custom hardware onto their devices in order to pull it off. To see whether it was possible to do this with existing technology, the University of Washington group selected several commercial devices to test: an iPhone 5s; an iPhone 6s; a Lenovo touchpad; an Adafruit touchpad; and a Verifi P5100 fingerprint scanner.

The touchpad or fingerprint sensor on each of these devices uses a concept called capacitive coupling: it connects to a 2-D grid of electrodes that measure capacitance, or the ability to store energy as an electric charge. When the device sends a voltage signal through a row or column, it creates an electric field at each intersection. When a finger touches the screen, it disturbs the electric field and thereby changes the capacitance at that point. The device can use this change to detect the presence of a finger as well as characterize the peculiar patterns of swirls and ridges in a fingerprint.
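Here is a schematic of the sensing principle; this is a cartoon of capacitive scanning, not any vendor's firmware. Sweep the grid and report the intersection whose capacitance deviates most from its untouched baseline.

```python
# Cartoon of capacitive touch detection on a row/column electrode grid.
BASELINE = 100.0   # arbitrary units measured when nothing touches the grid
THRESHOLD = 5.0    # minimum deviation that counts as a touch

def find_touch(readings: list):
    """readings[row][col] holds the measured capacitance at that intersection."""
    best_delta, touch = THRESHOLD, None
    for r, row in enumerate(readings):
        for c, value in enumerate(row):
            delta = abs(value - BASELINE)
            if delta > best_delta:
                best_delta, touch = delta, (r, c)
    return touch  # None if no intersection deviated enough

grid = [[100.0, 100.2,  99.9],
        [100.1,  92.5, 100.0],   # a finger distorts the field at (1, 1)
        [100.0, 100.1, 100.3]]
print(find_touch(grid))  # (1, 1)
```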

When a finger touches the screen or scanner, it also offers a path for these signals to travel through the body. Skin isn’t a great conductor, so the signals travel instead through extracellular fluid found in blood vessels and muscles. The signals emitted by fingerprint sensors fall below 10 megahertz, which is important because higher-frequency signals would be absorbed by these same fluids. It’s an added bonus that sub–10 MHz signals do not travel well through the air. They degrade and become hard to detect after traveling just 6 centimeters from a fingerprint sensor or 20 centimeters from a touchpad.

For their demonstration, the researchers wanted to not only transmit a signal from a fingerprint sensor through the body, but also alter it in order to send a message. But due to security concerns, many device manufacturers don’t allow users to access the software or hardware that directly controls these signals.

So, the group had to improvise. They wrote software that initiated power cycling, quickly turning the devices on and off, in effect sending a digital code with “on” representing a 1 and “off” a 0. By using this technique, they could transmit messages using the signals that commercial devices were already generating.
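That is classic on-off keying. Here is a sketch of the idea; the framing and timing are invented, and the paper's actual modulation details may differ.

```python
import time

BIT_PERIOD = 1 / 50  # seconds per bit at the paper's best rate of 50 bps

def transmit(bits: str, set_power) -> None:
    """Send a bit string by power-cycling the sensor: on = 1, off = 0.
    `set_power` stands in for whatever turns the real device on and off."""
    for bit in bits:
        set_power(bit == "1")
        time.sleep(BIT_PERIOD)
    set_power(False)  # leave the transmitter quiet afterward

# Demo: print the power states instead of driving real hardware.
transmit("1011", lambda on: print("ON" if on else "off", end=" "))
```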

To receive those messages, the group developed a bracelet wrapped in conductive copper tape that they attached to a subject’s arm, leg, or chest. This bracelet was connected to a receiver built from a USB TV tuner, an upconverter that could boost the low frequency signal to make it readable to the receiver, and a software-defined radio platform housed on a laptop.

With this system and their on-off code, the team transmitted password data at a maximum of 25 bps with the Verifi scanner, but managed 50 bps with the Adafruit touchpad. They found that the signal’s strength remained steady as it traveled throughout the entire body instead of degrading, as it would over air. Transmission was not significantly impacted by the height, weight, or posture of users, and when the group tested their system in the presence of other electronic devices, they found virtually no interference.

Gert Cauwenberghs, a biomedical researcher who has studied similar methods at the University of California, San Diego, thinks the group could achieve even higher data rates—potentially hundreds of bits per second—by gaining direct access to the fingerprint sensors.

For now, the group says that even the relative snail’s pace of 50 bps is sufficient to send a passcode that could unlock a door if a user were to touch their smartphone’s fingerprint sensor and the door handle at the same time. But Cauwenberghs points out that the convenience of this method only increases with speed. At the present low rates, “you'd probably have to hold your finger on that patch for a few seconds for this to authenticate,” he says.  

Before entrusting any such system with the passcodes to his own front door, Walling of the University of Utah says he’d like to see more statistical analysis of how often this technique generates false positives and negatives. “If they really can transmit a strong enough signal and do this repeatedly, I really do think it's something of potential,” he says.

Cauwenberghs would also like to learn more about the biological impact of such transmissions before people start making a habit of using their bodies as communication links. The low frequencies used in this study have no known health impacts, but he says it would be best to study the effects of repeatedly sending such signals through the body in this manner before ruling it safe.

The Nvidia Titan X is one of the latest examples of GPU chips used in deep learning.

Fujitsu Memory Tech Speeds Up Deep-Learning AI

Artificial intelligence driven by deep learning often runs on many computer chips working together in parallel. But the deep-learning algorithms, called neural networks, can run only so fast in this parallel computing setup because of the limited speed with which data flows between the different chips. The Japan-based multinational Fujitsu has come up with a novel solution that sidesteps this limitation by enabling larger neural networks to exist on a single chip.

