In the latest TOP500 supercomputer ranking, published today, China’s supercomputers are still at the top of the pile—but the United States has caught up in number. Both nations now claim 171 systems in the ranking. And they are roughly equal in terms of raw computing power.
Both nations added new systems to tie in terms of the number of supercomputers on the list. They are followed by Germany, Japan, France, and the United Kingdom. China holds 33.3 percent of the list's aggregate Linpack performance, while the United States leads slightly with 33.9 percent.
Most of the top 10 supercomputers remained unchanged, with China’s Sunway TaihuLight still clocking in first at 93 petaflops and Tianhe-2 still second at 34 petaflops. Two new supercomputers joined the top 10: the Cori supercomputer at Berkeley Lab’s National Energy Research Scientific Computing Center—skating into the number five slot with 14 petaflops—and the Oakforest-PACS at Japan’s Joint Center for Advanced High Performance Computing—taking the number six slot with 13.6 petaflops. Other systems fell to make room, except for Piz Daint at the Swiss National Supercomputing Centre, which maintained the number eight position thanks to newly installed GPUs.
Since last November, the total performance of all 500 computers on the list has grown 60 percent, to 672 petaflops.
The top 10 supercomputers from the November 2016 Top500.org list.
Without us even knowing it, the connected devices in our homes and businesses can carry out nefarious tasks. Increasingly, the Internet of Things has become a weapon in hackers’ schemes. This is possible in large part due to manufacturers’ failure to program basic security measures into these devices.
Now, experts in the U.S. are asking regulators to step in. Calls for public policy to improve device security have reached a fever pitch following a series of high-profile denial-of-service attacks launched in part from unsuspecting DVRs, routers, and webcams. In October, hackers flooded the Internet service company Dyn with traffic by assembling millions of IoT devices into a virtual botnet using a malicious program called Mirai.
A key 5G technology got an important test over the summer in an unlikely place. In August, a group of students from New York University packed up a van full of radio equipment and drove for ten hours to the rural town of Riner in southwest Virginia. Once there, they erected a transmitter on the front porch of the mountain home of their professor, Ted Rappaport, and pointed it out over patches of forest toward a blue-green horizon.
Then, the students spent two long days driving their van up and down local roads to find 36 suitable test locations in the surrounding hills. An ideal pull-off would have ample parking space on a public lot, something not always easily available on rural backroads. At each location, they set up their receiver and searched the mountain air for millimeter waves emanating from the equipment stacked on the front porch.
To their delight, the group found that the waves could travel more than 10 kilometers in this rural setting, even when a hill or knot of trees blocked the most direct route to the receiver. The team detected millimeter waves at distances up to 10.8 kilometers at 14 spots that were within line of sight of the transmitter, and recorded them up to 10.6 kilometers away at 17 places where the receiver was shielded behind a hill or leafy grove. They achieved all this while broadcasting at 73 gigahertz (GHz) with minimal power—less than 1 watt.
“I was surprised we exceeded 10 kilometers with a few tens of milliwatts,” Rappaport says. “I expected we’d be able to go a few kilometers in non-line-of-sight, but we were able to go beyond 10.”
The 73 GHz frequency band is much higher than the sub-6 GHz frequencies that have traditionally been used for cellular signals. In June, the Federal Communications Commission opened 11 GHz of spectrum in the millimeter wave range (which spans 30 to 300 GHz) to carriers developing 5G technologies that will provide more bandwidth for more customers.
Rappaport says their results show that millimeter waves could potentially be used in rural macrocells—the large cellular base stations that cover wide areas. Until now, millimeter waves have delivered broadband Internet through fixed wireless, in which information travels between two stationary points, but they have never been used for cellular service.
Robert Heath, a wireless expert at the University of Texas at Austin, says the NYU group’s work adds another dimension to 5G development. “I think it's valuable in the sense that a lot of people in 5G are not thinking about the extended ranges in rural areas, they're thinking that range is, incorrectly, limited at high carrier frequencies,” Heath says.
In the past, Rappaport’s group has shown that a receiver positioned at street level can reliably pick up millimeter waves broadcast at 28 GHz and 73 GHz at a distance of up to 200 meters in New York City using less than 1 watt of transmitter power—even if the path to the transmitter is blocked by a towering row of buildings.
Before those results, many had thought it wasn’t possible to use millimeter waves for cellular networks in cities or in rural regions because the waves were too easily absorbed by molecules in the air and couldn’t penetrate windows or buildings. But Rappaport’s work showed that the tendency of these signals to reflect off of urban surfaces including streets and building facades was reliable enough to provide consistent network coverage at street level—outside, at least.
Whether or not their newest study will mean the same for millimeter waves in rural areas remains to be seen. Rappaport says the NYU team is one of the first to explore this potential for rural cellular, and he feels strongly that it could soon be incorporated into commercial systems for a variety of purposes including wide-band backhaul and as a replacement for fiber.
“The community has always been mistaken, thinking that millimeter waves don't go as far in clear weather and free space—they travel just as far as today’s lower frequencies if antennas have the same physical size,” Rappaport says. “I think it's definitely viable for mobile.”
Others aren’t convinced. Gabriel Rebeiz, a professor of electrical and computer engineering who leads wireless research at the University of California, San Diego, points out that the NYU group ran their tests on two clear days. Rain can degrade 73-GHz signals at a rate of 20 decibels per kilometer, which is equivalent to reducing signal strength 100-fold for every kilometer traveled.
“Rain at 73 GHz has significant, significant, unbelievable attenuation properties,” he says. “At these distances, the second it starts raining—I mean, misting, if it just mists—you lose your signal.”
Rebeiz says signals would hold up better at 28 GHz, degrading only 6- to 10-fold over a range of 10 kilometers. Millimeter waves will ultimately be more useful in cities, he says, but he doubts they will ever make sense for rural cellular networks: “It’s not going to happen. Period.”
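For a sense of scale, the attenuation figures Rebeiz cites can be checked with a quick decibel-to-linear conversion. This is a back-of-the-envelope sketch, not a calculation from either research group:

```python
import math

def db_to_linear(loss_db):
    """Convert a loss in decibels to a linear power ratio."""
    return 10 ** (loss_db / 10)

# Rebeiz's figure for rain at 73 GHz: roughly 20 dB of extra loss per
# kilometer, i.e. a 100-fold power reduction for every kilometer traveled.
per_km_73ghz = db_to_linear(20)          # -> 100.0

# Over the 10-kilometer distances in the NYU study, that compounds to
# 200 dB -- a factor of 10^20, which is effectively total signal loss.
over_10km_73ghz = db_to_linear(20 * 10)  # -> 1e20

# At 28 GHz, the quoted degradation is only 6- to 10-fold over the full
# 10 kilometers, which works out to roughly 8 to 10 dB in total.
over_10km_28ghz_db = 10 * math.log10(8)  # ~9 dB for an 8-fold loss
```

The asymmetry between the two bands is the crux of the disagreement: a few extra decibels per kilometer, compounded over rural distances, swings the link budget by many orders of magnitude.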
George R. MacCartney Jr., a fourth-year Ph.D. student in wireless engineering at NYU, thinks millimeter waves could perhaps serve rural cellular networks in five or 10 years, once the technology has matured. One challenge is that future antennas must aim a signal with some precision to make sure it arrives at the user, because millimeter waves reflect off objects and can take multiple paths from transmitter to receiver. As for millimeter waves making their rural cellular debut in the next few years: “I'd say I'm a little skeptical just because you'd have to have a lot of small antenna elements and you'd have to do a lot of beamforming and beam steering,” he says.
By collecting rural measurements of millimeter waves, the NYU experiment was designed to evaluate a propagation model that the standards group known as the 3rd Generation Partnership Project (3GPP) has put forth for simulating millimeter waves in rural areas. That model, known as 3GPP TR 38.900 Release 14, estimates the strength of a millimeter-wave signal emitted from a rural base station based on factors such as the height of the cell tower, the height of the average user, the height of any buildings in the area, the street width, and the broadcast frequency.
The NYU group suggests that because this model was “hastily adopted” from an earlier one used for lower frequencies, it is ill-suited to accurately predicting how higher frequencies behave. Therefore, according to Rappaport’s team, the model will likely predict greater losses at longer distances than actually occur. Rappaport prefers what’s called a close-in (CI) free-space reference distance model, which better fits his measurements. A representative of 3GPP was not available for comment.
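The CI model Rappaport favors is simple enough to sketch in a few lines: path loss is the free-space loss at a 1-meter reference distance plus a distance term scaled by a single path-loss exponent. The exponents below are placeholder values for illustration, not the ones fitted in the NYU study:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_1m_db(freq_hz):
    """Free-space path loss at the 1-meter reference distance, in dB."""
    return 20 * math.log10(4 * math.pi * 1.0 * freq_hz / C)

def ci_path_loss_db(freq_hz, dist_m, ple):
    """Close-in (CI) free-space reference distance model:
    PL(d) = FSPL(f, 1 m) + 10 * n * log10(d / 1 m), with d in meters.
    `ple` is the path-loss exponent n; n = 2 is ideal free space."""
    return fspl_1m_db(freq_hz) + 10 * ple * math.log10(dist_m)

# Illustrative numbers only: an exponent near 2 for line of sight, and a
# somewhat larger one when terrain or foliage blocks the direct path.
los_loss = ci_path_loss_db(73e9, 10_800, ple=2.0)   # ~150 dB at 10.8 km
nlos_loss = ci_path_loss_db(73e9, 10_600, ple=2.9)  # higher when obstructed
```

The model's appeal is that one physically anchored reference point and one fitted exponent cover all distances, rather than the larger parameter set of the 3GPP formulation.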
In October, Rappaport presented the group's work at the Association for Computing Machinery’s MobiCom conference, and the latest study will be published in the proceedings. In the meantime, it is posted to arXiv.
“It’s really big news,” says angel investor Chad Anderson, the managing director of Space Angels Network—one of Planetary Resources’ early investors.
Several companies, among them Planetary Resources and Deep Space Industries, plan on mining space for its riches. Planetary Resources aims to launch the first commercial asteroid prospecting mission by 2020. Near-Earth Asteroids are an untapped reserve of rocket fuel, materials, and minerals, the company explains on its website—although there is some disagreement over whether suitable asteroids are actually available.
In a press release yesterday, Planetary Resources announced that a deal with the Government of the Grand Duchy of Luxembourg and Société Nationale de Crédit et d’Investissement (SNCI), a banking institution, had been finalized.
The deal includes a €12 million direct capital investment and €13 million in grants. Planetary Resources will set up a Luxembourg office, SNCI will take a public equity position in the company, and an advisory board member of the SpaceResources.lu initiative will join Planetary Resources’ Board of Directors, according to the release.
“Asteroid mining is an expensive endeavor,” Anderson says. “Getting this funding is a real benefit to their efforts.”
Planetary Resources and Paul Zenners of Luxembourg’s Ministère de l'Économie did not respond to requests for further comment.
Amara Graps, a planetary scientist, asteroid mining advocate, and independent consultant for the Luxembourg Ministry of Economy who lives in Riga, Latvia, says “They can put more attention into the asteroid mining business” instead of getting “detoured” by monetary constraints.
Graps says the money is a runway for Planetary Resources to focus on characterizing and identifying proper asteroids as well as refining sensor and propulsion technologies, for example.
She did wonder whether the announcement might deter venture capitalists from investing, because of a perceived risk of government involvement.
“Quite the opposite,” says Anderson.
He says the government is only an equity holder with the same stakes as other investors and it would not dictate how the business operates. Because the investment increases credibility, “I think this will encourage other investors to come on.”
Online voting is sometimes heralded as a solution to all our election headaches. Proponents claim it eliminates hassle, provides better verification for voters and auditors, and may even increase voter turnout. In reality, it’s not a panacea, and certainly not ready for use in U.S. elections.
In the face of determined, well-resourced attackers, the few online voting trials that have been carried out in the U.S. do not inspire confidence. In 2010, Washington, D.C., ran a pilot of an online voting system and invited security experts to try to breach it. Hackers changed all the votes within 48 hours. The 2016 Utah GOP Caucus included an online voting option that was rife with procedural mistakes, which prevented an estimated 10,000 Utahns from using the system.
Online voting has also been conducted during live elections in places like Estonia, Norway, and Australia. It is hard to know the degree of security attained in these elections, because vendors and officials have no incentive to disclose suspected breaches. However, independent researchers discovered vulnerabilities in both the 2015 New South Wales online election and in Estonia’s system in a 2013 study. Among the problems that were discovered: exploitable vulnerabilities in the connections between voters’ computers and election servers, as well as procedural and architectural weaknesses that could allow state-level attackers like Russia to manipulate entire elections.
Voting is an unusually difficult security problem, because officials must guarantee a correct result while simultaneously ensuring that voters’ choices remain private—and all without being able to trust any individual participants to act impartially. Furthermore, the election has to produce a result on election day, and we cannot delay voting or rerun the election if the system comes under attack. These requirements mean that traditional online security techniques, like those used to protect banking and commerce, are insufficient for elections.
Today, the vast majority of secure Internet communication takes place using Transport Layer Security (TLS), a cryptographic protocol in which vulnerabilities continue to be found. Three times in the past two years, researchers uncovered TLS flaws that could compromise up to one-third of popular sites. If an online voting system were among the susceptible sites, attackers might be able to intercept votes, discover how individuals voted, prevent votes from being cast, or even change votes.
For another sobering example of what might go wrong with online voting, look no further than the Mirai botnet attack that just last month interrupted access to many of the Web’s most popular sites. Had the target been an online election, large portions of the country would have been unable to vote.
Beyond these obstacles, an online voting system needs to securely authenticate voters’ identities. In Estonia—a country less populous than 41 U.S. states—this is accomplished using cryptographic chips embedded in every citizen’s national ID card which they scan using a card reader that they can attach to their laptops. We have no similar infrastructure in the United States, and a significant number of eligible voters lack any form of government-issued identification.
Overcoming these security challenges remains an area of active research. Computer scientists have proposed promising techniques for securing online elections based on advanced cryptography, which would let voters confirm that their votes were properly counted without revealing to anyone else how they voted. However, no technique has yet been demonstrated to be both practical enough for use by real voters and sufficient to protect against a well-resourced nation-state. Considerable controversy remains among security and privacy researchers about what it even means for an online election to be secure.
Even ignoring the security risks, the benefits of Internet voting are less certain than was once believed. Evidence from Estonia—including a rise in overall voter turnout of just 1.5 percent attributable to online voting—suggests that most voters would have cast ballots even without Internet voting. Internet voting seems primarily to make voting easier for those who already vote. What is certain is that online voting would make it easier for external players to tamper with elections.
In light of the uncertain benefits of voting online, it is crucial that we in the United States not rush to entrust our democracy to it. Some of the most difficult unsolved problems in computer security stand in the way: authenticating remote users, protecting home computers from malware, safeguarding online communication, preventing denial-of-service attacks, and protecting critical infrastructure from nation-state attackers. These challenges are among the most exciting and important in computer science and engineering—and many are striving to address them—but it may be decades, if ever, before they are solved to the level that we can vote online with confidence.
Robert Cunningham is chair of the IEEE Cybersecurity Initiative. Matthew Bernhard is a second-year computer science Ph.D. student focused on security issues at the University of Michigan and tweets from @umbernhard. J. Alex Halderman is a professor of computer science and engineering at the University of Michigan and director of Michigan's Center for Computer Security and Society.
The project aims to achieve two goals: clearly marking addresses for autonomous vehicles, and reducing the energy and data storage costs of home surveillance systems. An early prototype mailbox attachment suggests that the trick, in both cases, may be radio-frequency identification.
Powered by an Arduino Yún microcontroller board, one component of the ADDSMART device controls a high-frequency 13.56-MHz RFID reader, a USB camera, a passive-infrared motion sensor, a solenoid lock, and an onboard Wi-Fi module. The second component is an RFID tag.
Ray came up with the idea when she saw an Amazon ad for drones delivering packages. She wondered how that would be possible, as some of her regular mail still arrives at the wrong address.
In the United States, Amazon, Google, and startups such as the Reno, Nev.–based Flirtey are trying delivery via drones. One of a drone’s challenges is to home in on its destination. But accurately identifying addresses with standard GPS alone is difficult, Ray says, because GPS provides only latitude and longitude. A GPS sensor is good for identifying a location—but an additional system is needed for pinpointing a precise address.
Some approaches to the location problem rely on computer vision techniques with cameras. But Ray points out that identifying addresses even with human vision can be hard. At her house, the address is written on the pavement and “is not easily identifiable.” And Google Street View, which updates infrequently at best, doesn’t show that her neighbor’s house recently changed colors; nor would it work well for finding an address at night.
With an RFID tag on a home’s mailbox and an RFID reader on a drone or car, Ray believes that the delivery process could become relatively easy. The drone would use GPS to navigate to an address and then confirm that the address is correct by checking the RFID tag.
Once Ray decided to attach an RFID tag to a mailbox, she realized that RFID can do more than flag down drones: it offers security, too. An RFID-reader-equipped system could store a list of “safe” RFID tags whose possessors would be able to pass by a home or open the mailbox unimpeded.
Instead of a home surveillance system continuously checking for intruders, a video camera could save energy by starting to record only when an unrecognized vehicle or person passes the mailbox. The mailbox could also unlock when authorized users—such as a homeowner or mail carrier—arrive.
After soldering and wiring the necessary hardware for the smart mailbox and writing computer scripts for running commands, Ray and her student, Jonathan Ross Tew, tested the sensors indoors and outdoors.
When the motion sensor detected a change in passive infrared radiation—a type of electromagnetic radiation given off by anything warmer than absolute zero (about −273 °C)—the USB camera took a picture. Computer scripts sent the picture via email to a recipient and uploaded it to Dropbox.
Also, when an RFID tag came within the RFID reader’s limited detection range, the system checked whether the tag carried the homeowner ID or the postman ID. In either case, it would open the solenoid lock, but the postman tag also triggered an email alert.
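The access-control logic just described can be sketched in a few lines. This is an illustration only: the tag IDs and the `unlock`/`notify` callbacks are invented stand-ins, while the real system runs on the Arduino hardware described above.

```python
# Sketch of the tag-checking logic: unlock for any recognized tag,
# and additionally send an email alert when the postman's tag appears.
# Tag IDs below are made up for illustration.
SAFE_TAGS = {
    "04:A2:19:5F": "homeowner",
    "04:7B:E3:1C": "postman",
}

def handle_tag(tag_id, unlock, notify):
    """Open the solenoid lock for known tags; alert on the postman ID."""
    role = SAFE_TAGS.get(tag_id)
    if role is None:
        return False          # unknown tag: leave the mailbox locked
    unlock()
    if role == "postman":
        notify("Mail carrier at the mailbox")
    return True

# Example run with stub callbacks that just record what happened:
events = []
handle_tag("04:7B:E3:1C",
           unlock=lambda: events.append("unlocked"),
           notify=lambda msg: events.append(msg))
# events -> ["unlocked", "Mail carrier at the mailbox"]
```

The same lookup table doubles as the “safe” list for the surveillance function: a tag absent from the table is exactly the trigger for the camera to start recording.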
Ray said the passive-infrared motion sensor the team used pretty much failed outdoors—there were 931 false positives out of 937 tries. Using a more expensive sensor could help, she said.
Future work includes testing the system under various conditions and investigating the total area of surveillance coverage, technical interference with nearby smart mailboxes (like this Kickstarter project, Mr. Postman), security, and privacy.
So how will smart mailboxes flag down drones in apartment and condo complexes with cluster mailboxes? Ray told IEEE Spectrum that each individual mailbox could have its own RFID tag with an apartment number or post box number for flagging down drones. As for the surveillance function, Ray says a monitoring system similar to the one used in the prototype could monitor the door of each apartment.
Last Friday, I was in a van in Denver, Colorado, with Zooko Wilcox, the CEO of ZCash, a company that on 28 October will launch a new blockchain-based digital currency of the same name. On the floor next to me was a pile of newly purchased computer equipment. I knew we were going to a hotel, but I didn’t know where. I knew only that I’d be there for the next two days straight and that it would be my job to watch, ask questions, stave off sleep, and document as much as I possibly could.
That day began a cryptographic ceremony of sorts, one that will make or break a new digital currency. ZCash is identical to Bitcoin in a lot of ways. It’s founded on a digital ledger of transactions, called a blockchain, that exists on an army of computers that can be anywhere in the world. But it differs from Bitcoin in one critical way: It will be completely anonymous. Although privacy was a motivating factor for Bitcoin’s flock of early adopters, Bitcoin doesn’t deliver the goods. For those who want to digitally replicate the experience of slipping on a ski mask and handing over an envelope of unmarked bills, ZCash is the new way to go.
On Friday, a series of distributed denial-of-service attacks hit Dyn, a company that provides a form of traffic control for popular websites, and interrupted some users’ access to sites including Github, Twitter, and Netflix. Since then, it has become clear that these attacks were made possible by security vulnerabilities in millions of devices within the Internet of Things.
On Monday at the National Cyber Security Alliance’s Cybersecurity Summit in New York City, industry leaders from security firms, Internet service providers, and device manufacturers fretted over the implications. Panelists spoke about the existential dangers that companies in the fast-growing IoT sector face if they continue to fail to secure these devices and debated ways in which the industry can improve security within this ecosystem.
“Friday showed us that the genie is well out of the bottle at this point,” said Andrew Lee, CEO at security company ESET North America. “This should probably be the wake-up call to manufacturers to start taking this seriously.”
While it’s still not clear who executed Friday’s attacks, Dyn has announced that hackers orchestrated them across “tens of millions” of IP addresses gathered through Mirai, malware that scans the Internet for connected devices with weak security. The malware then enlists these devices into a massive global network called a botnet. Increasingly, hackers have used these networks to launch distributed denial-of-service attacks, in which they instruct many devices to send traffic to a target at once in order to overload its capacity and prevent real users from accessing a website or service.
On Friday, multiple distributed denial-of-service (DDoS) attacks hit the Internet services company Dyn. The cyberattack prevented many users on the U.S. East Coast from navigating to the most popular websites of Dyn customers, which include Twitter, Reddit, and Netflix.
Dyn detected the first attack at 7:10 a.m. Eastern time on Friday and restored normal service about two hours later. Then at 11:52 a.m. ET, Dyn began investigating a second attack. By 2:00 p.m., the company said it was still working to resolve “several attacks” at once.
The interruptions inconvenienced many Internet users and disrupted the daily operations of Internet giants in entertainment, e-commerce, and social media. There still aren’t many details available about Dyn’s predicament, and the company did not immediately respond to an interview request. But we do know from Dyn’s posts that the first two assaults on its network were DDoS attacks. Its customers’ outages show yet again that major Internet companies remain vulnerable to this common hacker scheme—one that has plagued networks since 2000.
A denial-of-service attack aims to slow or stop users from accessing content or services by impeding the ability of a network or server to respond to their requests. The word “distributed” means that hackers executed the Dyn attacks by infecting and controlling a large network of computers called a botnet, rather than running it from a single machine that they own.
Hackers can assemble a botnet by spreading malware, which is often done by prompting unsuspecting users to click a link or download a file. That malware can be programmed to periodically check with a host computer owned by hackers for further instructions. To launch an attack, the hackers, or bot-herders, send a message through this “command and control” channel, prompting infected computers to send many requests for a particular website, server, or service all at once. Some of the biggest botnets in history have boasted 2 million computers, capable of sending up to 74 billion spam emails a day.
The sudden onslaught of requests quickly gobbles up all the network's bandwidth, disk space, or processing power. That means real users can’t get their requests through because the system is too busy trying to respond to all the bots. In the worst cases, a DDoS can crash a system, taking it completely offline.
Both of Friday’s attacks targeted Dyn’s Managed Domain Name System. Through this system, Dyn provides a routing service that translates the Web addresses users type into a browser, such as spectrum.ieee.org, into numerical IP addresses. When a user types in a Web address, the browser first queries a Dyn server, which looks up the IP address of the server hosting the content the user is trying to reach and passes that information back to the browser.
To disrupt this process, says Sanjay Goel, a professor of information technology at the State University of New York (SUNY) at Albany, the bot-herders probably sent tons of translation requests directly to Dyn’s servers by looking up the servers’ IP addresses. They could have also simply asked the bots to send requests for Amazon.com and Twitter.com to cause similar issues. Attacking a DNS or a content delivery provider such as Dyn or Akamai in this manner gives hackers the ability to interrupt many more companies than they could by directly attacking corporate servers, because several companies share Dyn's network.
In Dyn’s case, it has built its Managed DNS on an architecture called Anycast, in which any particular IP address for a server in its system can actually be routed through servers in more than a dozen data centers. So if the IP address of one server is targeted, 10 others may still be able to handle normal traffic while it's besieged with bot requests. Art Manion, a technical manager at Carnegie Mellon University’s Software Engineering Institute, says this system should make Dyn more resilient to DDoS attacks, and the company has touted it as highly secure.
Dyn said on Friday in an update to its website that the first attack mainly impacted services in the “US East.” The Anycast network includes data centers in Washington, D.C., Miami, and Newark, N.J., as well as in Dallas and Chicago, though it’s not clear whether these locations were specifically targeted.
Even in the affected region, only certain users experienced issues. One reason could be that other users’ browsers had previously used Dyn to locate the specific servers they needed to reach, say, Twitter.com. Because that information was cached on their machines, those users could bypass Dyn to fetch the desired content, so long as the servers that store Twitter’s website were still functioning.
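The lookup-plus-caching behavior described above can be reduced to a toy resolver. The hostnames and addresses below are invented placeholders (drawn from the reserved TEST-NET range), not Dyn’s real records:

```python
# Toy model of a managed-DNS lookup, plus the client-side caching that
# let some users ride out Friday's outage. All records are fictional.
AUTHORITATIVE_RECORDS = {            # what the DNS provider knows
    "example-customer.com": "203.0.113.25",
}
local_cache = {}                     # what a user's machine remembers

def resolve(hostname):
    """Answer from the local cache if possible; ask the provider otherwise."""
    if hostname in local_cache:
        return local_cache[hostname]           # provider never contacted
    ip = AUTHORITATIVE_RECORDS.get(hostname)   # this step fails in an outage
    if ip is not None:
        local_cache[hostname] = ip
    return ip

resolve("example-customer.com")   # first lookup goes to the provider
resolve("example-customer.com")   # repeat lookup is served from the cache
```

Once an answer sits in the local cache, the provider can be completely unreachable and the user still gets to the site—which is exactly why the outage hit some users and spared others.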
Another reason for the inconsistent impacts could be that a common mechanism for handling DDoS attacks is to simply drop every fifth request from the queue in order to relieve the network of traffic. The result: Some requests from legitimate users wind up being dropped along with those from bots.
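That drop-every-Nth strategy is easy to picture in code. The sketch below is a deliberately crude illustration; real load-shedding schemes are more sophisticated:

```python
def shed_load(requests, drop_every=5):
    """Crude DDoS mitigation: drop every Nth request from the queue,
    shedding traffic from bots and legitimate users alike."""
    return [req for position, req in enumerate(requests, start=1)
            if position % drop_every != 0]

# 100 queued requests, mitigation tuned to drop every fifth one:
queue = [f"req-{i}" for i in range(1, 101)]
survivors = shed_load(queue)
# len(survivors) -> 80: a fifth of all traffic, good and bad, is gone
```

Because the filter cannot tell a bot from a person, some fraction of real users is guaranteed to be turned away—the inconsistent impact the paragraph above describes.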
Once an attack begins, companies can bring backup servers online to manage the blizzard of requests. Victims can also work with Internet service providers to block the IP addresses of devices generating the most traffic, which means that they're likely part of the botnet. "You start blocking the different addresses where it's coming from, so depending on how massive the botnet is, it may take some time," says SUNY Albany's Goel.
Even with state-of-the-art protections and mitigation strategies, companies are limited by the amount of bandwidth they have to handle such sudden onslaughts. “Ultimately, Akamai has total x amount of bandwidth, and if the attacker is sending x-plus-10 traffic, the attacker still wins,” says Carnegie Mellon's Manion. “It mathematically favors whoever has more bandwidth or more traffic, and the attackers today can have more traffic.”
Dyn’s global network manages over 500 billion queries a month, so the culprits would have had to send many millions or even billions of requests simultaneously in order to stall it. Manion says that to prevent DDoS attacks, companies must address root causes such as poor IoT security, rather than scrambling to stop them once they’ve begun.
Modern computers still cannot efficiently find the best solution to the classic “traveling salesman” problem, and even finding good approximate solutions is challenging. But finding the shortest traveling-salesman route among many different cities is more than just an academic exercise. This class of problems lies at the heart of many real-world business challenges, such as scheduling delivery truck routes or discovering new pharmaceutical drugs.
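To make the difficulty concrete, here is a small sketch contrasting the exact brute-force answer with the common nearest-neighbor shortcut, on made-up city coordinates. Brute force must check (n−1)! tours, which is why it stops being feasible beyond a handful of cities:

```python
import itertools
import math

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(points):
    """Exact optimum by trying every permutation: (n-1)! tours,
    so only feasible for very small instances."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length(points, (0,) + p))
    return (0,) + best

def nearest_neighbor(points):
    """Cheap heuristic: always hop to the closest unvisited city.
    Fast, but can return noticeably longer tours than the optimum."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited,
                  key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 1)]
exact = tour_length(cities, brute_force(cities))
approx = tour_length(cities, nearest_neighbor(cities))
# exact <= approx always; closing that gap at scale is the hard part
```

Six cities take microseconds; a few dozen already exhaust brute force entirely, which is why route-scheduling software leans on heuristics whose answers may fall short of the true optimum.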