We don't have much experience with programmable matter yet; it's still very much a technology that’s slowly emerging from research labs. MIT is one of those research centers, and Basheer Tome, a master's student at the MIT Tangible Media Group, has been working on one type of programmable material. Tome’s “membrane-backed rigid material,” called Exoskin, is made up of tessellated triangles of firm silicone mounted on top of a stack of flexible silicone bladders. By selectively inflating these air bladders, the Exoskin can dynamically change its shape to react to your touch, communicate information, change functionality, and more.
If you look at your smartphone right now, there’s a good chance it’s covered in smudges. We’re not judging, just letting you know that those oily fingerprints are a security liability. Cybersecurity experts have shown it’s possible to read smudge patterns on a smartphone screen to determine which keys an owner presses most often. Knowing this, a hacker could guess a passcode with relative ease.
Miraculously (and disgustingly), these smudges often persist even after you slip your phone into your pocket or purse. Though smudge attacks haven’t been widely reported in real life, research on their feasibility highlights a potential weakness in mobile security. To fend them off, students from the Universidade Federal de Minas Gerais in Brazil, led by advisor Leonardo Oliveira, have developed a security feature called NomadiKey (because the keys act like nomads, wandering across the screen).
NomadiKey shrinks the passcode entry keys on a locked smartphone screen to about one-fourth of their original size and scrambles them into a new arrangement every time a user tries to unlock their phone. By mixing up the keys, NomadiKey essentially distributes oily smudges more evenly across the screen, leaving a would-be hacker puzzled as to which keys a user actually pressed.
There is one major drawback to this design, however. Logging in with NomadiKey takes at least 1.5 seconds longer than typing in a PIN on a classic keyboard. Since heavy users unlock their smartphones up to nine times per hour, this delay can add up.
Artur Luis de Souza, an undergraduate studying cybersecurity and a member of the NomadiKey team, demonstrated the software last week at the IEEE International Conference on Communications in Kuala Lumpur, Malaysia. “People are more concerned about it being simple or easy to use than it being secure,” he admits.
Luis and his collaborators evaluated the security of NomadiKey against four other authentication methods. They tested the classic PIN code, an Android option that traces the pattern of a user’s finger across the screen, a random keyboard generator, and the new Knock Code system, by South Korean electronics company LG, that detects a specific sequence of taps anywhere on the screen.
As a measure of security, they compared the number of possible guesses it would take to unlock a smartphone using each authentication method if the phone were subjected to various hacks including smudge attacks. NomadiKey bested all except the random keyboard generator.
However, NomadiKey is unlikely to catch on if users aren’t willing to trade a bit of convenience for extra security. Case in point: iPhone users can set the length of their passcodes to be between four and six digits. A longer passcode is inherently more secure. Still, one small study found that the average passcode spans just 4.5 digits (Apple has since changed its default passcode setting to six digits).
To make NomadiKey slightly easier to use, the scrambled design keeps each number in the same position relative to its neighbors. For example, the 1 always winds up to the upper left of the 5, and the 3 is always above the 6. The shrunken keys make it possible to obey this rule and still arrange the numbers in clumps scattered across the screen so that oily smudges are broadly distributed to obscure the true passcode.
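One way to satisfy this neighbor-order rule is to draw random coordinates for the keypad's columns and rows on each unlock and then sort them, so every key keeps its relative position while the grid as a whole lands somewhere new. The Python sketch below is an illustration of that idea, not NomadiKey's actual layout algorithm; the screen and key dimensions are assumptions.

```python
import random

# The 10-key pad as rows and columns; "" marks empty slots.
KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["",  "0", ""]]

def scramble_layout(screen_w=300, screen_h=400, key_size=25):
    """Return {key: (x, y)} with random positions that preserve the
    relative ordering of rows and columns, so 1 always ends up to the
    upper left of 5 and 3 always sits directly above 6."""
    # Draw random distinct coordinates, then sort so column/row order holds.
    xs = sorted(random.sample(range(screen_w - key_size), 3))
    ys = sorted(random.sample(range(screen_h - key_size), 4))
    return {key: (xs[c], ys[r])
            for r, row in enumerate(KEYPAD)
            for c, key in enumerate(row) if key}
```

Because only the column and row coordinates change, a user who has learned the relative layout can still hunt for digits quickly, while the smudges land in different places on every unlock.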
To gauge usability, the team asked 18 people (mainly their friends and family members) to test their system against a classic keyboard and the random keyboard generator. The classic keyboard was by far the easiest and fastest to use, but NomadiKey was 40 percent faster than the random keyboard generator. The students say it offers the best mix of security and usability of the methods they tested.
The group also noticed that over the course of unlocking their phones five times with NomadiKey, users logged their fastest speed on the fifth run. This means the delay may be partly due to a learning curve that users can overcome with time. “When people see it for the first time, it’s overwhelming and people are confused,” Luis says. “But over time, as you get used to it, it gets faster.”
Luis says that, in addition to smudge attacks, NomadiKey could protect against vision attacks in which hackers record a video of a user unlocking his or her phone. By subjecting this recording to digital pattern analysis, hackers can figure out where the user was touching the screen and make a reasonable guess at the PIN. Cyber experts who have carried out smudge attacks in the lab were successful at unlocking phones up to 92 percent of the time. Vision attacks were up to 91 percent effective.
The group has toyed with design elements for NomadiKey aimed at improving security and ease of use. In one version, each of the keys was wrapped in a colored band in an attempt to more clearly associate those that share a row (such as 4, 5, 6). But that just seemed to confuse people. At one point, they tested an iteration that required users to not only choose the right key, but swipe it in the correct direction. They quickly abandoned that idea, too.
Luis hopes NomadiKey can live on, even if it’s only ever adopted by a small number of zealots who are hyperconcerned about keeping their phones safe. Right now, the feature is not yet available to the public. The team has installed a prototype on a few phones but hopes to catch the eye of, say, a handset maker or a security team to further fund its development.
On Thursday, astronauts on board the International Space Station were scheduled to spend about 45 minutes inflating the BEAM (Bigelow Expandable Activity Module), a room made out of fabric designed to be blown up like a balloon with air from inside the ISS. It sounds like a simple enough process, but as with everything in space, it isn't, and it wasn't. After BEAM stubbornly refused to balloonify itself on Thursday after a couple of hours of intermittent manual inflation, NASA decided to stop and try to figure out what was going on.
Today, the process resumed, and after about 8 hours of stop-and-go pressurization, BEAM has finally reached its final (and pleasingly round) shape.
At 9 a.m. GMT this morning, funding closed on an entity called The DAO. It’s a blockchain-enabled financial vehicle that’s structured kind of like a cross between Kickstarter and a venture capital fund and which now runs autonomously—no humans needed—on the fledgling Ethereum network. The DAO (short for decentralized autonomous organization) raised over US $150 million worth of the bitcoin-like cryptocurrency, Ether, during a feverish, 27-day sale.
The DAO’s launch is a feat that should surely stand out as a feather in the cap for the Ethereum network, as it is the most successful crowdfunding campaign yet documented anywhere, ever.
But yesterday, just hours before The DAO was scheduled to open for business and begin taking project proposals, three blockchain researchers published an article outlining multiple flaws in the governance structure of the organization that they say could be used as vectors for attack. The researchers are asking everyone involved with The DAO to temporarily halt funding activities and fix the critical problems.
“The attacks are quite real. So, somebody has to do something about them,” says Emin Gun Sirer, one of the authors of the article and of the blog where it was first published.
The DAO is the first iteration on the Ethereum network of an idea that has been floating around the cryptocurrency space for a few years now: that you could take all the functions of an investment vehicle—fund storage, project vetting and approval, fund disbursement, and profit allocation—and handle them on a blockchain, thereby creating what is effectively a corporation without jurisdictional anchors. Equally attractive to some is the fact that a blockchain-enabled organization is completely transparent and does not rely on a managerial class with high salaries to complete its functions. Everything is done by the code, which anyone can see and audit.
What investors who jump on board do rely on, however, is the expertise of the people who write and audit the code. They have to trust not only that the software is secure but also that the governance models work the way they are intended.
This second part is where Sirer and his co-authors, Vlad Zamfir and Dino Mark, say the DAO creators have failed.
Here’s a brief explanation of how The DAO is supposed to work. It’s first created as a contract written into an address on the Ethereum blockchain. The code for the contract specifies all the rules of the game. This was done by a few well-known people in the Ethereum community.
In order to play the game, you send Ether (the native currency on the Ethereum network) to the contract address and you get tokens back in exchange. These tokens signify your proportional ownership over the mass of Ether poured into the contract.
That period just ended. Now, in order to unlock the funds, people will present project proposals and the DAO owners will vote on whether the projects are worthy of investment. For example, the same people who wrote the DAO contract are also planning to solicit investments from the organization to fund Slock.it, a project that is hell-bent on decentralizing the sharing economy and replicating corporations like Uber and Airbnb as user-owned entities.
At first the voting sounds simple. But there are a few notable details that complicate any game theory analysis of the governance structure.
Voting is not a DAO participant’s only power. If I have DAO tokens, I can also decide to split from the larger DAO and create my own smaller one.
I can also sell my DAO tokens to anyone who will buy them.
If I vote on a proposal, I lose my right to split and I don’t get it back until the polls have closed. Nor can I sell my tokens while voting is in progress.
In order for a vote to count, a quorum must be reached. The size of the quorum depends on the amount of funds requested in the proposal.
There actually is a managerial class with very limited duties. There are 11 so-called “curators” who read proposals and vet them for basic flaws and scamminess. They also manage the status of the payment addresses on the funding proposals. In order for an address to receive funding it must be whitelisted by the curators.
The DAO can vote to fire and replace curators.
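Taken together, these rules can be sketched as a toy state machine. The Python below is an illustration of the mechanics described above, not the actual DAO contract (which runs as compiled Solidity on Ethereum); the class names and the quorum formula are assumptions made for the sketch.

```python
class Proposal:
    def __init__(self, amount_requested, total_supply):
        self.amount = amount_requested
        # Assumed rule of thumb: larger funding requests demand a larger
        # quorum. The real contract uses its own formula.
        self.quorum = total_supply * min(0.5, 0.2 + amount_requested / (3.0 * total_supply))
        self.yes = 0
        self.no = 0

    def passes(self):
        # A proposal needs both a quorum and a strict majority of tokens.
        return (self.yes + self.no) >= self.quorum and self.yes > self.no

class TokenHolder:
    def __init__(self, tokens):
        self.tokens = tokens
        self.locked = False  # True while a vote this holder joined is open

    def vote(self, proposal, approve):
        # Voting forfeits the right to split or sell until the poll closes.
        self.locked = True
        if approve:
            proposal.yes += self.tokens
        else:
            proposal.no += self.tokens

    def split(self):
        if self.locked:
            raise RuntimeError("cannot split while voting is in progress")
        return "child DAO holding this holder's share of the Ether"
```

Even this stripped-down model shows where the incentive problem comes from: once a holder calls `vote`, the `split` escape hatch is locked away until the poll ends, which is exactly the cost on voting that the researchers highlight below.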
It’s starting to sound a bit more complicated, isn’t it? I could go on. But the point here is that the voting apparatus has a lot of moving parts. According to Sirer and his colleagues, the machine has not been properly tuned to get the desired outcome.
“In general what you really want in any kind of a voting-governed structure like the DAO is you want the voters to vote their true preferences. You want them voting in line with what they want to see happen,” says Sirer. In other words, if a token-holder thinks that the proposal will yield profits and increase the net worth of the DAO, he should vote yes. If not, he should vote no. But that’s not what we’re likely to see, according to the analysis.
“For a number of reasons it turns out that the mechanism encoded in the DAO is not in line with these principles. Certain people have incentives to behave in a strategic fashion,” says Sirer.
For example, Vlad Zamfir, one of the co-authors, who is also one of the curators for The DAO, points to a strong incentive not to cast negative votes in the organization. Anyone who votes on a proposal also loses the right to split apart from the DAO until the voting ends and the project in question is either discarded or funded. Zamfir argues that this amounts to a cost on no votes, one that increases the likelihood that people who would otherwise vote no and stop a proposal from going through will instead wait out the vote and just split from the DAO if it doesn’t go the way they wanted. In this scenario, the yes voters get what they wanted. The people who were paying attention and disagreed at least get to jump ship. It’s the people who didn’t vote and didn’t pay attention who lose the most; they are tugged along into bad projects, potentially ones that have been intentionally designed to profit only a fraction of the DAO owners.
“The people who don’t participate, the people who are just in it for the ride, who are non-active members of The DAO, they’re going to be the ones who get screwed by biases and vulnerabilities,” says Zamfir. “It’s the passive people, who are expecting this to go well because they trust Slock.it and the curators. But instead, the DAO as implemented today may just spend everyone’s money.”
The pro-yes voting bias is one of seven potentially critical scenarios that the authors outline in their paper. At the end they include options for how to fix each problem.
In order to move on a fix, The DAO would have to vote to write new code into a new Ethereum address and migrate all the funds. This would, of course, take time, which is the reason for the moratorium.
If a moratorium does take hold, it will be most immediately relevant to the Slock.it group, which has been drumming up support for a proposal that Stephan Tual, the COO of the company, says will request millions of dollars from the DAO.
In an interview on Thursday, Tual downplayed the severity of the DAO vulnerabilities. Regarding the voting bias, he said, “First of all it’s not in the realm of technical attacks because a technical attack would be—we broke your math and we can take stuff out of the contract. This is in the realm of social attacks. But who’s the attacker in this case? This is more a case of the governance model could be improved. Well, duh. Of course it could be improved. It will be improved and that’s the whole point.”
Tual argued that the DAO, regardless of the unexpected participation levels, is still an experiment and that, even more importantly, its fate is no longer in the hands of the people who created the code, but the people who hold the tokens.
Perhaps in concession to a growing chorus of concerned participants, Slock.it has outlined a proposal to the DAO to fund a permanent security team. But Tual says that the group will also go ahead with its originally planned proposal.
“We’ll see. It’s just a proposal. Anyone can go and make another proposal. That’s the beauty of the free market,” says Tual. “If we felt that there was a huge problem that we considered might happen, we would be the first to say, ‘Whoops, let’s do something about it. Let’s just address it. Let’s handle it.’ But in this particular case, this is more like improvements than anything else,” he says.
If the curators choose not to whitelist the Ethereum addresses referenced in the funding proposals, then they can shut down the DAO until they are satisfied that the problems are fixed (although the DAO could always retaliate by firing them). This is the step that Sirer, Zamfir, and Mark argue is justified, and the one they are now pushing for.
“Basically, if there’s any whitelisting or proposals before the DAO changes code, then I will be very concerned. I think the current code has some pretty clear biases and problems with it,” says Zamfir.
On Thursday a jury in San Francisco ruled in favor of Google, its Android mobile operating system and by extension the 1.4 billion active Android devices around the world. The jury decided what could be a precedent-setting lawsuit filed by Oracle, owner of the Java programming language. Oracle’s suit claimed Android’s software development ecosystem, created for programmers to develop apps using Java, infringes Oracle’s copyright.
But the jury said to Oracle, essentially: No. Google, the jury concluded, enjoys so-called “fair use” protection for software access points to Java called application programming interfaces or APIs.
The jury’s verdict, so long as it withstands what Oracle said on Thursday would be an appeal, arguably opens the door further for developers to enjoy protected use of other companies’ APIs. And that, says one leading software copyright expert, is good news for creative software developers and for users of the millions of apps, programs, and interfaces they create.
APIs are perhaps not so much creative expressions in themselves as gateways for others to innovate and work in tandem with existing platforms, and they are big business in some sectors of the software and online universe.
For instance, according to a study last year in the Harvard Business Review, companies and websites across the Internet offer up more than 12,000 APIs for programmers to use—as in the case of Android’s now-vindicated use of 37 of Java’s APIs. Salesforce.com, the study says, generates half of its revenue through APIs, while 60 percent of eBay’s revenue comes through its APIs. And a whopping 90 percent of Expedia’s revenue comes in through its APIs.
According to Pamela Samuelson, intellectual property law professor at the University of California, Berkeley, what could have been a bad development earlier this week may now have an upside if the jury’s verdict withstands appeal.
She says she was taken aback when U.S. District Judge William Haskell Alsup's instructions to the jury simply accepted without question a previous finding on this case—that Java’s APIs should be considered protected under copyright law. As IEEE Spectrum has previously noted, Samuelson says there’s plenty of precedent in the U.S. courts that disputes or denies copyright claims on APIs.
But because the Oracle case accepted APIs’ copyrightability without question, the lawsuit also became a test lab of sorts to discover, as a fallback position, how much fair use protection software developers enjoy when using APIs, even if they are copyrighted.
The answer, it appears, is considerable.
“Google had to say, ‘The judge has instructed you, the jury, that the Java API is copyrightable; the question is whether or not Google made fair use of it,’” she says. In its arguments, Google held that its use of Java’s APIs was transformative and not just slavish or copycat. Establishing that one’s use of a copyrighted work is transformative is a pillar of any argument for fair use of that work. In siding with Google, the jury accepted the company’s argument that Android is a new and creative work that transformatively builds upon what Java started with.
“The closing argument was one in which the lawyer for Google was able to say: Look, they tried to make a phone with Java, but they failed,” Samuelson says. “We did so, but we put five years worth of effort into developing this wonderful platform that in fact has become this huge ecosystem that Java developers all over the world have been able get more of their stuff on because of this. Essentially, [Oracle’s] argument is sour grapes.”
Should Oracle follow through with Thursday’s claim that it will appeal, Samuelson says the fact that the jury rendered a (by definition unanimous) verdict narrows the window of Oracle’s options. In 2012, Alsup ruled once already in Google’s favor, and then Oracle appealed in 2013. The U.S. Federal Circuit reversed Alsup’s ruling in 2014. Google then asked the U.S. Supreme Court to review the decision, which it denied. And that led to the current trial.
“It may be that calmer heads will lead to a decision to say, ‘If the jury found against us, it’s going to cost millions of dollars more,’” she says. “If the courts get tired of you appealing just because you lost, they can decide to order the other side, Oracle, to pay the attorney’s fees. The prospect that Google’s lawyers fees for litigating this case might have to be paid at least in part by Oracle could be another reason why they might say, ‘Enough is enough.’”
Regardless of how the current and perhaps final chapter in this lawsuit is concluded, she says it sets an important example that software companies and their lawyers can turn to when APIs come up in future software copyright lawsuits.
“While most of the decisions about APIs have been decided on copyrightability grounds, this case stands for the proposition now that fair use may be a viable way,” she says. “It sets a kind of precedent. Again, how much of a precedent depends on what the Federal Circuit does [with Oracle’s possible appeal].
“But I think that if I talk to a software company about the importance of this case, it’d be that you can win on more than one point. Make multiple arguments here. Make the argument that that API is not protectable by copyright law, and there’s plenty of precedent for that. But also make sure you have a fair use claim in there. Because as long as you reimplement an API and independently write your code, you should be OK.”
The sales pitch for full duplex is a powerful one: these new radios could instantly double the capacity of today’s wireless networks by transmitting and receiving signals on the same frequency, at the same time. That promise has made network engineers eager to deploy it in cellular base stations and mobile devices ever since the technology began to pick up steam around 2007.
But in reality, transmitting and receiving messages at the same time on the same frequency has an unfortunate side effect. It causes twice as much interference as performing each function in turn or on separate bands. So while full duplex radios can dramatically improve spectrum efficiency, the resulting interference means more connections would be lost if a network were constructed wholly of them.
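A back-of-the-envelope Shannon-capacity sketch makes the tradeoff concrete. Everything below is illustrative: the power levels are made-up numbers, and interference is simply treated as extra noise, which is a common textbook simplification rather than a model of any real deployment.

```python
import math

def shannon_rate(signal, interference, noise=1.0):
    """Spectral efficiency in bits/s/Hz, treating interference as noise."""
    return math.log2(1 + signal / (noise + interference))

S = 100.0  # desired-signal power (assumed, arbitrary units)
I = 10.0   # interference power from neighboring cells (assumed)

# Half duplex: one link direction at a time; neighbors interfere once.
half_duplex = shannon_rate(S, I)

# Full duplex: both directions at once doubles spectral efficiency, but
# neighbors now transmit and receive simultaneously too, so each
# receiver sees roughly twice the interference.
full_duplex = 2 * shannon_rate(S, 2 * I)
per_direction = shannon_rate(S, 2 * I)  # each individual direction is slower
```

With these assumed numbers the aggregate full-duplex rate still comes out ahead, but each individual link direction runs slower than its half-duplex counterpart; at a cell edge, where the signal is weak and the interference strong, that per-link degradation is what shows up as dropped connections.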
A lawsuit between Oracle and Google that went to the jury this week has been called the “end of programming as we know it” and the case that “will decide the future of software.” The media is probably hyperventilating again, says one legal expert, but real chilling effects could still stem from this strange but important legal dispute.
On Monday, lawyers for Google and Oracle presented closing arguments to a jury in San Francisco in the latest installment of a lawsuit that’s been in and out of courtrooms since 2010. At issue is Oracle’s claim that Google’s Android mobile operating system (Google bought Android in 2005) infringes Oracle’s copyright on the Java programming language, 37 of whose application programming interfaces (APIs) Android undisputedly uses.
Google says Sun Microsystems (Java’s creator and the copyright’s owner when Android was being developed) touted Java’s open APIs at the time. But Oracle, which acquired Sun in 2010, clearly doesn’t see Java as being as open to outside developers as its creator did. It’s only after Java’s owners failed to develop their own line of Java smartphones, Google says, that Oracle began trying to elbow in on Android’s success.
“Oracle took none of the risk but wants all the credit and a lot of the money,” Google attorney Robert Van Nest said in his closing argument.
On the other hand, Oracle argued that copyright protection of Java APIs is not in question. Instead, Oracle lawyer Peter Bicks said the key point is whether Google enjoyed “Fair Use” protection of Java’s API. And on that score, he said, there was a “mountain of evidence” that Fair Use simply did not apply to Android’s use of the Java API. So, he said, the jury must find that Google violated Oracle’s copyright. And if the jury does side with Bicks, Google could face fines of as much as US $9 billion.
For everyone in software development—not just Google and Oracle watchers—this case could be significant, says Pamela Samuelson, Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley. Depending on the outcome, she says, it could alter how tech companies write, develop, and market their code.
APIs pervade software development today, she explains, from Amazon Web Services to Facebook to Apple to Google, to countless interfaces between one software or hardware platform and another. Imagine a world in which big companies could descend on successful software products and reap rewards after the fact from a portfolio of claimed API copyrights.
A lot, Samuelson says, hinges on Bicks’ assertion that Java’s APIs are protected by copyright law. That appears to be technically true in this lawsuit, she says. But it is arguably not true in other courtrooms around the country. And that crucial legal distinction could make a difference both in the outcome of this lawsuit and its applicability to other lawsuits down the line.
Of course if APIs can’t be copyrighted, then Oracle doesn’t have much on which to rest its claim of copyright infringement. Google’s lawyer Van Nest likened the situation to putting the word “hamburger” on a menu and then claiming copyright on the word. “The API is ‘hamburger’ there, it’s the menu,” he said in closing arguments on Monday. Whereas, he argued, the creative expression (and, in this analogy, the copyrightable expression) comes in how the hamburger is sourced, developed, made, cooked, and served.
“If you’re a small startup and you’re reimplementing somebody’s API, the half-million dollars that a litigation might cost is a chilling effect,” Samuelson says. “Big companies can fight like this, but smaller companies have a tougher time.”
There’s a wrinkle in the Oracle v. Google case that enabled the judge and Oracle’s lawyers to simply claim APIs are copyrightable. That is, Oracle’s original complaint against Google involved alleged patent infringement as well as alleged copyright infringement. The patent infringement claim has since been disqualified, but its legacy remains.
Normally, Samuelson says, a copyright claim like Oracle’s would put it in line for a courtroom in the Second or Ninth Circuit, which, as home to New York City, Los Angeles, and Silicon Valley, have the deepest copyright case law tradition to draw upon. These are the circuits in which, she says, any API copyright claim would face the hardest uphill climb.
However, because Oracle’s suit involved patents, she says, the case was instead routed to the court specializing in patent claims, the U.S. Court of Appeals for the Federal Circuit. And so in a 2014 appeal, Oracle v. Google was argued in the Federal Circuit. This court, with its comparatively thin docket of legal precedents concerning software copyrights, ruled that Java’s APIs were copyrightable.
So on one hand, it’s possible that even a strong finding for Oracle against Google could still have limited knock-on effects for other cases. Samuelson said the Second and Ninth Circuits’ caselaw disputing the copyrightability of APIs would remain in place regardless of the Oracle v. Google outcome. So even if Oracle prevails, a judge in the Second or Ninth Circuit might still be persuaded to treat the Oracle finding as an outlier.
On the other hand, savvy litigants might also add patent claims to any API copyright claim—which could then put the new claimant back in line for the same Federal Circuit that ruled in favor of the copyrightability of Java’s API.
Thus even if the Second or Ninth Circuit would be friendly waters for a defendant in an API copyright suit, it might not matter if the case could be heard instead in the Federal Circuit.
“This is the strangest Fair Use case I’ve ever seen,” Samuelson notes. Stay tuned, she adds, because whether Oracle or Google prevails, the decision will be worth hearing out.
Without a doubt, 5G is the hottest topic in wireless circles today. Many of the field’s most celebrated researchers and highest-paid executives are focused on forging this ultra-fast and high-bandwidth successor to 4G LTE. Among them, this opportunity to construct the next generation of wireless is often compared to Halley’s Comet: It comes around only once or twice in a person’s career.
5G enthusiasts say the widely heralded future wireless network will deliver lightning-quick mobile data speeds with virtually unlimited capacity, blanket cities with high-quality Internet access, provide low-bandwidth IoT connections to billions of devices, and even enable autonomous driving. But the industry has only just begun to set standards that will define 5G’s capabilities and launch very early trials that will establish its parameters.
But in many cases, the term “5G” is bandied about as a panacea that already exists. That’s why Seizo Onoe, CTO of NTT DOCOMO, Japan’s largest mobile carrier, is traveling around to conferences trying to keep everyone’s expectations in check. “In the early 2000s, there was a concrete 4G technology but no one called it 4G,” Onoe laments. “Today, there are no contents of 5G but everyone talks about 5G, 5G, 5G.”
At first glance, Onoe may seem like an unlikely messenger. If 5G lives up to the hype, the world’s mobile carriers stand to benefit most from the new demand and services it will create. On the other hand, Onoe’s industry ties also make it in his best interest to keep his collaborators grounded in reality so 5G can be deployed as quickly and successfully as possible. “I want to right the direction for where 5G is going,” he says.
On Wednesday, Onoe presented a keynote at the IEEE International Conference on Communications in Kuala Lumpur, Malaysia. He sought to dispel some of the most pervasive myths about 5G. It was the second time in two months that he attempted to spread this message. In April, he gave the same talk to a group of industry professionals at the Brooklyn 5G Summit in New York City.
Here are a few of the falsehoods about 5G that Onoe is eager to debunk:
1. 5G will be a “hot spot” system
Many experts believe telecom operators will deploy 5G over so-called small cell networks. Unlike cell towers of the past that broadcast signals indiscriminately over a wide area, they envision new base stations being affixed to rooftops and lampposts to serve hyper-local areas. In theory, this design should provide better and faster coverage to those fortunate enough to live in said areas (mainly, cities in wealthy countries).
Onoe says this belief is an unfortunate self-fulfilling prophecy. By labeling 5G as a small cell or “hot spot” system at this stage, the industry is closing itself to other innovations. That’s a problem, he says, because such a “hot spot” system may not be so convenient to build in rural areas. Without a commercially viable strategy, the small cell structure of 5G could end up widening the digital divide.
Onoe says it would be better to keep an open mind to other technologies that could someday bring 5G to rural customers—or leave room for brilliant business models that could perhaps justify building far-reaching networks composed of small cells. “At this point, I don't believe we can achieve that,” he says. “But in the past, [the industry ultimately] realized what I thought was impossible.”
2. 5G will require substantial investment
One of the boldest statements in Onoe’s speech was that deploying 5G will not require a ton of investment. This is counterintuitive to anyone listening to predictions for widespread deployment of cutting-edge technologies from massive MIMO to millimeter wave, or projections for the number of base stations required to build out a small cell network.
But rather than requiring a complete overhaul of existing networks as some imagine, Onoe believes 5G will be deployed largely on existing infrastructure. Better service, he insists, does not always correlate with greater capital expenditures. NTT DOCOMO’s 600 billion yen in capital expenditures last year marked a 15-year low, even as the data traffic across its networks grew 6300 percent since 2000.
In fact, Onoe expects capital expenditures for NTT DOCOMO to drop throughout 5G deployment, which he says would be in keeping with trends for earlier wireless generations. To illustrate his point, Onoe opened a chart of the company’s capital expenditures over the past 20 years and asked the audience to guess when the company rolled out 3G and 4G LTE service. It’s impossible to tell based on expenditures alone. “For LTE, there was no increase in CapEx before the LTE launch,” he says. “That's a fact.”
3. 5G will replace 4G
Another assumption that Onoe loves to challenge is that 5G networks will quickly render 4G obsolete. Not so, he says. The dominance of a new wireless network is more of an evolution than a sudden debut. "Of course this happens eventually but not overnight,” he says.
In this case, too, history is on his side. No wireless network has ever wholly replaced its predecessor, if only because there are so many areas of the world, such as India, where 3G and even 2G service is still the norm.
And though 5G promises perks that ride in on the coattails of high speed and capacity, there are plenty of cases where 4G networks will still be more than sufficient. For example, many IoT devices such as sensors may only need to transmit small amounts of data once every hour or day. These can operate on low bandwidth and do not require ultra-fast connections.
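The low-bandwidth claim is easy to quantify. A back-of-envelope sketch, using hypothetical payload sizes (the article gives none):

```python
# Hypothetical sensor: sends a 1 KB reading once per hour.
payload_bytes = 1024
interval_seconds = 3600

# Average throughput needed, in bits per second.
avg_bps = payload_bytes * 8 / interval_seconds
print(f"Average throughput: {avg_bps:.2f} bit/s")  # ~2.3 bit/s

# Even a 2G GPRS-class link (~40 kbit/s) exceeds this by roughly
# four orders of magnitude.
headroom = 40_000 / avg_bps
print(f"Headroom on a 2G-class link: ~{headroom:,.0f}x")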
4. 5G will require more spectrum
There’s an oft-repeated line in the wireless world: With more smartphone users consuming more bandwidth per user, the portion of spectrum dedicated to mobile data is getting crowded—and we need more of it! But Onoe maintains that carriers can find plenty of existing spectrum to support 5G and free up more through re-farming, or recycling spectrum currently dedicated to other uses.
To support his point, he again points to his experiences over more than 30 years in the industry. For example, he says, most people assumed 4G LTE service would require new spectrum, but NTT DOCOMO launched it in 2010 using only existing spectrum.
5. For 5G, everything will need something new
Many researchers and industry professionals are eager to find as many future uses for 5G as possible, and to enhance or expand existing services on the new network. But Onoe insists that just because a new generation of wireless is in the works doesn't mean it can or should serve every possible need under the sun—whether it’s autonomous driving, IoT, or mobile broadband service. “This is the most frustrating to me,” he says.
He admits to feeling a bit of déjà vu, with today’s hype reminding him of conversations about how 4G would suddenly enable new technologies and services. At the end of the day, says Onoe, 5G will eventually deliver on many of the promises that the industry has dreamt up—and possibly even a few others it has yet to consider. But it’s just too early, he says, for the industry to tout it as the path to so many potential futures.
Editor’s note: This post was corrected on May 27 to reflect NTT DOCOMO’s capital expenditures in billions of yen instead of American dollars.
Barak Ariel, a lecturer at the University of Cambridge’s Institute for Criminology, wrote last month for IEEE Spectrum about his studies of police body cameras. He described there the startlingly good outcomes in the first large-scale trial of police body cameras, which he and two colleagues carried out in Rialto, Calif., in 2012 and 2013. That study indicated that these cameras reduce both the frequency with which officers resort to using force and the frequency with which citizens register complaints against officers.
Ariel also shared in that article some newer results from a wide-ranging set of trials testing the effects of these cameras on the police use of force—results that would temper anyone’s enthusiasm for these cameras. You see, in a few of the trials, the use of force by police officers seemingly went up when they were wearing cameras. I say “seemingly” because it’s impossible to tell whether the use of force actually went up or whether the cameras merely caused more reports of force to be filed. And this result wasn’t consistent: In some places police use of force went down when cameras were worn; in others it stayed about the same. In any case, it was a troubling finding.
Now Ariel and his colleagues have some even more disappointing news, which has just been published in the European Journal of Criminology. It seems that when officers wear body cameras, they are more likely to be assaulted. Now that’s strange. You’d imagine that wearing these cameras could only do the opposite.
Despite changes to the law, the U.S. National Security Agency can still request metadata from tens of thousands of private phones if they are indirectly connected to the phone number of a suspected terrorist, according to a new analysis. The study is one of the first to quantify the impact of policy changes intended to narrow the agency’s previously unfettered access to private phone records, which was first revealed by Edward Snowden in 2013.
For years before Snowden went public, the U.S. National Security Agency legally obtained metadata not only from suspects’ phones but also from those of their contacts and their contacts’ contacts (and even their contacts’ contacts’ contacts) in order to trace terrorist networks. This metadata included information about whom a user has called, when the call was placed, and how long these calls lasted.
Today, federal rules permit the NSA to recover metadata from phones within "two hops" of a suspect—that is, from someone who called someone who called the suspect within the past 18 months. Previously, federal regulations were more generous, permitting recovery of metadata from "three hops" away, using records dating back five years.
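The hop rule is essentially graph reachability: treat each phone as a node, each call as an edge, and collect every node within a fixed number of hops of the suspect. A toy sketch of the idea, using breadth-first search over a hypothetical call graph (the phone labels are invented for illustration):

```python
from collections import deque

def phones_within_hops(call_graph, suspect, max_hops):
    """Return the set of phones within max_hops calls of the suspect,
    found via breadth-first search (the suspect itself is excluded)."""
    seen = {suspect}
    frontier = deque([(suspect, 0)])
    while frontier:
        phone, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand past the hop limit
        for contact in call_graph.get(phone, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, hops + 1))
    return seen - {suspect}

# Hypothetical call records: suspect S called A and B; A called C; C called D.
calls = {"S": ["A", "B"], "A": ["S", "C"], "C": ["A", "D"]}
print(sorted(phones_within_hops(calls, "S", 2)))  # ['A', 'B', 'C']; D is 3 hops out
```

Because each extra hop multiplies the reachable set by roughly the average number of contacts per phone, going from two hops to three inflates the haul by orders of magnitude, which is consistent with the gap between the two figures in the Stanford analysis below.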
A new analysis led by researchers at Stanford University’s Computer Security Laboratory quantifies just what this policy change has meant. Under the old five-year, three-hop rules, the NSA could legally recover metadata from about 20 million phones per suspect—and from “the majority of the entire U.S. population” if it analyzed all its suspects. Under the stricter 18-month, two-hop rule, the agency can recover metadata from about 25,000 phones with a single request, according to the Stanford study.