A smart contact lens fitted with an artificial iris could help people with eye injuries and congenital diseases see better. The lens, described this week at the International Electron Devices Meeting in San Francisco, uses concentric LCDs to mimic the expansion and contraction of the pupil that’s normally controlled by the iris.
The artificial iris is part of a larger project on smart contact lenses led by Herbert De Smet, a professor who works on intelligent sensors at the University of Ghent. De Smet’s group is working on putting many electronic components onto these lenses, including batteries, antennas, control electronics, and chemical sensors.
The lens presented at the San Francisco meeting is aimed at helping about 200,000 people who suffer from problems with the iris, whether due to cancer, an acute injury, or genetics. The iris is the colored part of the eye surrounding the pupil. It contracts under bright light to protect the retina from, say, the rays of the full sun, and expands in low light to help us see better. When the iris is absent or damaged and therefore can’t contract, being out in the sun or just under bright indoor light is painful.
The usual solution is to wear dark sunglasses or a dark contact lens, says Florian De Roose, a researcher at Imec in Leuven, Belgium. But it’s difficult to see in low light when wearing sunglasses. And people with damaged irises may find that daylight is still too bright despite wearing tinted lenses. A contact lens that darkens to block out light and effectively constricts the pupil could help people to see better.
De Smet is collaborating with De Roose and other researchers to make parts for the artificial iris system. De Smet’s group has already integrated liquid crystal cells onto contact lenses; De Roose worked on adding flexible control electronics. On the lens, three concentric LCDs surround a clear central area that sits above the pupil. In bright light, all three LCDs can be activated, causing the artificial iris to contract and narrow the opening. In dim light conditions, all the LCDs are turned off, and the artificial iris expands to let more light in. Around the iris are ten organic solar cells and control electronics containing a driver for each of the three LCD rings.
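The control logic amounts to mapping ambient light to a number of darkened rings. Here is a minimal sketch of that idea; the lux thresholds are hypothetical values of my own, not figures from the researchers:

```python
# Illustrative sketch of the ring-activation logic described above.
# The lux thresholds are invented for illustration, not the researchers' values.

def active_rings(ambient_lux, thresholds=(100, 1_000, 10_000)):
    """Return how many of the three concentric LCD rings to darken.

    Brighter light -> more rings switched on -> smaller effective pupil.
    """
    return sum(ambient_lux >= t for t in thresholds)

# Dim room: all rings off, artificial iris fully open.
assert active_rings(50) == 0
# Bright indoor light: partially constricted.
assert active_rings(2_000) == 2
# Full sunlight: all three rings darkened.
assert active_rings(50_000) == 3
```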
De Roose’s Imec group worked on flexible, low-power driver electronics that take up about 0.75 square millimeters. They used thin-film transistors, based on transparent IGZO, built on a flexible polymer. The control chip is placed at the edge of the lens so that it doesn’t occlude vision. However, the completed chip has a transparency of about 50 percent, so it could be made larger. The system draws 25 microwatts—which the onboard photovoltaics should be able to supply.
So far, all the parts have yet to be integrated. The collaborators have shown that they can build LCDs, solar cells, and drivers on the lens and that the driver can control the LCD; now they have to show that the full system can operate together with the solar cells. In future systems, says De Roose, the photovoltaics will act both as the light-level sensors and the LCD power source. “The beauty of this is, the more light there is available, the more power there will be to drive the LCDs” and to make the iris contract, says De Roose. He also notes that while the group hasn’t focused on aesthetics, organic photovoltaics can be made in colors that could look relatively natural.
The smart contact lens project faces broader challenges. Such lenses must do more than carry workable sensors and display elements. They must also be carefully mechanically engineered. The lenses themselves are stretchy, but the transistors are merely flexible. The researchers will have to account for this mismatch, either by moving to stretchy materials or being very careful about the smart lens architecture. And more importantly, they must ensure that these lenses are safe. One way they’ll do that is by ensuring that the electronic components don’t interfere with the transfer of water and oxygen through the lens to the cornea. Otherwise, the lens could cause infections.
Working with the Korea Advanced Institute of Science and Technology (KAIST), NASA is pioneering the development of tiny spacecraft made from a single silicon chip that could slash interstellar exploration times.
On Wednesday at the International Electron Devices Meeting in San Francisco, NASA’s Dong-Il Moon will present new technology aimed at ensuring such spacecraft survive the intense radiation they’ll encounter on their journey.
If a silicon chip were used as a spacecraft, calculations suggest that it could travel at one-fifth of the speed of light and reach the nearest stars in just 20 years. That's roughly a hundred times as fast as a conventional spacecraft.
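That figure checks out on the back of an envelope, assuming the destination is Proxima Centauri at about 4.25 light-years (a distance the article does not itself specify):

```python
# Back-of-envelope check of the travel-time claim. The 4.25 light-year
# distance to Proxima Centauri is an assumption, not from the article.

cruise_speed = 0.2     # fraction of the speed of light
distance_ly = 4.25     # light-years to the nearest star

travel_years = distance_ly / cruise_speed
print(f"{travel_years:.0f} years")   # about 21 years, matching the claim
```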
“How would you feel about donating your phone to science?”
When Zooko Wilcox posed this question to me in October, what I heard was: Can I take your phone and hand it over to a hacker to riffle through its contents and sniff all over your data like a pervert who’s just opened the top drawer of a lady’s dresser?
At least, that’s how it felt.
“I think I’d rather donate my body,” I said.
What Wilcox really wanted to do with my phone was to run forensic analysis on it in the hopes of determining whether someone was using it to spy on us. Wilcox is the CEO of a company called Zcash, which designed and recently launched a new privacy-preserving digital currency of the same name. On the weekend he asked for my phone, we were both sitting with a two-man documentary film crew in a hotel room stuffed with computer equipment and surveillance cameras.
A secret ceremony was underway. Before the company could release the source code of its digital currency and turn the crank on the engine, a series of cryptographic computations needed to be completed and added to the protocol. But for complex reasons, Wilcox had to prevent the calculations from ever being seen. If they were, it could completely compromise the security of the currency he had built.
Over the course of the two-day event, everything went pretty much as planned. Everyone and everything did just what they were supposed to do, except for my cellphone, which in the middle of the event exhibited behaviors that made no sense at all and which planted suspicions that it had been used in a targeted attack against the currency.
The story of Zcash has already been roughly sketched by me and others. The currency launched 28 October onto the high seas of the cryptocurrency ecosystem with a strong wind of hype pushing violently at its sails. On the first morning that Zcash existed, it was trading on cryptocurrency exchanges for over US $4000 per coin. By the next day, the first round of frenzied feeding had subsided and the price was already below $1000. Now, a month later, you’ll be lucky if you can get $100 for a single Zcash coin. Even in the bubble-and-burst landscape of cryptocurrency trading, these fluctuations are completely insane.
Some hype was certainly warranted. The vast majority of digital currencies out there are cheap Bitcoin imitations. But the same cannot be said of Zcash. The project, which was three years in the making and which combines the cutting edge research of cryptographers and computer scientists at multiple top universities, confronts Bitcoin’s privacy problems head on, introducing an optional layer of encryption that veils the identifying marks of a transaction: who sent it, how much was sent, who received it. In Bitcoin, all of this data is out in the public for anyone to see.
However, with digital currencies, everything is a trade-off, and the improvement in privacy that Zcash brings comes with a risk, one that has gotten much less attention since the currency launched. Obscuring data on the blockchain inevitably complicates the process of verifying the validity of transactions, which in Bitcoin is a simple matter of tracking coins on a public ledger. In Zcash, verifying transactions requires some seriously experimental computation, mathematical proofs called zk-SNARKs that are so hot off the presses that they’ve never been used anywhere else. In order to set up the zk-SNARKs in the Zcash protocol, a human being must create a pair of mathematically linked cryptographic keys. One of the keys is essential to ensuring the proper functioning of the currency, while the other one—and here’s the big risk—can be used to counterfeit new coins.
If it’s not immediately clear how this works, you’re in good company. The number of people who really understand zk-SNARKs, and therefore the Zcash protocol, is probably small enough that you could feed them all with one Thanksgiving turkey. The important thing to get is that, given the current state of cryptographic research, it’s impossible to create a private, reliable version of Zcash without also simultaneously creating the tools for plundering it. Let’s call those tools the bad key.
Prior to launching Zcash, the developers who invented it had to create the bad key, use it to make a set of mathematical parameters for the zk-SNARKs (the good key), then dispose of the bad key before any nefarious individual could get hold of it. And they had to do it all in a way that was both secret enough to be secure yet public enough that anyone who wanted to use Zcash felt well-assured of the technology’s integrity.
The Zcash developers, whose work is funded by over $2 million raised from private investors in the Zcash Company, chose a strategy that relied heavily on the secrecy part of this equation. Nearly everything about the ceremony—where and when it would be held, who would be involved, what software would be used—was kept from the public until a blog post about it was published this afternoon.
Instead of building real-time transparency into the ceremony design, the Zcash team opted to meticulously document the event and save all artifacts that remained after the bad key was destroyed. This evidence is now available for analysis to prove the process went as it was described.
As an extra measure, they decided to invite a journalist to bear witness—me.
Two weeks before the ceremony, I got a vague invite on Signal, an encrypted messaging app, from Wilcox without any specifics about what to expect. A week later he told me where I would have to go. And a week after that—two days before the ceremony—I was told when to arrive. On 21 October, I walked into a coffee shop in Boulder, Colorado, where I met up with Wilcox and a documentary filmmaker who had been hired to get the whole thing on tape. From there we headed to a computer shop in Denver to buy a bunch of equipment and then returned to a hotel in Boulder, where I stayed for the next three days.
The headquarters in Boulder was one of five “immobile” stations, all of which were participating in the ceremony from different cities across the planet. One mobile station was doing its part while making a mad dash across British Columbia. The generation of the keys was decentralized such that each station would only be responsible for creating a fragment of the bad key. For the ceremony, a cryptographic algorithm was custom designed that created a full version of the zk-SNARK parameters while keeping the pieces of the bad key segregated, a process that took two days of relaying data back and forth among the six stations.
I’ll hazard an analogy in order to explain more generally how this works: Let’s say you have a recipe and you want to use it to make a single cake that is going to feed everyone in the world and that’s the only cake that anyone is allowed to eat, ever. You have to have a recipe to bake the cake, but you also have to make sure no one can ever make it again. So you split the recipe up into six parts and you design a baking process that allows each participant to add their ingredients and mix them into the batter without the others (or anyone else) seeing what they’re up to. After pulling the cake out of the oven, you burn all the pieces of the recipe.
In this analogy, the recipe is the bad key; the cake is the zk-SNARK parameters; and the person hiding the ingredients and doing all of the mixing is a cryptographic algorithm.
The way this looks in practice is that each station has a computer storing a fragment of the secret. That computer can’t connect to the Internet, has been stripped of its hard drive, and runs off a custom-built operating system. The secret never moves off the computer, but it is used in a series of calculations that are then copied to write-once DVDs and carried to a separate, networked computer that shares the results with the rest of the stations. Each station builds off the results of the station before it in a computational round robin until the process is complete and the software finally spits out a product.
The benefit of dividing up the work in this way is that no one participant can compromise the ceremony. Each fragment of the bad key is worthless unless it is combined with all the others. It cannot even be brought into existence unless all members of the ceremony collude or an attacker successfully compromises all six of the participating stations.
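To see why the fragments are individually worthless, here is a toy sketch using simple XOR secret sharing. This is not the ceremony's actual algorithm (that was a custom multi-party computation over the zk-SNARK parameters); it only illustrates the all-or-nothing property described above:

```python
# Toy XOR secret sharing: all n shares are needed to reconstruct the secret;
# any smaller subset is statistically indistinguishable from random noise.
import secrets

def split(secret: bytes, n: int = 6) -> list[bytes]:
    """Split `secret` into n shares whose XOR equals the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares together."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

bad_key = b"toxic waste"
shares = split(bad_key)
assert combine(shares) == bad_key      # all six fragments: recoverable
assert combine(shares[:5]) != bad_key  # five fragments: just noise
```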
As an observer, there was very little I could do to verify the security of the events as they unfolded in front of me. I don’t have the advanced cryptography coursework that would be necessary to audit the software that Wilcox and the other station operators were running. And even if I did, the code had not yet been made available for public review. My role, as I saw it, was simply to be present and make sure the people involved did all the things that they would later tell people they did. I can bear witness to the fact that the computer storing the key fragment was bought new, that the wireless card and hard drive were removed, that while I was watching no attacker sneaked into the hotel room to mess with the equipment, that all of the DVDs were correctly labeled, and that the RAM chips that stored the key fragment were smashed and burned in a fire pit after the ceremony.
I can testify that nothing strange happened. Until it did.
During the ceremony most of the station operators were talking with each other on a Google Hangout. On the evening of the first day, after getting up from a bit of a rest, Wilcox wandered over to the laptop that was running the Google Hangout and began chatting with Peter Van Valkenburgh, a station operator located in Washington D.C. We noticed an echo of the audio coming from across the room and started looking for its source.
The whole place was filled with gadgets. Four security cameras had been hoisted onto poles and aimed at the offline computer to provide 24-hour surveillance in the event of a ninja attack. Another digital camera on a tripod was capturing a wide angle shot of the room. Both Wilcox and I were geared up with wireless mics. And another mic was secured to the laptop running the Google Hangout.
I went over to a monitor that was set up between the two hotel beds to display the security footage, and at first I thought that was it. Then I looked down at one of the beds and saw my phone lying there. When I picked it up, I immediately realized that the audio was blaring out of its speaker.
“Morgen, why is your phone playing the audio from our Google Hangout?” asked Wilcox, bemused, curious, and slightly alarmed.
Why indeed. It was especially strange because I had not knowingly connected to the Google Hangout at all during the ceremony. Furthermore, footage of Wilcox’s computer screen shows that I wasn’t listed as a participant.
So, how was my phone accessing the audio?
Without wasting any time, Wilcox began experimenting. While continuing to talk to Van Valkenburgh, he muted the microphone on his Google Hangout session and then turned it back on. When he did that, my phone only picked up Van Valkenburgh’s audio.
Stranger still, when Wilcox re-enabled his hangout microphone, his voice came through my phone with a slight lag—maybe 100-200 milliseconds—indicating that my phone was picking it up from somewhere outside the room, perhaps from a Google Hangout server.
Just as we started to examine my phone, looking at the programs that were running and a few suspicious text messages that I had received a couple days before the ceremony, the echo abruptly stopped. We quickly put it into airplane mode hoping to preserve whatever evidence remained.
After much negotiating, I surrendered my phone (an archaic Android that was ripe for the hacking) to Wilcox. He has since passed it off to a hacker in San Francisco. Those efforts have produced no evidence about what caused my phone to turn on me, and it’s now on its way to a professional security firm for further analysis.
Unless we find evidence of malware on my phone, the question of how it may have impacted the ceremony is completely hypothetical. Assuming my phone was hacked, who would want to break into the Zcash ceremony? And if an attacker did have full control over my phone, which was powered on and present until the moment it started misbehaving, what could that person do with it?
For answers, I traveled up to Columbia University to the lab of Eran Tromer, a computer scientist at the Zcash company who co-invented its cryptographic protocol. Tromer is at Columbia for a year as a visiting researcher, but his home base is the Tel Aviv University School of Computer Science, where he is a member of the faculty and the director of the Laboratory for Experimental Information Security (LEISec) at the Check Point Institute for Information Security.
A big part of Tromer’s work at LEISec involves investigating side channel attacks. The idea behind side channel attacks is that you don’t have to have direct access to a computer’s data in order to spy on it. Often, you can piece together some idea of what a computer is doing by examining what’s going on with the physical components. What frequencies are humming across the metal capacitors in a laptop? How much power is it pulling from the wall? How is the voltage fluctuating? The patterns in these signals can leak information about a software program’s operation, which, when you’re running a program that you want to keep secret, can be a problem.
“My research is about what happens to good, sound, cryptographic schemes when they reach the real world and are implemented on computing platforms that are faulty and leaky at the levels of software and hardware,” says Tromer.
In his lab at Columbia, Tromer opened his laptop and ran a demonstration program that executes several different computations in a loop. He told me to put my ear down close to where the fan was blowing out hot air from the computer’s innards. I leaned over, listened carefully and heard the computer whine ever so slightly over and over.
“What you’re hearing is a capacitor in the power supply, striving to maintain constant voltage to the CPU. Different computations done on the CPU have different power draw, which changes the mechanical forces on the capacitor plates. This causes vibration, which in turn is transmitted by the air as sound waves that we can capture from afar,” he says.
Tromer started investigating this phenomenon, called “coil whine,” for himself about ten years ago. “I was in a quiet hotel room at a conference. I was working on my laptop and it was making these annoying whining noises whenever I ran some computation. And I thought, let’s see what happens if the computation is actually cryptographic calculation involving a secret key, and how the key affects the emitted noise.”
Tromer and his colleagues spent the next decade trying to use acoustic leakage from computer hardware components to spy on cryptographic algorithms. In 2014, they demonstrated a successful attack in which they were able to steal a decryption key from a laptop by recording and analyzing the sounds it made as it ran RSA decryption software. With a high tech parabolic microphone, they were able to steal the secret from ten meters away. They were even able to pull off the same attack using the internal microphone on a mobile phone, provided that the device was snuggled up close to the computer.
However, for various reasons, Tromer doesn’t think anyone could have used the same strategy with my phone. For one thing, the coil whine in modern computers occurs at higher frequencies than in the laptop he demonstrated—in a range that is typically outside what a mobile phone, which is designed for the lower frequencies of the human voice, can detect.
“It seems extremely unlikely that there would be exploitable signals that can be captured by a commodity phone, placed in a random orientation several feet away from a modern computer,” he says. “It is not completely unthinkable. There might be some extremely lucky combination. But it would be a very long shot, and at a high risk of detection, for an adversary to even try this, especially since the ceremony setup gave them very little time to tailor attacks to the specific hardware and software setting.”
Moreover, the attacks that Tromer has demonstrated are not passive. In order to collect a useful signal, you have to amplify it by sending challenges to the software that you are attacking. The challenges force the software to repeat computations. In order to do this, you have to know and have studied the code that the computer is running.
The software that was running during the Zcash key generation ceremony was custom built specifically for the occasion and was intentionally kept from the public until the ceremony was over. The choice was controversial, and the approach strays from that of other similar ceremonies. (For example, the DNSSEC ceremony, which generates the digital signatures that secure top-level domain names, is conducted far more transparently and is publicly audited in real time.)
Before flying to Colorado, I contacted Bryan Ford, a computer science professor who directs the Decentralized and Distributed Systems Laboratory at the École Polytechnique Fédérale de Lausanne in Switzerland. He was troubled by the decision to keep the details of the Zcash ceremony secret. In a series of Twitter direct messages he told me:
“I understand the crypto principles that the parameter-generation is supposed to be based on well enough to know that nothing *should* need to be kept secret other than the critical secret parts of the parameter keys that eventually get combined to produce the final public parameters. If they think the ceremony needs to be kept secret, then...something’s wrong.”
By keeping the details of the ceremony software secret, the Zcash team limited their security audit to just a handful of people inside the company, but they may also have made it more difficult for an attacker to make the kinds of preparations that would be necessary to mount a successful side channel attack.
Even if someone did get a look at the source code in advance, Wilcox says it wouldn’t be the end of the world because secrecy was not the primary defense. According to him, one of the best aspects of the ceremony design was the use of multiple parties. It wouldn’t be enough to pull recordings off the computer in Colorado. An attacker would have to successfully record a side channel at each station. And because Wilcox left many of the security details up to the personal discretion of each station operator, the craftwork that would go into designing six unique side channel attacks would cost a huge amount in both time and money.
At one of the stations it may even have been impossible. Peter Todd, one of the ceremony participants, ran all of his computations on a laptop encased in a tin foil-lined cardboard box, while driving across Canada. He then burned his compute node to a crisp with a propane torch. “It was my goal to outdo every other station in Canadian cypherpunk glory,” says Todd, who also happens to be one of Zcash’s most outspoken critics.
If someone did attempt a side channel attack with the strategies Tromer has demonstrated in his lab, then there would likely be evidence of it in the trove of forensic artifacts that the ceremony produced. Among those items are all of the write-once DVDs that provide a record (authenticated by cryptographic hashes) of what computations were being relayed between the stations in the ceremony. Tromer’s techniques require direct interaction with the software and those manipulations would make their way onto the public record.
At no point did the incident with my phone stop the ceremony. Nor did Wilcox seem terribly concerned that it posed a serious threat. “We have super great security. I’m not worried about preventing some kind of attack. But I’m very interested in figuring it out, or experimenting, or extracting more evidence,” said Wilcox. “They’re very far from winning. So far from winning.”
And I’m curious too. Right now my phone is somewhere, I know not where, awaiting its strip down. Even if it wasn’t used to topple a privacy-guaranteeing digital currency—which, judging from everything I’ve learned, would have been a technological miracle—it’s still quite likely that someone was on it listening to me. Who? Why? For how long? If anything, this experience has deepened my respect for the people who are trying to make it easier to keep our private information private. And at the very least, I’ve learned a lesson: when you get invited to a super-secret cryptography ceremony, leave your phone at home.
Standing on the flat roof of a data center at an undisclosed location in Boston, a shivering Chet Kanojia gestures toward a sleek white box about the size of a piece of carry-on luggage. This is the proprietary base station that the seasoned startup founder believes will change the way the world receives its Internet and liberate frustrated customers from the iron grip of legacy providers such as Comcast and Time Warner.
The white box mounted on a pole before us is called a Starry Beam. Only about a dozen of these custom base stations exist in the world right now. The team at Kanojia’s newest startup, named Starry, has spent the past 20 months perfecting the base station’s design. Its performance so far on this nondescript rooftop has persuaded Kanojia that the Internet of the future will not be delivered through expensive fiber optic cables laid in the ground, but beamed over the air using high-frequency millimeter waves.
Standing beneath the Starry Beam, Kanojia points past a row of warehouses lined with leafy trees to an apartment complex about a kilometer away. There, jutting out from the window of an apartment that Starry has rented, is a white spherical device called a Starry Point. Starry Beams broadcast millimeter waves to Starry Points, which convert them to lower frequencies that flood the home so users can stream ultra-high-definition 4K TV shows to their hearts’ content.
This method, Kanojia believes, can offer much faster service to customers for far less money. In January, he shared that vision at Starry’s launch party in New York City. The company has been quiet ever since, but executives now say their first beta, which has been underway since late August, has confirmed their basic premise—that millimeter waves can deliver ultra-fast broadband speeds up to 1 gigabit per second to customers over the air.
Based on these early results, Starry anticipates average user speeds on its yet-to-be-built network will be as fast as any broadband connection available today—somewhere in the range of 200 to 300 megabits per second. For comparison, average broadband download speeds in the United States are just 55 megabits per second.
Right now, Starry’s beta is only measuring the performance of this original Starry Beam that serves a handful of users. In the first quarter of 2017, the company will launch an open beta and build its test network out to a half dozen sites capable of serving several hundred users. Starry has also received permission from the U.S. Federal Communications Commission to run tests in 14 other cities including New York, Dallas, Seattle, San Francisco, and Chicago.
Starry CTO Joe Lipowski says the startup doesn’t plan to publish the results of the beta, which makes it hard for anyone to independently evaluate the company’s claims. Starry has not issued any press releases about its progress, and Kanojia has kept the details of his fundraising under wraps. “The less people know about our performance, the better it is for us,” he says.
That attitude has left outsiders wondering what to think of the company’s prospects in such a highly competitive market. “On the surface, the technology sounds like it's sufficient to do what they need it to do,” says Teresa Mastrangelo, a longtime wireless analyst with Broadbandtrends LLC. “We haven’t really seen anything at a big scale. I'll be curious to see how it goes when we're looking at tens of thousands of subscribers.”
If the company can successfully scale, Starry could rewrite the story of what it means to provide high-speed Internet service to homes and businesses. Millimeter waves are high-frequency radio waves that occupy a section of the electromagnetic spectrum that has never been used for consumer technologies. While WiFi, Bluetooth, and cellular carriers have operated on frequencies below 6 gigahertz, Starry is currently testing its technology at 38.2 GHz and 38.6 GHz (where waves are much shorter in length), with future plans to broadcast at 37 GHz and 40 GHz.
Millimeter waves offer several advantages over the lower frequencies that deliver cellular data on 4G LTE networks, and even over the fiber that pipes broadband Internet service into homes. First, there is a lot more open bandwidth in the millimeter-wave range than there is at lower frequencies crowded with signals from smartphones, microwaves, and WiFi devices. And Starry thinks sending the Internet over the air to consumers will be much cheaper than digging up the ground to lay cables.
In fact, Kanojia estimates that Starry can build out a wireless network that costs only $25 for every home it serves in areas with a population density of at least 1,500 homes per square mile. Installing fiber networks typically costs $2,500 per home. Kanojia thinks the company can make money with market penetration as low as 3 to 5 percent, whereas fiber deployments sometimes require up to 65 percent penetration to be profitable.
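Kanojia's figures imply a stark per-subscriber comparison. A back-of-envelope sketch, assuming the quoted penetration rates are take rates among homes passed:

```python
# Rough cost-per-subscriber arithmetic from the figures quoted above.
# Assumes the penetration rates apply to homes passed; illustrative only.

wireless_cost_per_home = 25      # dollars per home passed (Starry's estimate)
fiber_cost_per_home = 2_500      # dollars per home passed (typical fiber)

wireless_penetration = 0.03      # 3 percent take rate
fiber_penetration = 0.65         # 65 percent take rate needed for fiber

wireless_cost_per_sub = wireless_cost_per_home / wireless_penetration
fiber_cost_per_sub = fiber_cost_per_home / fiber_penetration

print(f"wireless: ~${wireless_cost_per_sub:,.0f} per subscriber")  # ~$833
print(f"fiber:    ~${fiber_cost_per_sub:,.0f} per subscriber")     # ~$3,846
```

Even at Starry's minimum viable take rate, the implied cost per paying customer is a fraction of fiber's.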
One factor that will likely work in Starry’s favor is the range and agility of its Starry Beams. Kanojia says these base stations can deliver superfast Internet service to any customer within 1.5 kilometers who also falls within “near-line-of-sight” of a Starry Beam. That’s an important finding because millimeter waves are often presumed to perform best at shorter distances when there is a clear path between a base station and the end user—and such a direct route can be difficult to find in cities. Millimeter waves can’t easily penetrate windows or buildings, or maneuver around objects like traditional cellular signals can. They are also prone to degrade over longer distances when passing through foliage or rain.
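One reason range and line of sight matter so much at these frequencies is raw path loss. A quick sketch using the standard free-space path-loss formula at Starry's test frequency and maximum quoted range (obstruction, foliage, and rain losses come on top of this baseline):

```python
# Free-space path loss at Starry's 38.2 GHz test frequency over its
# quoted 1.5 km maximum range. FSPL alone, before any obstruction losses.
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB for distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

loss = fspl_db(1.5, 38.2)
print(f"{loss:.1f} dB")   # ~127.6 dB of free-space loss alone
```

Overcoming losses that steep is exactly what the high-gain phased arrays described below are for.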
To work around that, Starry equipped each Starry Beam with four active phased arrays, which are rows of tiny antenna elements that cooperate to point and amplify signals in precise directions. With these arrays, a base station can transmit signals more rapidly and with more precision than traditional antennas. In practical terms, this means the Starry network can serve Starry Point receivers mounted on the sides of a building from the same base station that serves those in front by bouncing signals off of buildings and other reflective surfaces. “Our measurements have shown that there’s tremendous reflections,” Lipowski says. Even at what they call “extreme non-line-of-sight” conditions, they’ve delivered data rates of 200 Mb/s to beta users.
Based on these results, Kanojia thinks Starry can provide broadband service with a deployment model similar to existing LTE networks: renting space on existing rooftop cell towers through companies such as the American Tower Corporation. To cover all of Boston, which measures about 230 square kilometers, Kanojia figures the company will need to install three or four Starry Beams at 20 to 30 sites. Each box will support about 1,000 users and boast throughput of 5 Gb/s, for a total of 15 to 20 Gb/s per site. They expect this rate will improve to 45 to 50 Gb/s per site in 2017, once the company upgrades its equipment to meet a new wireless standard known as 802.11ax.
Though Starry says it has cleared some of the biggest technical hurdles that millimeter waves pose for delivering high-speed Internet over the air, it must still find the right pricing model to bring the service to market. “There's no doubt that one could make a system work at 1.5 kilometer range at 37 GHz. In fact, that's a pretty modest range,” says John Naylon, CTO at Cambridge Broadband Networks Limited, which operates several millimeter wave networks throughout the U.S. “The issues are going to be economic.”

Mastrangelo, the analyst, says that based on competitors’ rates, Starry would need to price its broadband plan below $100 a month, and ideally between $65 and $85 a month. Unfortunately, Starry’s heavy reliance on custom-built hardware means that its base stations are much more expensive than off-the-shelf models.
Meanwhile, plenty of other wireless providers are rushing to develop their own gigabit solutions. Though Google has more or less abandoned its costly fiber deployments, it recently purchased a company called Webpass that provides wireless broadband to entire buildings by installing rooftop antennas. (Starry offers a similar option for landlords who want to hook up their properties.) Verizon and AT&T have both said that they will launch trials for delivering over-the-air broadband in 2017. Mastrangelo warns that if Starry doesn’t act quickly, the start-up could fall behind.
“If they had been able to come out with a service when they first unveiled it in January, they would have definitely had a huge head start and probably positioned themselves to be an acquisition for somebody,” she says. “But their timing is not fantastic at this stage.”
Jonathan Wells, president of the wireless consulting firm AJIS LLC, says even if Starry can scale and solve the complications of serving hundreds of users at once through phased arrays without causing interference, competition could quickly undercut their plans.
“I think Starry may well be the first there with the technology and if they are successful, they'll get snapped up by Verizon or AT&T,” Wells says. “But I think offering a service that is competitive with Verizon and AT&T is incredibly hard.”
Kanojia says Starry will ultimately compete with its gigabit rivals by providing exceptional customer service, rather than focusing only on high speeds. The company expects to double in size from roughly 100 employees to 200 before the end of next year; among them will be its first batch of customer representatives. But while the Starry team has already proven it can deliver speed, they may find that providing top-notch customer response is more of an art than a science.
Apple’s personal assistant Siri is more of a glorified voice recognition feature of your iPhone than a deep conversation partner. A personal assistant that could truly understand human conversations and written texts might actually represent an artificial intelligence capable of matching or exceeding human intelligence. The Canadian startup Maluuba hopes to help the tech industry achieve such a breakthrough by training AI to become better at understanding languages. The key, according to Maluuba’s leaders, is building a better way to train AIs.
There are all kinds of creative ways of dealing with rogue drones: Radio jamming. Other drones with nets. Trained eagles. None of these are really designed to handle military drones, however, and large, fast-moving UAVs are still a potential threat, especially if more than one is coming at you at once. It's no surprise that the U.S. Army has been developing solutions for this potential threat. We're not sure what they're working on now, but as of late 2013, Raytheon was successfully testing a long-range, high-power directed microwave weapon capable of taking out swarms of drones in milliseconds.
In virtual reality games (as in most games), your hands are what you use to interact with your environment, either directly or mediated through some virtual object (like a gun or a sword). But the experience of doing this is almost completely a one-way street: Maybe the controller that you're holding is fancy enough to vibrate a little bit, but that's about the best that you can hope for in terms of the interface physically stimulating you. For a more immersive experience, you want to engage as many of your senses as possible, not just sight and sound.
Besides incorporating motion into VR, adding convincing haptic feedback is the next logical step. It's a difficult step to take, though, because there's no obvious way to exert force on a handheld controller so that it feels like it's responding to the game while it's in your hand. But Tactical Haptics thinks it has the problem licked. It has spent the past few years developing a clever controller that can buck and twist while you're holding it.
One of the most critical components of a convincingly immersive virtual reality experience is the connection between the headset and the computer. Streaming high-resolution multiview video in real time demands a connection that can reliably handle sustained data rates of more than 6 gigabits per second, and as the resolution and frame rate of VR increases, bandwidth requirements are going to increase as well.
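That 6 Gb/s figure is easy to sanity-check from resolution and frame rate alone. The sketch below estimates the uncompressed stereo video rate; the headset parameters are illustrative assumptions roughly matching 2016-era hardware, not the spec of any particular system.

```python
def raw_video_rate_gbps(width, height, fps, bits_per_pixel=24, views=2):
    """Uncompressed data rate in Gb/s for `views` simultaneous video
    streams (two for a stereo headset). Compression reduces this in
    practice, at the cost of latency."""
    return width * height * bits_per_pixel * fps * views / 1e9

# Assumed example: dual 1080 x 1200 eye views at 90 Hz
rate = raw_video_rate_gbps(1080, 1200, 90)  # about 5.6 Gb/s
```

Bumping either the per-eye resolution or the refresh rate pushes the raw rate well past what current Wi-Fi can sustain, which is the core of the problem the MIT work addresses.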
Speeds exceeding 6 Gb/s are easily achievable with a hard-wired connection, which explains why almost all VR systems have a big fat cable tethering the headset to a computer. This isn't an issue for stationary VR, but there are more and more options being introduced that make moving around (at least a little bit) an integral part of VR’s immersive experience. So, being chained to your PC is annoying at best and completely illusion-breaking at worst.
MIT has been working on a way to cut the cord, which sounds like the obvious solution. Current Wi-Fi standards can't handle the amount of bandwidth that VR needs, so MIT researchers have gone another route. They’ve created a system based on millimeter-wave signals (also the foundation of 5G), and directional phased-array "mirrors" that can bounce signals around a room in order to sidestep the line-of-sight issues common to millimeter-wave communications.
Virtual reality is having a moment, but the technology is still far from mainstream. Following the release of the HTC Vive and Oculus Rift in early 2016, headset sales quickly flattened once early adopters had purchased their gear. It seems the average customer isn’t as eager to pay US $600 for bulky goggles simply to peruse the rather limited catalogue of available VR content.
There was, not surprisingly, plenty of excitement about VR's potential among the crowd of creators, designers, entrepreneurs, and investors. There was also a healthy dose of realism about the state of the industry and the drawbacks of existing VR gear. “For VR to really work and succeed, it has to be so good that you want to put an ugly plastic thing on your face,” said David Liu, creative director for virtual reality at Viacom.
In the latest TOP500 supercomputer ranking, published today, China’s supercomputers are still at the top of the pile—but the United States has caught up in number. Both nations now claim 171 systems in the ranking. And they are roughly equal in terms of raw computing power.
Both nations added new systems to tie in the number of supercomputers on the list. They are followed by Germany, Japan, France, and the United Kingdom. In aggregate Linpack performance, China holds 33.3 percent of the total, while the United States leads slightly with 33.9 percent.
Most of the top 10 supercomputers remained unchanged, with China’s Sunway TaihuLight still clocking in first at 93 petaflops and Tianhe-2 still second at 34 petaflops. Two new supercomputers joined the top 10: the Cori supercomputer at Berkeley Lab’s National Energy Research Scientific Computing Center—skating into the number five slot with 14 petaflops—and the Oakforest-PACS at Japan’s Joint Center for Advanced High Performance Computing—taking the number six slot with 13.6 petaflops. Other systems fell to make room, except for Piz Daint at the Swiss National Supercomputing Centre, which maintained the number eight position thanks to newly installed GPUs.
Since last November, the total performance of all 500 computers on the list has risen 60 percent, to 672 petaflops.
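That growth figure implies a prior-list total of roughly 420 petaflops (a quick check, assuming the 60 percent gain is measured against the November 2015 list):

```python
total_now_pflops = 672   # aggregate Linpack performance on this list
growth = 0.60            # stated growth since the previous November

# Back out the previous list's aggregate performance
total_prev_pflops = total_now_pflops / (1 + growth)  # 420 petaflops
```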
The top 10 supercomputers from the November 2016 Top500.org list.