Over the last year, we’ve seen a transition from the peak of virtual reality (VR) hype into the much more tempered—and consequently less impressive—reality of commercial VR systems. The experience is still good, or as good as can be expected, but VR in general still suffers from a variety of minor annoyances: a fat cable tethering your headset to your computer, external tracking systems that have to be set up, and the fact that immersion in an artificial reality turns everything around you in real life into a collision hazard.
Intel’s Project Alloy is a virtual reality headset that solves all of these problems. It’s completely tetherless—all of the head and body tracking is contained inside the headset itself. It can sense objects in your environment and convert them into virtual representations of themselves, allowing you to wander around your living room without permanent damage to your shins.
Earth's atmosphere is a constant annoyance for anyone trying to do anything useful with light. Even if you discount things like clouds, smog, and smoke, there are layers and pockets of air of varying temperatures that routinely make things go all wobbly. This is why most halfway decent telescopes are built on the tops of mountains and all the best telescopes are out in space.
Things get even more difficult when you're trying to push a lot of light through the atmosphere with the goal of having it all end up exactly where you want it, as is the case with a directed energy weapon. Adaptive optics have been able to help somewhat, but wouldn't it be better if the atmosphere could actually do something useful? You know, for once?
BAE Systems has been working on a way to use lasers to actively reshape the atmosphere to turn it into a variety of optical tools. The Laser Developed Atmospheric Lens (LDAL) system uses powerful laser pulses to turn the air itself into lenses, mirrors, and even protective deflector shields.
In 2011, when Stanford computer scientists Sebastian Thrun and Peter Norvig came up with the bright idea of streaming their robotics lectures over the Internet, they knew it was an inventive departure from the usual college course. For hundreds of years, professors had lectured to groups of no more than a few hundred students. But MOOCs—massive open online courses—made it possible to reach many thousands at once. Through the extraordinary reach of the Internet, learners could log on to lectures streamed to wherever they happened to be. To date, about 58 million people have signed up for a MOOC.
Familiar with the technical elements required for a MOOC—video streaming, IT infrastructure, the Internet—MOOC developers put code together to send their lectures into cyberspace. When more than 160,000 enrolled in Thrun and Norvig’s introduction to artificial intelligence MOOC, the professors thought they held a tiger by the tail. Not long after, Thrun cofounded Udacity to commercialize MOOCs. He predicted that in 50 years, streaming lectures would so subvert face-to-face education that only 10 higher-education institutions would remain. Our quaint campuses would become obsolete, replaced by star faculty streaming lectures on computer screens all over the world. Thrun and other MOOC evangelists imagined they had inspired a revolution, overthrowing a thousand years of classroom teaching.
These MOOC pioneers were therefore stunned when their online courses didn’t perform anything like they had expected. At first, the average completion rate for MOOCs was less than 7 percent. Completion rates have since gone up a bit, to a median of about 12.6 percent, although there’s considerable variation from course to course. While a number of factors contribute to the completion rate, my own observation is that students who have to pay a fee to enroll tend to be more committed to finishing the course.
Tapping into an idea from quantum mechanics that dates back to the Jazz Age, researchers have created a new type of laser that could be much tinier than conventional lasers, potentially leading to faster optical communications and more powerful computers.
The laser relies on a phenomenon known as bound states in the continuum (BICs), which allows researchers to build a laser cavity in open air. “It’s not every day that you have the possibility to make a new type of laser,” says Boubacar Kante, a professor of electrical and computer engineering at the University of California, San Diego, who with his colleagues described their laser in this week’s issue of Nature.
This is my tenth year at CES. Every year, I spend much of the show wondering why I put myself through it. And then, maybe once per show, I get reminded of why I’m so lucky to be here doing what I do.
Last night, AxonVR reminded me that technology can be absolutely magical, when a tiny virtual deer took a warm and fluffy nap on my outstretched palm.
We use the term virtual reality to refer to headsets like the Oculus Rift and the HTC Vive: they make it look, and sound, like you’re in a virtual world. As much of an achievement as it is to fool our sight and hearing, we have other senses that we use to experience reality. The vast majority of VR systems completely ignore these senses, like touch, because they’re even more challenging to replicate: I mean, how the heck could you possibly create an arbitrary touch sensation instantly on demand? I have no idea. But AxonVR does.
This monster is AxonVR’s HaptX system. It’s an enormous metal box with some buttons and blinking lights, and it occasionally makes ominous noises. In case you can’t tell from the way it looks, it’s very much a prototype. You put on an HTC Vive headset, and then stick your left hand into that black protruding rectangle at the front, palm up.
Both IEEE Spectrum editor Amy Nordrum and I tried this thing out, and we really had no idea what to expect: AxonVR wouldn’t tell us exactly what was going to happen beforehand, almost certainly so they could watch us totally freak out, because we totally freaked out. Or at least, I did; Amy was much more professional about the whole thing.
Here’s how she describes the experience:
I poke an apple and bring it over to drop it on my left hand. Immediately, I feel the familiar shape of an apple in my hand. It's round and smooth on every side, just like it looks. Neat.
Next, I select a nondescript cube and place it in my palm. I feel its slight weight. Then, it starts to change colors. First it’s red, and my hand feels warmer. Then it’s blue and my hand is cooler. The change in color and the change in sensation happen simultaneously. I wasn't expecting to feel a temperature gradient through this haptic VR system, and it’s a pleasant surprise.
I see a grenade on the table and drop it on my hand. The grenade erupts and spews flames down, engulfing my palm and fingers. In addition to feeling the heat, I feel that heat come in waves as the flames spread over my hand.
The obvious next choice is a large piece of ice waiting helpfully on the table. It settles in my hand with a soothing chill. I move it across my fingers and over my palm, and the sensation follows with no detectable latency.
Then, I select an object I’ve been avoiding: a spider, with its angled legs and erratic movements. The lightness of the spider’s legs, spread over the eight points of its wide stance, immediately transfers to my palm. Then, the spider starts to move. It picks up its front two legs and I feel a difference in sensation, with perhaps slightly more weight at the other six points. The spider shuffles around and I feel its many legs, like eight little pencil points pressing lightly on my palm.
Lastly, I select a miniature deer, beige with a large rack of black antlers, that has been resting off to the side since I began the demo. I drop the deer into my hand and feel its hooves plant firmly in my palm. The sensation is not at all like the spider, and it shouldn’t be, with more heft behind each hoof. The deer fidgets and I feel its weight shift in my palm as it turns, and the stroke of a hoof when it starts to paw at my hand.
Then, the deer starts to lay down. Its legs fold up beneath it and its furry belly rests in the center of my palm. I can feel it breathe, and the spot on my hand where it’s resting starts to warm. The experience of holding a tiny deer in my hand and feeling its hooves is a neat trick, but the ability to actually feel it breathe and sense the warmth of its furry body brings on an entirely different dimension of emotion. It feels like my pet, and I want to keep it.
The demo ends with a grand finale in which a flying dragon alights on the top of the silver box in front of me and blows a mouthful of flames onto my hand with spectacular flair. It is a good dragon, and the flames feel sufficiently fiery. Still, it is that tiny deer that leaves the fondest impression, and leaves me blown away by the sensitivity and fidelity of the AxonVR haptic technology.
Somehow, this mystery box is able to replicate tactile and thermal sensations, along with force feedback, all at once and very compellingly. I spent most of the demo giggling like a little kid. The resolution of the tactile sensations was spectacular, from the edges of a cube, to the roundness of an apple, to the tiny prancing hooves of the deer. And when the deer tucked its little hooves underneath itself and lay down, I could feel how warm and soft its belly was. The warmth was immediate, just like you’d expect, and it was the same with the dragon blowing fire, or the ice cube sliding around my palm: there wasn’t any noticeable latency, and the thermal sensations were powerful enough to be completely convincing. Ice felt like ice, and fire felt like you’d imagine fire feeling if you could hold it without actually burning yourself.
Saying that AxonVR has come up with a convincing simulation would be underselling what the HaptX experience is like. Other haptic systems we’ve tried require some (or occasionally lots of) imagination to transform a generalized touch sensation into something that matches what you’re seeing inside of your VR headset. With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary: or rather, I can’t imagine how anything short of a full-on Star Trek holodeck would be able to make a tiny deer that feels any more real than the one that napped on my palm last night.
We’re as curious as you probably are about what, exactly, is inside that enormous box. Personally, I think it’s suspicious that the box is just large enough to contain an apple, an ice cube, a very large spider, a very small deer, and a dragon, because that would explain how it works as well as it does. AxonVR mentioned something about “an array of microfluidic actuators,” but we’ve been promised more details once things calm down a bit after CES.
While what we tried out last night is certainly futuristic all by itself, AxonVR’s website promises even more, including an entire haptic exoskeleton that allows you to feel force over your entire body. This video from last April shows a bonkers concept image of what their final system might look like, along with a general description of the tech and some behind-the-scenes footage of the hardware itself:
AxonVR closed a $5.8 million seed round of funding last month, which is a lot, although not a surprising amount assuming they let prospective investors spend five minutes with a hand in that magic box. The company is looking at several different potential markets, including design and manufacturing, training and simulation, and location-based entertainment. That last one is where you’re most likely to be able to experience this for yourself: AxonVR’s press materials suggest that the company doesn’t expect you to have its system in your home; more likely, you’ll find it at arcades or theme parks. There’s no timeline for any of this quite yet, but AxonVR is working on partnerships, and we’re expecting additional announcements later this year.
This year at CES, a little-known company with no booth or speaker slot is quietly talking about a technology that could bring about one of the most visible changes to smartphone design of 2017. It has developed software that would let manufacturers remove a common component and so create almost-edgeless smartphone screens that run up to the very top of the device. And in the past three months, this company has been contacted by every major smartphone manufacturer in the world.
Let’s back up for a moment. On your smartphone right now, there’s probably a little dot or narrow sliver right above your screen that’s a proximity sensor. When you make a call and place the phone to your ear, the screen turns off to save power and prevent you from accidentally hitting buttons with your cheek. It does this by emitting infrared waves and then measuring their reflections to determine how close the phone is to your head.
The original iPhone, released in 2007, was the first smartphone to use a proximity sensor. Since then, the proximity sensor has become a standard mobile component. It’s typically housed in a rectangular bit of casing that takes up about half a centimeter or so of real estate across the top of the screen.
But this staple of smartphone design may be on its way out. In October 2016, Xiaomi announced the Mi MIX, which features a display that runs right up to the top edge of the device, with no proximity sensor in sight. Reviewers raved about the almost-edgeless display and applauded Xiaomi’s ingenuity. Some even hinted that Apple and Samsung would be taking cues from the Mi MIX for their highly anticipated releases of the iPhone 8 and Galaxy S8 in 2017.
Though Xiaomi got the credit, the company behind that almost-edgeless display was Elliptic Labs. And according to an Elliptic representative who spoke with IEEE Spectrum here at CES in Las Vegas, consumers should expect to see many more almost-edgeless smartphone screens debut this year, all inspired by the Mi MIX design.
“Ever since this phone has been released, all the mobile OEMs [original equipment manufacturers] have been contacting us,” said Angelo Assimakopoulos, VP of sales and business development for Elliptic Labs. “Without naming names, I can tell you almost everyone is going this route.”
Elliptic Labs sells software that generates ultrasound pulses from a smartphone’s speaker and measures them using its microphone, as an alternative method of proximity detection. This technique allows manufacturers to get rid of traditional proximity sensors altogether, and so stretch the display all the way to the phone’s upper edge. Ultrasound may also prove more reliable for consumers, since proximity sensors can be affected by smudges or bright light.
The pulses that Elliptic generates are between 30 and 40 kilohertz, well above the 20-kHz limit of human hearing. Assimakopoulos says most smartphone speakers can generate pulses at these frequencies, though the company finds that MEMS speakers work best for the job.
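Elliptic hasn’t published its algorithms, but the time-of-flight principle behind ultrasonic ranging is straightforward to sketch: emit a short burst, locate the echo by cross-correlation, and convert the round-trip delay into a distance. Everything below (sample rate, pulse shape, distances) is illustrative and hypothetical, not Elliptic’s implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature
FS = 192_000            # sample rate in Hz; must exceed twice the 40 kHz pulse

def make_pulse(freq_hz=40_000.0, duration_s=0.001):
    """A short ultrasonic tone burst, like one a phone speaker might emit."""
    t = np.arange(int(FS * duration_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)

def estimate_distance(pulse, recording):
    """Find the echo delay by cross-correlation; convert to one-way distance."""
    corr = np.correlate(recording, pulse, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    round_trip_s = delay_samples / FS
    return SPEED_OF_SOUND * round_trip_s / 2  # sound travels out and back

# Simulate a pulse reflecting off a head 5 cm away (attenuated, in noise).
pulse = make_pulse()
true_distance = 0.05  # meters
delay = int(2 * true_distance / SPEED_OF_SOUND * FS)
recording = np.zeros(delay + pulse.size + 1000)
recording[delay:delay + pulse.size] += 0.3 * pulse
recording += np.random.default_rng(0).normal(0, 0.01, recording.size)

print(f"estimated distance: {estimate_distance(pulse, recording) * 100:.1f} cm")
```

A real implementation would have to contend with speaker ringing, multipath echoes, and the direct speaker-to-microphone path, but the core measurement is this simple.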
The company, which has about 30 employees and operations in the United States, Norway, and China, has worked on its software for almost 10 years. Now, they think conditions are favorable to roll it out. Consumers are watching more videos and playing more games on their smartphones than ever before. In response, manufacturers are producing phones with larger screens and maximizing every square millimeter of space. For example, Samsung has increasingly displayed content along the sides of screens. An obvious next step may be to expand the screen from the phone’s top to bottom, with Elliptic’s help.
Assimakopoulos estimates that by using Elliptic’s software, smartphone manufacturers could increase screen area to between 90 and 95 percent of the phone’s face, from the standard 75 to 80 percent on most smartphones today. The Mi MIX has a screen-to-face ratio of 91.3 percent.
Elliptic’s technology alone won’t allow manufacturers to create entirely edgeless (or, as they’re known in the industry, “bezel-free”) designs, though. Aside from the proximity sensor, several other components such as the home button, camera, and speaker still live on the face of most smartphones.
It’s hard to say whether Elliptic’s technology would lower the cost of manufacturing a smartphone, or add to it. On one hand, it replaces the proximity sensor with software that uses existing hardware. This also frees up some space inside the phone, since the proximity sensor, which itself measures approximately 3 by 2 millimeters, can be removed. On the other hand, manufacturers must pay to license the software, and installing a larger liquid-crystal display adds to a phone’s cost.
Now that Elliptic is drawing attention from manufacturers, Assimakopoulos hopes to work with as many partners as possible by licensing their patented software. He declined to comment on whether Elliptic might also be a candidate for acquisition, which would allow one company to keep its system from competitors. “We’re talking to several OEMs,” he said. “I will tell you that all OEMs have approached us inquiring about our technology since this phone was released.”
Though Assimakopoulos was hush-hush on the details of his conversations with manufacturers, he’s very confident that you’ll be seeing more almost-edgeless smartphone screens very soon. “There will be another phone that shows up this year with this technology,” he said.
In the middle of all of the autonomous car promises, slightly thinner and brighter televisions, and appliances that spy on you in as many different ways as they possibly can were a small handful of social robots. These are robots designed to interact with you at home.
People responding to IEEE Spectrum’s live Twitter feeds as we covered each announcement pointed out that these little white social home robots all look kinda similar to each other, and they also look kinda similar to the little white social home robot that managed to raise $3.7 million on Indiegogo in September of 2014: Jibo.
To show what we’re talking about (if you haven't been following along with our CES coverage, and you totally should be), here are three new social home robots (Kuri, Mykie, and Hub) that were announced Wednesday, along with Jibo for comparison.
White. Curvy and smooth. Big heads on small bodies. An eye or eyes, but no ears or mouth, and no arms. A lot of design similarities with what is arguably the very first social home robot to (promise to) be commercially available (eventually):
The question, though, is just why exactly these smooth roundish curvy big-headed white robots all look the way that they do. Why do they look a bit like Jibo, and why does Jibo look the way it does?
“We designed a very clean, modern looking robot that’s friendly,” Jibo’s VP of Marketing Nancy Dussault-Smith told me yesterday. “I can understand why people want to have that kind of thing in their homes.”
Kaijen Hsiao and Sarah Osentoski, Mayfield Robotics’ CTO and COO, told us something very similar about their robot, Kuri: “People are very picky about what goes in their homes,” says Hsiao. “It’s very hard to build something that matches everyone's decor, and the closest you can come is very minimalist and white. Also, if you want to hide sensors, windows that are transparent to IR are generally black, which is why you see robots with so much black.”
The robots all tend to be smooth and curvy not just because it’s pleasing to the eye (conveying softness with organic and symmetrical shapes), but also because it's safer, especially with a robot that moves or that you're supposed to interact with through touch. And round heads are the easiest to move up and down and rotate while also concealing the mechanical joints and electronics inside.
The specific proportion between the head and the body was, for Jibo, a very carefully thought out design decision, said Dussault-Smith. Jibo’s head is oversized because it’s intended to be somewhat reminiscent of the cuteness of baby animals (humans included), which have disproportionately large heads. For Kuri, practical issues also came into play: the robot needed to be a certain height in order to provide a decent view of your home through its eye camera, which helped define the size of the head and the base needed to keep the robot stable.
Jibo and Kuri also have substantially different philosophies when it comes to eyes. “Our original idea was to have a small screen that had eyes, and we were doing all of these crazy things to try to hide the rest of the screen,” Osentoski told us. “We had decided early on character wise that if you show anything but the eyes on the screen, you destroy the character, because it’s not a face anymore,” continued Hsiao. “Finally, I said, ‘if we only want the screen to show eyes, why don't we just make physical eyes?’”
Meanwhile, “Jibo’s one eye was a very deliberate choice,” said Dussault-Smith. “Two eyes caught you a little in the uncanny valley; it felt a little too real. One eye was still able to have that communication, but without as much of the intimidation of it being like a person.” And Jibo, of course, has a screen that can display all kinds of other information as well.
The struggle to keep robots from being unconsciously anthropomorphized and then failing to live up to human-like expectations is another major driver of social robot design. This is where much of the minimalism comes from: avoiding human features as much as possible, especially around the robot’s face, helps prevent users from feeling that the robot they’re interacting with should respond the way a human would. At this point, robots that try too hard to seem human can only disappoint.
There are some very good reasons why the robots that people like and are comfortable with tend to share design characteristics. Being white helps them fit in with other decor. Being smooth and round helps them be safe. Minimalist faces help keep expectations in check, while round heads are the simplest to build. We're going to see a lot more robots like this, especially if Kuri, Mykie, Hub, and Jibo turn out to be commercially successful. What I find more interesting than how similar they are, though, is why they’re different, and what those differences mean about how these robots will interact with us.
Fundamentally, as Jibo’s Nancy Dussault-Smith points out, “what really differentiates robots is what's on the inside.” Even if all of these social home robots really did look exactly the same, they're intended to do different things in different ways. Maybe some will be more successful than others, or maybe they'll all find their own niches: none of them are for sale yet, so it's much too early to tell, but we're definitely looking forward to finding out.
Most wireless electronic devices look the way they do because of their batteries. Batteries are bulky and rigid components that can’t easily be modified or eliminated. So when product developers design a new device, the battery is often the limiting factor in determining size, shape, and flexibility.
But this year at CES in Las Vegas, Panasonic is showing off a new type of lithium-ion battery that could change that. The razor-thin silver wafer can be twisted or bent 1,000 times and still maintain 80 percent of its capacity. Panasonic presented three versions of the new battery at CES: Each is slightly smaller than a credit card and bendy enough to fit around a soda can’s contours.
The project has been under way since 2008, but Panasonic just started talking about it in September. Yoriko Yagi, assistant chief of planning in Panasonic’s wearable energy department, said the battery is now ready for mass production, which she expects will begin sometime between April 2018 and March 2019.
“Our small devices are increasing, like wearables and IoT devices, and the product design depends on battery size,” Yagi said. “If we want small devices, we need small batteries.”
Panasonic’s bendable battery, which is just 0.45 millimeter thick, is relatively low capacity. The largest version, known as CG-064065, has a maximum capacity of 60 milliampere hours (mAh), and the smallest comes in at 17.5 mAh. For comparison, the largest smartphone batteries boast around 3,500 mAh.
That means the new battery is best suited for wearables, cardlike devices, and Internet of Things applications. The company began providing samples to potential clients in October, but it has not yet publicly stated a price for the battery.
In the future it may be possible to scale up its design to create a flexible smartphone or tablet. “Basically, we can do that, but we’re not targeting on that product,” Yagi says. Lest we forget, high-capacity lithium-ion batteries, like the ones found in smartphones, are prone to fires and explosions (RIP, Galaxy Note 7). Safety concerns have led Panasonic to focus on wearables and IoT devices and their low energy needs, for now.
Plenty of other companies and researchers are working on flexible lithium-ion batteries, but Yagi said Panasonic’s version offers the best performance and most rigorous testing record to date.
To create it, the company had to rethink some classic elements of battery design. Lithium-ion batteries are made up of a cathode (a positively charged electrode, typically a lithium metal oxide) and an anode (a negatively charged electrode, usually graphite). These two layers are separated by an electrolyte, a liquid or gel substance full of ions. As a battery is charged, lithium ions pass through this solution and are stored in the anode. When the battery is powering a circuit, the ions pass back through the electrolyte to the cathode.
In most lithium-ion batteries, the individual cells where all of this happens are cylindrical, with the anode and cathode layers wound around each other. Bending or twisting the cylinder forces the outermost layer to slide relative to the innermost layer. As a result, the electrodes lose their alignment, and the battery's capacity erodes over time.
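The geometry above also explains why a thin, flat stack fares better: when a stack bends through an angle, a layer offset from the neutral axis must slip by roughly (offset × bend angle) relative to it, so a thinner stack means less slip. A quick illustration, with made-up numbers rather than Panasonic's:

```python
import math

def layer_slip_mm(offset_mm, bend_angle_rad):
    """Relative sliding a layer must undergo when the stack bends."""
    return offset_mm * bend_angle_rad

theta = math.pi / 2  # a 90-degree bend
# A layer near the surface of a 0.45 mm wafer sits roughly 0.2 mm off axis;
# the outer turn of a wound cylindrical cell might sit roughly 2 mm off axis.
for offset in (0.2, 2.0):
    print(f"{offset} mm offset -> {layer_slip_mm(offset, theta):.2f} mm of slip")
```

An order of magnitude less sliding between layers means far less misalignment per bend cycle.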
To make a flexible version, Panasonic researchers decided to ditch the cylinder approach and stack the electrodes right on top of each other in a slim rectangular wafer, rather than wrapping them around one another. Then they enclosed the battery in a proprietary flexible casing made of a secret aluminum compound.
How to charge the battery was another challenge. The battery must be recharged wirelessly because Panasonic didn’t want to make room for, say, a bulky USB port on its otherwise slim form. Unfortunately, most wireless chargers on the market deliver so much power that they would overwhelm the tiny device, which can handle a maximum charging current of just 60 milliamperes.
In the end, Panasonic built its own wireless charging stations, which it will sell along with the battery to customers. Yagi said a single charge may last for about four weeks if the battery is used for a relatively simple application such as counting steps in a fitness monitor. With CG-064065 and its siblings poised for mass production, Yagi is looking forward to exploring that possibility, and many more. “I think there’s a lot of potential in the future for this kind of battery,” she said.
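Yagi’s four-week figure implies a very small average current draw from the largest cell. A back-of-the-envelope check (our arithmetic, not Panasonic’s):

```python
CAPACITY_MAH = 60.0   # largest version, CG-064065
HOURS = 4 * 7 * 24    # four weeks = 672 hours

avg_draw_ma = CAPACITY_MAH / HOURS
print(f"average draw: {avg_draw_ma * 1000:.0f} microamps")  # ~89 µA
```

That is well within reach of a step counter that spends most of its time asleep, which is consistent with the low-power applications Panasonic is targeting.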
Editor’s Note: This article has been updated to reflect a corrected thickness of the battery and date range for when the battery will enter mass production by Panasonic.
The beeping, flashing, pulsating glory of the world’s largest consumer electronics trade show has returned to Las Vegas. The first batch of new products and services went on display at CES on Tuesday, and startups and industry giants will debut more gadgets and technologies throughout the week.
Just a few of the curious wares spotted by IEEE Spectrum editors last night include a battery-powered scarf that filters air pollution, a hairbrush that uses sound waves to analyze dryness and frizz, a smart cane that detects falls, and a connected cat feeder that avoids overfeeding by recognizing felines by implanted microchips. Also, a US $120 camera that lets you stare at the inside of your refrigerator, should you ever choose to do that (assuming the milk isn’t blocking the view).
Major technology companies have also begun to make their announcements about new products they will launch in 2017. Qualcomm released its newest chip, the Snapdragon 835, which, rumor has it, could turn up in Samsung Galaxy S8 smartphones later this year. Huawei said its newest Honor smartphone, called the 6X, which boasts a battery life of 2.1 days and costs only $250, is now available in the United States. And Faraday Future unveiled its long-awaited self-parking FF 91 electric car, which integrates more than 30 sensors including cameras and a retractable lidar system to navigate into a parking space all on its own.
Looking at deeper trends, several experts said the most meaningful long-term developments will come from the companies chipping away at voice recognition. Once we master it, they believe, voice-recognition capabilities will fundamentally change the way we interact with and build electronics.
This was a strong element of Tuesday’s analysis of the global consumer market by Shawn DuBravac, chief economist, and Steve Koenig, senior director for market research, of the Consumer Technology Association (CTA), which runs CES. In DuBravac’s opinion, voice-recognition technology has improved enough in the past few years that it is now poised to usher in an era of so-called faceless computing.
In particular, the word error rate for voice-recognition systems dropped from 43 percent in 1995 to just 6.3 percent this year, putting it on par with humans. “We have seen more progress in this technology in the last 30 months than we saw in the first 30 years,” DuBravac said. Ronan de Renesse, a consumer technology analyst for the business intelligence firm Ovum who is also attending CES, told me he was watching a startup called Voicebox, which has worked on voice recognition for partners including Samsung, AT&T, and Toyota.
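Word error rate, the metric behind those figures, is simply a word-level edit distance between what the system heard and a reference transcript, divided by the reference length. A minimal sketch (the example phrases are invented):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits needed to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# Two substituted words out of six: WER of about 0.33
print(word_error_rate("turn on the living room lights",
                      "turn on the living groom light"))
```

A 6.3 percent WER means a system mangles roughly one word in sixteen, which is why short, common voice commands now succeed far more often than they fail.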
In addition to redefining the traditional computer interfaces, voice recognition could improve a host of products that are already on the market. CTA estimates total sales of voice-activated digital assistants such as Google Home or Amazon Echo to be about 5 million units to date, and expects that to double to 10 million in 2017. With all of these products, clarity and functionality are key. DuBravac figures there are currently about 1,500 apps (called “skills” in Amazon-speak) that can interact with Alexa, Amazon’s voice-activated personality, and says he would not be surprised to see 700 new ones announced just this year at CES.
Aside from voice recognition, de Renesse also thinks that virtual reality and augmented reality will be “at the forefront of CES” again this year. These technologies had a flagship 2016 with the release of the HTC Vive and Oculus Rift headsets, but some have since complained that the technology isn’t catching on as quickly as they’d hoped. One reason could be that there’s still a profound lack of high-quality VR and AR content to enjoy for those who do shell out $600 or more for a headset.
DuBravac says these criticisms are partly a symptom of too-high initial expectations for VR, and not necessarily a reflection of the technology itself. He’s still optimistic, however, because he sees companies investing in VR content. As for his assessment of the progress made in 2016: “If you had realistic expectations about what would happen and the deployment of hardware would look like, then I think you saw a market starting to take hold,” he says.
Anshel Sag, an analyst at Moor Insights & Strategy, is also frustrated by the proclamations that VR is struggling. Even though he doesn’t expect any major VR announcements at CES 2017, he says that’s because the product-release cycles of VR companies simply didn’t sync up with CES this year. But he cautions against reading too much into this.
Nonetheless, several headset manufacturers and content developers are planning to put their best foot forward at the show. Samsung will continue to push mobile VR, which operates on less expensive headsets, such as the $60 Samsung Gear, that allow you to insert your smartphone to stream VR. Sag has also been impressed by a company called ODG, which is working on a pair of heavy-duty eyeglasses that convert from viewing in AR to VR.
Funnily enough, CES might also be at least partly to blame for VR criticisms. Every year at the show, analysts and journalists try to predict the new fads and hottest products that will redefine consumer technology as we know it. Too often, they are surprised when those trends fail to materialize or reach the adoption rates they had expected.
The truth is that breakout tech stars are a relatively rare sight, even at CES. In fact, the vast majority of global consumer tech spending—80 percent—goes toward just seven types of products. The CTA’s Steve Koenig calls those technologies the “magnificent seven”: smartphones, laptops, tablets, desktops, digital cameras, TVs, and smart watches (a recent addition, as the Apple Watch outpaced the iPhone in first-year sales).
On their own, smartphones account for a staggering 47 percent of global consumer spending on technology and remain the “center of the consumer tech universe,” as Koenig puts it, with their own ecosystem of apps and services. But to be fair, they were also first released more than a decade ago.
Rather than looking at everything through the lens of mass adoption, DuBravac says the market for most tech products is actually very fragmented. As an example, he points to the wearables market and the $125 VERTbelt for athletes that measures their jumps during practice and games. In his assessment, lots of startups will offer products for a specific use and find plenty of customers without ever reaching mass adoption. And that’s fine, too.
Update: This story was updated on 4 January, when ISRO increased its launch count from 83 to 103 and moved the launch date into February.
As of November, a total of 564 nanosatellites had been launched into space. In February, the Indian Space Research Organisation aims to launch 103 satellites on a single rocket—reportedly a world record. The same month, U.S. startup Spaceflight Industries plans to send up a module designed to support the launch of up to 87 satellites.
Neither the Indian Space Research Organisation (ISRO) nor its commercial arm, Antrix Corporation, responded to requests for comment. But Spaceflight Industries senior mission manager Adam Hadaller described putting together launch missions for large numbers of small satellites as “herding cats…. It’s very hard.”
Once they are in space, nanosatellites, CubeSats, and other small-scale satellites have many applications—from monitoring weather to helping farmers decide where to water or fertilize crops—all at a significantly lower price than traditional-scale satellites. Several startups and space agencies, including ISRO and Spaceflight Industries, are working to launch more and more of them at the same time, further reducing costs.
Launch: The first challenge begins before launch, Hadaller says. Satellites can come from different countries, and it’s necessary to check all the various safety regulations, communication licenses, and technical requirements. The different separation systems, for example, need compatible adapters.
Then there is a choice to make: Piggyback the satellites as secondary payload on a rocket that’s already heading to space, or mount a dedicated mission? When piggybacking, though, satellites don’t get much choice of orbit, which limits the variety of possible scientific experiments.
A SpaceX Falcon 9 rocket is set to launch a Spaceflight Industries module in February called Sherpa—containing small satellites—as secondary payload. In the mission, Falcon 9 will launch its primary payload and then deploy Sherpa after an orbital maneuver. Half an hour later, Sherpa will release its satellites.
Hadaller says that in the case of the Sherpa mission, the main limitation isn’t the module itself but customer interest: As of 12 December, only 33 satellites were on the manifest for its 87-satellite vehicle.
If piggybacking isn’t needed, a dedicated launch can provide better orbital options. For example, on 12 December, Orbital ATK launched a Pegasus rocket containing eight CubeSats designed to monitor hurricane development in the tropics. The satellites deployed at a 510-kilometer altitude at an inclination of 35 degrees; over time, they spread out over the entire orbit. Their inclination gives full coverage of the tropics.
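The spreading described above follows from basic two-body mechanics: satellites deployed at slightly different altitudes have slightly different orbital periods, so the cluster drifts apart along the orbit over time. A minimal sketch, assuming a circular orbit and a spherical Earth (the gravitational parameter and Earth radius are textbook constants, not figures from this article):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def orbital_period_s(altitude_m):
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m            # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

# Period at the 510-kilometer deployment altitude mentioned above
t510 = orbital_period_s(510_000)
print(f"period at 510 km: {t510 / 60:.1f} min")   # roughly 95 minutes

# A satellite just 1 km higher takes slightly longer per orbit;
# that per-orbit lag is what spreads the cluster out over the whole orbit.
lag = orbital_period_s(511_000) - t510
print(f"per-orbit lag for +1 km altitude: {lag:.2f} s")
```

Over hundreds of orbits, even a lag of a second or so per revolution is enough to distribute the satellites evenly around the orbital plane.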
Communications: Usually, satellite owners communicate with their satellites over radio by pointing antennas on the ground at satellite locations. The better the aim, the stronger the signal, so satellite operators find their satellite’s location by using some combination of onboard GPS, trajectory estimation data, large telescope arrays, the JSpOC satellite tracker, or radio ranging.
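To make “pointing antennas at satellite locations” concrete, here is a toy elevation-angle calculation on a spherical Earth. Real ground stations use WGS-84 geodesy and full azimuth/elevation tracking; the spherical model and the function names here are my own simplifications for illustration:

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius, m (spherical approximation)

def station_ecef(lat_deg, lon_deg):
    """Ground-station position in Earth-centered coordinates (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (R_EARTH * math.cos(lat) * math.cos(lon),
            R_EARTH * math.cos(lat) * math.sin(lon),
            R_EARTH * math.sin(lat))

def elevation_deg(station, satellite):
    """Elevation of the satellite above the station's local horizon.

    Positive means the satellite is visible; negative means it is below
    the horizon and the antenna cannot see it."""
    rng = tuple(s - g for s, g in zip(satellite, station))
    dot = sum(r * u for r, u in zip(rng, station))   # 'up' is radial on a sphere
    rng_norm = math.sqrt(sum(r * r for r in rng))
    up_norm = math.sqrt(sum(u * u for u in station))
    return math.degrees(math.asin(dot / (rng_norm * up_norm)))

gs = station_ecef(0.0, 0.0)
overhead = (R_EARTH + 510_000, 0.0, 0.0)  # 510 km directly above the station
print(f"{elevation_deg(gs, overhead):.1f} deg")  # 90.0: point straight up
```

The better the position estimate feeding a calculation like this, the tighter the antenna can track, and the stronger the received signal.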
Even if all the small satellites can be identified, radio interference can be a problem. The frequencies they typically use to communicate could become crowded by other satellites and by ground-based radios or cellphones, says Bruce Yost, who directs a NASA institute for small satellite outreach called the NASA Small Spacecraft Systems Virtual Institute.
He says one solution is to communicate at higher frequencies that are less likely to suffer interference, but this requires extra power. Another, less power-hungry solution researchers are considering is to transmit data from space to ground by laser—the drawback being that the optical link would need “even more accurate pointing” than radio communications.
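Part of the power penalty Yost describes can be seen in the standard free-space path-loss formula, FSPL = 20·log10(4πdf/c): for fixed antenna gains, loss grows with frequency. The specific UHF and S-band frequencies below are common small-satellite choices I have picked for illustration, not figures from the article:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB between isotropic antennas."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 510_000                 # slant range comparable to the orbital altitude, m
uhf = fspl_db(d, 437e6)     # a typical CubeSat UHF downlink frequency
sband = fspl_db(d, 2.2e9)   # a typical S-band downlink frequency
print(f"UHF:    {uhf:.1f} dB")
print(f"S-band: {sband:.1f} dB")
```

Moving from UHF to S-band costs roughly 14 dB here; recovering that loss takes more transmit power or higher-gain (hence more precisely pointed) antennas, which is the same pointing-versus-power trade-off that makes optical links demanding.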
Collision: Mass deployment also runs the risk of becoming a mass of space debris, some say. Spaceflight Industries says its team has not run an updated analysis of the exact probability of its satellites colliding with one another or another object in space, but Hadaller says it is “extremely low.” Also, all the tech meets international space community requirements meant to prevent debris, including deorbiting by the satellite’s 25th year. The Sherpa module itself will stay in orbit for 10 to 18 years and the satellites between 3 and 10 years, before they reenter Earth’s atmosphere and burn up.
Mike Safyan, director of Launch and Regulatory Affairs at Planet Labs, an Earth imaging company that makes small satellites, believes that demand for these large cluster launches is low for now, but “if the companies are successful, then we’ll see more of these kinds of large cluster launches.”
Yost says that there will be at least five NASA-sponsored, small, cube-shaped satellites called CubeSats on the upcoming Spaceflight Industries launch, which has been delayed from late 2016 to 2017.
“The capability of these CubeSats is really, really advancing quickly,” he says. Advancements in computer processors have made it possible to do “extensive” data processing and analysis directly on board a small satellite. Improvements in design and fabrication are also making them more robust, to better survive the harsh environment of space.
Jordi Puig-Suari, an aerospace engineer at California Polytechnic State University, in San Luis Obispo, helped design the original concept for CubeSats. “The timeline is one thing that we have to work on,” he says. “The satellites can be developed very quickly,” but getting them into space might not happen at the same speed.
But, he says, the benefits of mass deployment of small satellites are clear. “Having a larger number of lower-cost missions will allow us to go to a lot more places,” Puig-Suari says.
IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.