Tech Talk

An illustration of a key lock surrounded by electrical cords to represent net neutrality.

Is Net Neutrality Good or Bad for Innovation?

The Trump administration is just days away from taking the reins in Washington, D.C. and many technologists are wondering what its arrival will mean for net neutrality. The new leadership is flanked by advisors who have opposed net neutrality in the past, suggesting that U.S. policy could be challenged or even reversed in the year ahead.

The Obama administration made the strong defense of net neutrality a hallmark of its Internet policy, based on the belief that preserving it benefits consumers and promotes innovation. Now, Trump’s administration is likely to argue the opposite, and say that tossing out net neutrality is really what’s best for consumers, companies, and the U.S. economy.

Major tech companies have also lined up on opposing sides of the debate. Google, Apple, Amazon, and Netflix support net neutrality, while Internet service providers (ISPs) including Comcast, Verizon, and AT&T have long railed against it. Everyone argues that their position will spur innovation and economic growth. So who’s right?

Unfortunately, there is no clear answer. Economists have done plenty of modeling on net neutrality over the past eight years, but there isn’t a strong consensus about whether keeping it or throwing it out would be best for consumers, innovation, or the economy. “Nobody has much data,” admits Gerald Faulhaber, an economist and professor emeritus at the University of Pennsylvania who has done his own review on the matter.

To better understand the most compelling arguments about how changing U.S. policy on net neutrality might boost innovation, or kill it, IEEE Spectrum spoke with Nicholas Economides, an economist at New York University who supports net neutrality, and Faulhaber, who generally opposes it.

Let’s start by defining our terms. As you may recall, net neutrality is a regrettably bland term for an important idea: ISPs such as Verizon and Comcast ought to deliver all online content to consumers in the same way, without granting preferential treatment to any particular content.

The concept can be interpreted in many ways, but in practice it often means prohibiting ISPs from asking content producers, including Netflix and Google, to pay a higher rate so that their content is delivered at faster speeds.

From the perspective of ISPs, not all content is created equal—if millions of Comcast customers want to live stream the Super Bowl from NBC, that’s going to require a lot more bandwidth and network resources from Comcast than if they were all trying to load a simple website. ISPs argue that allowing them to charge content creators (such as NBC) based on how much bandwidth they consume, or how fast their content must be delivered, is more fair.

Clearly, this policy would also generate extra revenue for ISPs, which they say they would reinvest in their own networks. This argument is tempting because right now, many ISPs hold a near-monopoly in their service areas and have little incentive to improve their service (a situation that has led to notoriously low customer satisfaction). If they could charge clients more to move content, ISPs might be more motivated to develop faster service.

Skeptics say that ISPs won’t spend very much money on achieving faster speeds when they could just slow down service for any content producer who doesn’t fork over enough cash. “What matters to consumers is relative speed,” says Economides. “Being able to delay things creates artificial scarcity, for which content providers are willing to pay.”

Economides also says investors will want any profits returned to them, rather than ploughed into infrastructure. “As soon as AT&T makes money from violating net neutrality, there will be tremendous pressure to give that money to the shareholders,” he says.

Meanwhile, net neutrality supporters say the real innovation at stake is that which comes from content producers such as Netflix, Google, and Facebook. If content producers have to pay ISPs in order for their content to be delivered quickly to customers, that money is not being invested back into those companies.

There’s also a lot of concern for the little guys—small content producers who may not be able to afford the rates ISPs will charge for faster delivery. Critics fear that booting net neutrality will create a “two-tiered” Internet wherein wealthy companies pay for content to be delivered at high speeds, while content from startups or small web publishers languishes in the slow lane.

“The greatest threat to innovation is if new companies, innovative companies, have to pay a lot to be on the same playing field as everybody else,” Economides says. Net neutrality supporters worry there might be secondary effects from limiting the free flow of ideas and information online.

In a sense, though, we already have a two-tiered Internet. Wealthy companies can pay for web designers and software engineers to program sites that load faster and respond more quickly to commands. And a handful of publishers already own most of the sites that we spend the majority of our online time perusing.

Faulhaber doesn’t think eliminating net neutrality would deal a devastating blow to small content producers. He compares tiers of Internet speeds to the various delivery services offered by a post office. One company can pay a little extra for expedited service while others who don’t need rushed delivery simply pay the standard rate.

“We've heard this about how only large firms will be able to do it, but if you look at the way the economy works—that’s not true at all,” he says. “The only people who will buy it are those that need it, and if you’re a small firm and you need it, you’ll buy it.”

In fact, Faulhaber thinks new services could pop up—such as in robotic surgery or gaming—if content producers were guaranteed access to a high-speed lane. “I don’t see this as removing opportunities, I see this as creating opportunities for innovation,” he says.

Right now, there are still a lot of unknowns about how a post–net neutrality world would actually operate. It’s not clear how much faster or slower content might be delivered, or what fees an ISP would charge for each service. Given those gaps in knowledge, it’s hard to know how these expenses might impact small or large businesses, and affect ISPs or content producers.

And if net neutrality disappears, what might that mean for consumers? Well, if ISPs could charge content producers more to cover the expense of maintaining their networks, they may charge consumers less for home service. Of course, content producers could also wind up passing the cost of those extra fees along to consumers, zeroing out any savings from the ISPs.

While a drop in prices for consumers is theoretically possible, no one knows if it would actually happen in a post–net neutrality world. “There’s no evidence that if net neutrality were violated today, the price would fall,” Economides says. “There could be some theoretical model in which case it would happen, but there are also other models where it doesn’t happen, so there is no bottom line on that.”

Consumers may also benefit from “zero rating,” a perk that some ISPs have already begun to offer. With zero rating, an ISP forgives any data that a customer spends on specific content. For example, AT&T said in September that it would not count programs streamed through its own DirecTV Now service against customers’ monthly data limit, much to the FCC’s chagrin.

Clearly, net neutrality is a complicated issue with many factors at play that can be tweaked to produce both positive and negative scenarios. To try to sum up all of these variables and their potential outcomes, Economides did an analysis in 2015 to estimate the “total surplus” in the market for Internet service. Total surplus is an economic measure of the combined benefit a market delivers to producers and consumers; the conditions that maximize it are the ones that leave both groups best off overall.

In Economides’ case, he tried to figure out whether keeping or ditching net neutrality would bring us closer to total surplus, by factoring in the profits of ISPs, the profits of content producers, and the benefits to consumers. In the end, he found that sticking to net neutrality “tends to maximize total surplus” for society.
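Stripped of the details of his model, the quantity Economides was maximizing can be written as a simple sum (this is the generic textbook formulation, offered here only as an illustration, not his exact specification):

$$ W = \pi_{\text{ISP}} + \pi_{\text{content}} + CS $$

where $\pi_{\text{ISP}}$ is the profit of the ISPs, $\pi_{\text{content}}$ is the profit of the content producers, and $CS$ is consumer surplus. Whichever policy regime produces the larger $W$ is the one that leaves society best off overall.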

With that in mind, he also thinks preserving net neutrality would promote innovation and economic development across the most businesses. “If you’re a Republican and you believe you should support business, then you should be pro–net neutrality because most businesses benefit from net neutrality,” he says. “The only businesses that benefit from violations of net neutrality are ISPs. If you think about it, 95 percent of businesses in the U.S. are not ISPs.”

Faulhaber, of course, sees any impact from doing away with net neutrality as simply another reasonable cost of doing business. “Just like we see in other businesses that offer different qualities of service, you get to choose what you want, and that’s fine,” he says. “This is standard in the economy.”

So what will actually happen with the new administration? For the past few years, net neutrality has been preserved by the U.S. Federal Communications Commission under Obama’s watch. Now, though, Trump could change all that. His transition team includes several opponents of net neutrality, including Vice President–elect Mike Pence and Rep. Marsha Blackburn. And the trio he selected to coordinate the handoff of the FCC to his new administration are all against it.

In the coming weeks, Trump will also have the opportunity to appoint three new commissioners to lead the FCC, including a chair. His new appointees will join two current Republican members—Ajit Pai and Michael O’Rielly—who have both criticized net neutrality. The FCC is an independent federal agency, which means it’s overseen by Congress, which is also now under Republican control.

That said, anyone who tries to reverse U.S. policy on net neutrality will likely have a difficult road ahead. Public sentiment will not be on their side—when the FCC solicited public comments on the issue in 2014, it received a record 3.7 million comments, with the vast majority in favor of net neutrality.

Discarding it, Economides says, may not be as simple as net neutrality opponents would like to believe. “I’m not saying it’s not going to happen; I’m just saying it’s not going to be easy,” he says.

Intel Project Alloy

CES 2017: Intel's Project Alloy Is the VR System I Want in My Living Room

Over the last year, we’ve seen a transition from the peak of virtual reality (VR) hype into the much more tempered—and consequently less impressive—reality of commercial VR systems. The experience is still good, or as good as can be expected, but VR in general still suffers from a variety of minor annoyances, including the fat cable that connects your headset to your computer, the external tracking systems that have to be set up, and the fact that being immersed in an artificial reality turns everything around you in real life into a collision hazard.

Intel’s Project Alloy is a virtual reality headset that solves all of these problems. It’s completely tetherless—all of the head and body tracking is contained inside the headset itself. It can sense objects in your environment and convert them into virtual representations of themselves, allowing you to wander around your living room without permanent damage to your shins.

Laser

Laser Weapons Will Turn Earth's Atmosphere into Lenses, Deflector Shields

The Earth's atmosphere is a constant annoyance for anyone trying to do anything useful with light. Even if you discount things like clouds, smog, and smoke, there are layers and pockets of air of varying temperatures that routinely make things go all wobbly. This is why most halfway decent telescopes are built on the tops of mountains, and all the best telescopes are out in space.

Things get even more difficult when you're trying to push a lot of light through the atmosphere with the goal of having it all end up exactly where you want it, as is the case with a directed energy weapon. Adaptive optics have been able to help somewhat, but wouldn't it be better if the atmosphere could actually do something useful? You know, for once?

BAE Systems has been working on a way to use lasers to actively reshape the atmosphere to turn it into a variety of optical tools. The Laser Developed Atmospheric Lens system (LDAL) uses powerful laser pulses to make air itself into lenses, mirrors, and even protective deflector shields.

An illustration of a brain linked to multiple computer screens on desktops where people sit is evocative of a MOOC.

How the Pioneers of the MOOC Got It Wrong

In 2011, when Stanford computer scientists Sebastian Thrun and Peter Norvig came up with the bright idea of streaming their artificial-intelligence lectures over the Internet, they knew it was an inventive departure from the usual college course. For hundreds of years, professors had lectured to groups of no more than a few hundred students. But MOOCs—massive open online courses—made it possible to reach many thousands at once. Through the extraordinary reach of the Internet, learners could log on to lectures streamed to wherever they happened to be. To date, about 58 million people have signed up for a MOOC.

Familiar with the technical elements required for a MOOC—video streaming, IT infrastructure, the Internet—MOOC developers put code together to send their lectures into cyberspace. When more than 160,000 enrolled in Thrun and Norvig’s introduction to artificial intelligence MOOC, the professors thought they held a tiger by the tail. Not long after, Thrun cofounded Udacity to commercialize MOOCs. He predicted that in 50 years, streaming lectures would so subvert face-to-face education that only 10 higher-education institutions would remain. Our quaint campuses would become obsolete, replaced by star faculty streaming lectures on computer screens all over the world. Thrun and other MOOC evangelists imagined they had inspired a revolution, overthrowing a thousand years of classroom teaching.

These MOOC pioneers were therefore stunned when their online courses didn’t perform anything like they had expected. At first, the average completion rate for MOOCs was less than 7 percent. Completion rates have since gone up a bit, to a median of about 12.6 percent, although there’s considerable variation from course to course. While a number of factors contribute to the completion rate, my own observation is that students who have to pay a fee to enroll tend to be more committed to finishing the course. 

schematic of new BICSEL laser

Supremely Small BICSEL Laser Traps Light in Open Air

 Tapping into an idea from quantum mechanics that dates back to the Jazz Age, researchers have created a new type of laser that could be much tinier than conventional lasers, potentially leading to faster optical communications and more powerful computers.

The laser relies on a phenomenon known as bound states in the continuum (BICs), which allows researchers to build a laser cavity in open air. “It’s not every day that you have the possibility to make a new type of laser,” says Boubacar Kante, a professor of electrical and computer engineering at the University of California, San Diego, who with his colleagues described their laser in this week’s issue of Nature.

An editor wears a set of VR goggles while sitting in front of an ominous large metal box

CES 2017: AxonVR Brings Touch and Temperature to VR

This is my tenth year at CES. Every year, I spend much of the show wondering why I put myself through it. And then, maybe once per show, I get reminded of why I’m so lucky to be here doing what I do.

Last night, AxonVR reminded me that technology can be absolutely magical when a tiny virtual deer took a warm and fluffy nap on my outstretched palm.

We use the term virtual reality to refer to headsets like the Oculus Rift and the HTC Vive: they make it look, and sound, like you’re in a virtual world. As much of an achievement as it is to fool our sight and hearing, we have other senses that we use to experience reality. The vast majority of VR systems completely ignore these senses, like touch, because they’re even more challenging to replicate: I mean, how the heck could you possibly create an arbitrary touch sensation instantly on demand? I have no idea. But AxonVR does.

This monster is AxonVR’s HaptX system. It’s an enormous metal box with some buttons and blinking lights, and it occasionally makes ominous noises. In case you can’t tell from the way it looks, it’s very much a prototype. You put on an HTC Vive headset, and then stick your left hand into that black protruding rectangle at the front, palm up.

Both IEEE Spectrum editor Amy Nordrum and I tried this thing out, and we really had no idea what to expect: AxonVR wouldn’t tell us exactly what was going to happen beforehand, almost certainly so they could watch us totally freak out, because we totally freaked out. Or at least, I did; Amy was much more professional about the whole thing.

Here’s how she describes the experience:

I poke an apple and bring it over to drop it on my left hand. Immediately, I feel the familiar shape of an apple in my hand. It's round and smooth on every side, just like it looks. Neat.

Next, I select a nondescript cube and place it in my palm. I feel its slight weight. Then, it starts to change colors. First it’s red, and my hand feels warmer. Then it’s blue and my hand is cooler. The change in color and the change in sensation happens simultaneously. I wasn't expecting to feel a temperature gradient through this haptic VR system, and it’s a pleasant surprise.

I see a grenade on the table and drop it on my hand. The grenade erupts and spews flames down, engulfing my palm and fingers. In addition to feeling the heat, I feel that heat come in waves as the flames spread over my hand.

The obvious next choice is a large piece of ice waiting helpfully on the table. It settles in my hand with a soothing chill. I move it across my fingers and over my palm, and the sensation follows with no detectable latency.

Then, I select an object I’ve been avoiding— a spider, with its angled legs and erratic movements. The lightness of the spider’s legs, spread over the eight points of its wide stance, immediately transfers to my palm. Then, the spider starts to move. It picks up its front two legs and I feel a difference in sensation, with perhaps slightly more weight at the other six points. The spider shuffles around and I feel its many legs, like eight little pencil points pressing lightly on my palm.

Lastly, I select a miniature deer, beige with a large rack of black antlers, that has been resting off to the side since I began the demo. I drop the deer into my hand and feel its hooves plant firmly in my palm. The sensation is not at all like the spider, and it shouldn’t be, with more heft behind each hoof. The deer fidgets and I feel its weight shift in my palm as it turns, and the stroke of a hoof when it starts to paw at my hand.

Then, the deer starts to lay down. Its legs fold up beneath it and its furry belly rests in the center of my palm. I can feel it breathe, and the spot on my hand where it’s resting starts to warm. The experience of holding a tiny deer in my hand and feeling its hooves is a neat trick, but the ability to actually feel it breathe and sense the warmth of its furry body brings on an entirely different dimension of emotion. It feels like my pet, and I want to keep it.

The demo ends with a grand finale whereby a flying dragon alights on the top of the silver box in front of me and blows a mouthful of flames onto my hand with spectacular flair. It is a good dragon, and the flames feel sufficiently fiery. Still, it is that tiny deer that leaves the fondest impression, and leaves me feeling blown away at the sensitivity and fidelity of the AxonVR haptic technology.

Somehow, this mystery box is able to replicate tactile and thermal sensations, with force feedback. It did it all at once, and very compellingly. I spent most of the demo giggling like a little kid. The resolution of the tactile sensations was spectacular, from the edges of a cube, to the roundness of an apple, to the tiny prancing hooves of the deer. And when the deer tucked its little hooves underneath itself and lay down, I could feel how warm and soft its belly was. The warmth was immediate, just like you’d expect, and it was the same with the dragon blowing fire, or the ice cube sliding around my palm: there wasn’t any noticeable latency, and the thermal sensations were powerful enough to be completely convincing. Ice felt like ice feels, and fire felt like you’d imagine fire feeling if you could hold it without actually burning yourself.

Saying that AxonVR has come up with a convincing simulation would be underselling what the HaptX experience is like. Other haptic systems we’ve tried require some (or occasionally lots of) imagination to transform a generalized touch sensation into something that matches what you’re seeing inside of your VR headset. With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary: or rather, I can’t imagine how anything short of a full-on Star Trek holodeck would be able to make a tiny deer that feels any more real than the one that napped on my palm last night.

We’re as curious as you probably are about what, exactly, is inside that enormous box. Personally, I think it’s suspicious that the box is just large enough to contain an apple, an ice cube, a very large spider, a very small deer, and a dragon, because that would explain how it works as well as it does. AxonVR mentioned something about “an array of microfluidic actuators,” but we’ve been promised more details once things calm down a bit after CES.

While what we tried out last night is certainly futuristic all by itself, AxonVR’s website promises even more, including an entire haptic exoskeleton that allows you to feel force over your entire body. This video from last April shows a bonkers concept image of what their final system might look like, along with a general description of the tech and some behind the scenes on the hardware itself:

AxonVR closed a $5.8 million seed round of funding last month, which is a lot, although not a surprising amount assuming they let prospective investors spend five minutes with a hand in that magic box. The company is looking at several different potential markets, including design and manufacturing, training and simulation, and location-based entertainment. That last one is where you’re most likely to be able to experience this for yourself: AxonVR’s press materials suggest that the company is not imagining you would have their system in your home, but more likely, you’ll find it at arcades or theme parks. There’s no timeline for any of this quite yet, but AxonVR is working on partnerships, and we’re expecting additional announcements later this year.

We can’t wait.

A Sony Android smartphone, a Mi Mix smartphone, and an iPhone placed side by side, showing the larger screen of the Mi Mix

CES 2017: Little-Known Elliptic Labs Could Reshape the Smartphone Industry

This year at CES, a little-known company with no booth or speaker slot is quietly talking about a technology that could bring about one of the most visible changes to smartphone design of 2017. It has developed software that would let manufacturers remove a common component and so create almost-edgeless smartphone screens that run up to the very top of the device. And in the past three months, this company has been contacted by every major smartphone manufacturer in the world.

Let’s back up for a moment. On your smartphone right now, there’s probably a little dot or narrow sliver right above your screen that’s a proximity sensor. When you make a call and place the phone to your ear, the screen turns off to save power and prevent you from accidentally hitting buttons with your cheek. It does this by emitting infrared waves and then measuring their reflections to determine how close the phone is to your head.

The original iPhone, released in 2007, was the first smartphone to use a proximity sensor. Since then, the proximity sensor has become a standard mobile component. It’s typically housed in a rectangular bit of casing that takes up about half a centimeter or so of real estate across the top of the screen.

But this staple of smartphone design may be on its way out. In October 2016, Xiaomi announced the Mi MIX, which features a display that runs right up to the top edge of the device, with no proximity sensor in sight. Reviewers raved about the almost-edgeless display and applauded Xiaomi’s ingenuity. Some even hinted that Apple and Samsung would be taking cues from the Mi MIX for their highly anticipated releases of the iPhone 8 and Galaxy S8 in 2017.

Though Xiaomi got the credit, the company behind that almost-edgeless display was Elliptic Labs. And according to an Elliptic representative who spoke with IEEE Spectrum here at CES in Las Vegas, consumers should expect to see many more almost-edgeless smartphone screens debut this year, all inspired by the Mi MIX design.

“Ever since this phone has been released, all the mobile OEMs [original equipment manufacturers] have been contacting us,” said Angelo Assimakopoulos, VP of sales and business development for Elliptic Labs. “Without naming names, I can tell you almost everyone is going this route.”

Elliptic Labs sells software that generates ultrasound pulses from a smartphone’s speaker and measures them using its microphone, as an alternative method of proximity detection. This technique allows manufacturers to get rid of traditional proximity sensors altogether, and so stretch the display all the way to the phone’s upper edge. Ultrasound may also prove more reliable for consumers, since proximity sensors can be affected by smudges or bright light.

The pulses that Elliptic generates are between 30 and 40 kilohertz, well above the 20-kilohertz limit of human hearing. Assimakopoulos says most smartphone speakers can generate pulses at these frequencies, though the company finds that MEMS speakers work best for generating them.
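Elliptic hasn’t published the details of its detection algorithm, but the underlying physics is ordinary acoustic time of flight. Here is a minimal sketch of the idea, with assumed illustrative values for the echo delay and speed of sound (not Elliptic’s actual parameters):

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 degrees C

def distance_from_echo(delay_s: float) -> float:
    """Estimate the distance to a reflecting object (e.g., your head)
    from the round-trip delay of an ultrasonic pulse."""
    return SPEED_OF_SOUND * delay_s / 2  # the pulse travels out and back

# A 35 kHz pulse (inaudible, since hearing tops out near 20 kHz) whose echo
# returns after 0.3 milliseconds implies the phone is about 5 cm from the ear.
print(distance_from_echo(0.0003))  # ~0.051 meters
```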

The company, which has about 30 employees and operations in the United States, Norway, and China, has worked on its software for almost 10 years. Now, they think conditions are favorable to roll it out. Consumers are watching more videos and playing more games on their smartphones than ever before. In response, manufacturers are producing phones with larger screens and maximizing every square millimeter of space. For example, Samsung has increasingly displayed content along the sides of screens. An obvious next step may be to expand the screen from the phone’s top to bottom, with Elliptic’s help.

Assimakopoulos estimates that by using Elliptic’s software, smartphone manufacturers could increase screen area to between 90 and 95 percent of the phone’s face, up from the standard 75 to 80 percent on most smartphones today. The Mi MIX has a screen-to-face ratio of 91.3 percent.
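The ratio itself is just display area divided by the area of the phone’s front face. A quick sketch with made-up dimensions (not measurements of any actual phone) shows how much a taller display moves the number:

```python
def screen_to_face_ratio(screen_w_mm, screen_h_mm, body_w_mm, body_h_mm):
    """Fraction of the phone's front face occupied by the display."""
    return (screen_w_mm * screen_h_mm) / (body_w_mm * body_h_mm)

# A hypothetical 75 x 150 mm phone with a 70 x 125 mm display sits near today's norm:
print(f"{screen_to_face_ratio(70, 125, 75, 150):.1%}")  # ~77.8%

# Stretching the same display to 72 x 140 mm pushes it close to Mi MIX territory:
print(f"{screen_to_face_ratio(72, 140, 75, 150):.1%}")  # ~89.6%
```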

Elliptic’s technology alone won’t allow manufacturers to create entirely edgeless (or, as they’re known in the industry, “bezel-free”) designs, though. Aside from the proximity sensor, several other components such as the home button, camera, and speaker still live on the face of most smartphones.

It’s hard to say whether Elliptic’s technology would lower the cost of manufacturing a smartphone, or add to it. On one hand, it replaces the proximity sensor with software that uses existing hardware. This also frees up some space inside the phone, since the proximity sensor, which itself measures approximately 3 by 2 millimeters, can be removed. On the other hand, manufacturers must pay to license the software, and installing a larger liquid-crystal display adds to a phone’s cost.

Now that Elliptic is drawing attention from manufacturers, Assimakopoulos hopes to work with as many partners as possible by licensing their patented software. He declined to comment on whether Elliptic might also be a candidate for acquisition, which would allow one company to keep their system from competitors. “We’re talking to several OEMs,” he said. “I will tell you that all OEMs have approached us inquiring about our technology since this phone was released.”

Though Assimakopoulos was hush-hush on the details of his conversations with manufacturers, he’s very confident that you’ll be seeing more almost-edgeless smartphone screens very soon. “There will be another phone that shows up this year with this technology,” he said.   

A rounded white robot that looks like it has a giant eye with a black pupil and white iris

CES 2017: Why Every Social Robot at CES Looks Alike

In the middle of all of the autonomous car promises, slightly thinner and brighter televisions, and appliances that spy on you in as many different ways as they possibly can was a small handful of social robots. These are robots designed to interact with you at home.

People responding to IEEE Spectrum’s live Twitter feeds as we covered each announcement pointed out that these little white social home robots all look kinda similar to each other, and they also look kinda similar to that little white social home robot that managed to raise $3.7 million on Indiegogo in September 2014: Jibo.

To show what we’re talking about (if you haven't been following along with our CES coverage, and you totally should be), here are three new social home robots (Kuri, Mykie, and Hub) that were announced Wednesday, along with Jibo for comparison.

White. Curvy and smooth. Big heads on small bodies. An eye or eyes, but no ears or mouth, and no arms. A lot of design similarities with what is arguably the very first social home robot to (promise to) be commercially available (eventually):

The question, though, is just why exactly these smooth roundish curvy big-headed white robots all look the way that they do. Why do they look a bit like Jibo, and why does Jibo look the way it does?

“We designed a very clean, modern looking robot that’s friendly,” Jibo’s VP of Marketing Nancy Dussault-Smith told me yesterday. “I can understand why people want to have that kind of thing in their homes.”

Kaijen Hsiao and Sarah Osentoski, Mayfield Robotics’ CTO and COO, told us something very similar about their robot, Kuri: “People are very picky about what goes in their homes,” says Hsiao. “It’s very hard to build something that matches everyone's decor, and the closest you can come is very minimalist and white. Also, if you want to hide sensors, windows that are transparent to IR are generally black, which is why you see robots with so much black.”

The robots all tend to be smooth and curvy not just because it’s pleasing to the eye (conveying softness with organic and symmetrical shapes), but also because it's safer, especially with a robot that moves or that you're supposed to interact with through touch. And round heads are the easiest to move up and down and rotate while also concealing the mechanical joints and electronics inside.

The specific proportion between the head and the body was, for Jibo, a very carefully thought out design decision, said Dussault-Smith. Jibo’s head is oversized because it’s intended to be somewhat reminiscent of the cuteness of baby animals (humans included), which have disproportionately large heads. For Kuri, practical issues also came into play: the robot needed to be a certain height in order to provide a decent view of your home through its eye camera, which helped define the size of the head and the base needed to keep the robot stable.

Jibo and Kuri also have substantially different philosophies when it comes to eyes. “Our original idea was to have a small screen that had eyes, and we were doing all of these crazy things to try to hide the rest of the screen,” Osentoski told us. “We had decided early on character wise that if you show anything but the eyes on the screen, you destroy the character, because it’s not a face anymore,” continued Hsiao. “Finally, I said, ‘if we only want the screen to show eyes, why don't we just make physical eyes?’”

Meanwhile, “Jibo’s one eye was a very deliberate choice,” said Dussault-Smith. “Two eyes caught you a little in the uncanny valley; it felt a little too real. One eye was still able to have that communication, but without as much of the intimidation of it being like a person.” And Jibo, of course, has a screen that can display all kinds of other information as well.

The struggle to keep robots from being unconsciously anthropomorphized and then failing to live up to human-like expectations is another major driver of social robot design. This is where much of the minimalism comes from: avoiding human features as much as possible, especially around the robot’s face, helps prevent users from feeling that the robot they’re interacting with should respond the way a human would. At this point, robots that try too hard to seem human can only disappoint.

There are some very good reasons why the robots that people like and are comfortable with tend to share design characteristics. Being white helps them fit in with other decor. Being smooth and round helps them be safe. Minimalist faces help keep expectations in check, while round heads are the simplest to build. We're going to see a lot more robots like this, especially if Kuri, Mykie, Hub, and Jibo turn out to be commercially successful. What I think is more interesting than focusing on how similar they are is to look instead at why they’re different, and what those differences mean about how those robots will interact with us.

Fundamentally, as Jibo’s Nancy Dussault-Smith points out, “what really differentiates robots is what's on the inside.” Even if all of these social home robots really did look exactly the same, they're intended to do different things in different ways. Maybe some will be more successful than others, or maybe they'll all find their own niches: none of them are for sale yet, so it's much too early to tell, but we're definitely looking forward to finding out.

A hand flexing between its fingers a clear plastic sheet with a dark rectangle in the center

CES 2017: Panasonic Shows Off a Bendable Lithium-ion Battery for IoT, Wearables

Most wireless electronic devices look the way they do because of their batteries. Batteries are bulky and rigid components that can’t easily be modified or eliminated. So when product developers design a new device, the battery is often the limiting factor in determining size, shape, and flexibility.

But this year at CES in Las Vegas, Panasonic is showing off a new type of lithium-ion battery that could change that. The razor-thin silver wafer can be twisted or bent 1,000 times and still maintain 80 percent of its capacity. Panasonic presented three versions of the new battery at CES: Each is slightly smaller than a credit card and bendy enough to fit around a soda can’s contours. 

The project has been under way since 2008, but Panasonic just started talking about it in September. Yoriko Yagi, assistant chief of planning in Panasonic’s wearable energy department, said the battery is now ready for mass production, which she expects will begin sometime between April 2018 and March 2019.

“Our small devices are increasing, like wearables and IoT devices, and the product design depends on battery size,” Yagi said. “If we want small devices, we need small batteries.”

Panasonic’s bendable battery, which is just 0.45 millimeter thick, has a relatively low capacity. The largest version, known as CG-064065, has a maximum capacity of 60 milliampere hours (mAh), and the smallest comes in at 17.5 mAh. For comparison, the largest smartphone batteries boast around 3,500 mAh.
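Put side by side (a quick back-of-the-envelope comparison using the figures quoted above):

```python
flexible_max_mah = 60      # largest of the flexible cells (CG-064065)
flexible_min_mah = 17.5    # smallest flexible cell
smartphone_mah = 3500      # roughly the largest smartphone batteries today

print(f"{smartphone_mah / flexible_max_mah:.0f}x")  # ~58x the largest flexible cell
print(f"{smartphone_mah / flexible_min_mah:.0f}x")  # 200x the smallest
```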

That means the new battery is best suited for wearables, cardlike devices, and Internet of Things applications. The company began providing samples to potential clients in October, but it has not yet publicly stated a price for the battery.

In the future it may be possible to scale up its design to create a flexible smartphone or tablet. “Basically, we can do that, but we’re not targeting on that product,” Yagi says. Lest we forget, high-capacity lithium-ion batteries, like the ones found in smartphones, are prone to fires and explosions (RIP, Galaxy Note 7). Safety concerns have led Panasonic to focus on wearables and IoT devices and their low energy needs, for now.

Plenty of other companies and researchers are working on flexible lithium-ion batteries, but Yagi said Panasonic’s version offers the best performance and most rigorous testing record to date.

To create it, the company had to rethink some classic elements of battery design. Lithium-ion batteries are made up of an anode—a negatively charged electrode, typically made of graphite—as well as a cathode, a positively charged electrode typically made of a lithium metal oxide. These two layers are separated by an electrolyte, which is a liquid or gel substance full of ions. As a battery is charged, lithium ions pass through this solution and are stored in the anode. When the battery is powering a circuit, the ions pass back through the electrolyte to the cathode.
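For a conventional graphite/lithium-cobalt-oxide cell (a generic textbook example; Panasonic hasn’t disclosed the exact chemistry of its flexible battery), the discharge reactions at the two electrodes can be written as:

$$ \text{anode (graphite):}\quad \mathrm{Li}_x\mathrm{C}_6 \rightarrow \mathrm{C}_6 + x\,\mathrm{Li}^+ + x\,e^- $$

$$ \text{cathode (metal oxide):}\quad \mathrm{Li}_{1-x}\mathrm{CoO}_2 + x\,\mathrm{Li}^+ + x\,e^- \rightarrow \mathrm{LiCoO}_2 $$

Charging runs both reactions in reverse, shuttling the lithium ions back into the graphite.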

Most of the time in lithium-ion batteries, the individual cells where all of this happens are cylindrical in shape, with the anode and cathode layers wrapped around each other. Bending or twisting the cylinder causes the outermost layer to move further relative to the innermost layer. As a result, the electrodes lose their alignment, and the battery capacity erodes over time.

To make a flexible version, Panasonic researchers decided to ditch the cylinder approach and stack the electrodes right on top of each other in a slim rectangular wafer, rather than wrapping them around one another. Then they enclosed the battery in a proprietary flexible casing made of a secret aluminum compound.

How to charge the battery was another challenge. The battery must be recharged wirelessly because Panasonic didn’t want to make room for, say, a thick USB cable port on its otherwise slim form. Unfortunately, most wireless chargers on the market deliver so much power that it would overwhelm the capacity of the tiny device, which can handle a maximum charging current of just 60 milliamperes.

In the end, Panasonic built its own wireless charging stations, which it will sell along with the battery to customers. Yagi said a single charge may last for about four weeks if the battery is used for a relatively simple application such as counting steps in a fitness monitor. With CG-064065 and its siblings poised for mass production, Yagi is looking forward to exploring that possibility, and many more. “I think there’s a lot of potential in the future for this kind of battery,” she said.
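That four-week estimate implies a very small average current draw. Here is a rough sanity check using the 60-mAh capacity quoted earlier (assuming the full capacity is usable and the load is roughly constant, which is a simplification):

```python
capacity_mah = 60            # largest flexible cell
runtime_hours = 4 * 7 * 24   # about four weeks

avg_current_ma = capacity_mah / runtime_hours
print(f"{avg_current_ma:.3f} mA")  # ~0.089 mA average draw for a simple step counter
```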

Editor’s Note: This article has been updated to reflect a corrected thickness of the battery and date range for when the battery will enter mass production by Panasonic.

A man with a goatee holds a thick pair of opaque, rounded, glasses up to his eyes

CES 2017: The Year of Voice Recognition

The beeping, flashing, pulsating glory of the world’s largest consumer electronics trade show has returned to Las Vegas. The first batch of new products and services went on display at CES on Tuesday, and startups and industry giants will debut more gadgets and technologies throughout the week.

Just a few of the curious wares spotted by IEEE Spectrum editors last night include a battery-powered scarf that filters air pollution, a hairbrush that uses sound waves to analyze dryness and frizz, a smart cane that detects falls, and a connected cat feeder that avoids overfeeding by recognizing felines by implanted microchips. Also, a US $120 camera that lets you stare at the inside of your refrigerator, should you ever choose to do that (assuming the milk isn’t blocking the view).

Major technology companies have also begun to make their announcements about new products they will launch in 2017. Qualcomm released its newest chip, the Snapdragon 835, which, rumor has it, could turn up in Samsung Galaxy S8 smartphones later this year. Huawei said its newest Honor smartphone, called the 6X, which boasts a battery life of 2.1 days and costs only $250, is now available in the United States. And Faraday Future unveiled its long-awaited self-parking FF 91 electric car, which integrates more than 30 sensors including cameras and a retractable lidar system to navigate into a parking space all on its own.

Looking at deeper trends, several experts said the most meaningful long-term developments will come from the companies chipping away at voice recognition. Once we master it, they believe, voice-recognition capabilities will fundamentally change the way we interact with and build electronics.

This was a strong element of Tuesday’s analysis of the global consumer market by Shawn DuBravac, chief economist, and Steve Koenig, senior director for market research, of the Consumer Technology Association (CTA), which runs CES.  In DuBravac’s opinion, voice-recognition technology has improved enough in the past few years that it is now poised to usher in an era of so-called faceless computing.

In particular, the word error rate for voice-recognition systems dropped from 43 percent in 1995 to just 6.3 percent this year, and is now on par with humans. “We have seen more progress in this technology in the last 30 months than we saw in the first 30 years,” DuBravac said. Another analyst I spoke to at CES, Ronan de Renesse of the business intelligence firm Ovum, said he was watching a startup called Voicebox, which has worked on voice recognition for partners including Samsung, AT&T, and Toyota.

In addition to redefining traditional computer interfaces, voice recognition could improve a host of products that are already on the market. CTA estimates total sales of voice-activated digital assistants such as Google Home or Amazon Echo to be about 5 million units to date, and expects that to double to 10 million in 2017. With all of these products, clarity and functionality are key. DuBravac figures there are currently about 1,500 apps (called “skills” in Amazon-speak) that can interact with Alexa, Amazon’s voice-activated personality, and says he would not be surprised to see 700 new ones announced just this year at CES.

Aside from voice recognition, de Renesse also thinks that virtual reality and augmented reality will be “at the forefront of CES” again this year. These technologies had a flagship 2016 with the release of the HTC Vive and Oculus Rift headsets, but some have since complained that the technology isn’t catching on as quickly as they’d hoped. One reason could be that there’s still a profound lack of high-quality VR and AR content to enjoy for those who do shell out $600 or more for a headset.

DuBravac says these criticisms are partly a symptom of too-high initial expectations for VR, and not necessarily a reflection of the technology itself. He’s still optimistic, however, because he sees companies investing in VR content. As for his assessment of the progress made in 2016: “If you had realistic expectations about what would happen and the deployment of hardware would look like, then I think you saw a market starting to take hold,” he says.

Anshel Sag, an analyst at Moor Insights & Strategy, is also frustrated by the proclamations that VR is struggling. Even though he doesn’t expect any major VR announcements at CES 2017, he says that’s because the product-release cycles of VR companies simply didn’t sync up with CES this year. But he cautions against reading too much into that.

Nonetheless, several headset manufacturers and content developers are planning to put their best foot forward at the show. Samsung will continue to push mobile VR, which runs on less expensive headsets, such as the $60 Samsung Gear VR, that let you insert your smartphone to stream VR. Sag has also been impressed by a company called ODG, which is working on a pair of heavy-duty eyeglasses that convert from viewing in AR to VR.

Funnily enough, CES might also be at least partly to blame for VR criticisms. Every year at the show, analysts and journalists try to predict the new fads and hottest products that will redefine consumer technology as we know it. Too often, they are surprised when those trends fail to materialize or reach the adoption rates they had expected.

The truth is that breakout tech stars are a relatively rare sight, even at CES. In fact, the vast majority of global consumer tech spending—80 percent—goes toward just seven types of products. The CTA’s Steve Koenig calls those technologies the “magnificent seven.” That includes smartphones, laptops, tablets, desktops, digital cameras, TVs, and smart watches (a recent addition, as the Apple Watch outpaced the iPhone in first-year sales).

On their own, smartphones account for a staggering 47 percent of global consumer spending on technology and remain the “center of the consumer tech universe,” as Koenig puts it, with their own ecosystem of apps and services. But to be fair, they were also first released more than a decade ago.

Rather than looking at everything through the lens of mass adoption, DuBravac says the market for most tech products is actually very fragmented. As an example, he points to the wearables market and the $125 VERTbelt for athletes that measures their jumps during practice and games. In his assessment, lots of startups will offer products for a specific use and find plenty of customers without ever reaching mass adoption. And that’s fine, too.
