Tech Talk

schematic of new BICSEL laser

Supremely Small BICSEL Laser Traps Light in Open Air

 Tapping into an idea from quantum mechanics that dates back to the Jazz Age, researchers have created a new type of laser that could be much tinier than conventional lasers, potentially leading to faster optical communications and more powerful computers.

The laser relies on a phenomenon known as bound states in the continuum (BICs), which allows researchers to build a laser cavity in open air. “It’s not every day that you have the possibility to make a new type of laser,” says Boubacar Kante, a professor of electrical and computer engineering at the University of California, San Diego, who with his colleagues described their laser in this week’s issue of Nature.

An editor wears a set of VR goggles while sitting in front of an ominous large metal box

CES 2017: AxonVR Brings Touch and Temperature to VR

This is my tenth year at CES. Every year, I spend much of the show wondering why I put myself through it. And then, maybe once per show, I get reminded of why I’m so lucky to be here doing what I do.

Last night, AxonVR reminded me that technology can be absolutely magical, when a tiny virtual deer took a warm and fluffy nap on my outstretched palm.

We use the term virtual reality to refer to headsets like the Oculus Rift and the HTC Vive: they make it look, and sound, like you’re in a virtual world. As much of an achievement as it is to fool our sight and hearing, we have other senses that we use to experience reality. The vast majority of VR systems completely ignore these senses, like touch, because they’re even more challenging to replicate: I mean, how the heck could you possibly create an arbitrary touch sensation instantly on demand? I have no idea. But AxonVR does.

This monster is AxonVR’s HaptX system. It’s an enormous metal box with some buttons and blinking lights, and it occasionally makes ominous noises. In case you can’t tell from the way it looks, it’s very much a prototype. You put on an HTC Vive headset, and then stick your left hand into that black protruding rectangle at the front, palm up.

Both IEEE Spectrum editor Amy Nordrum and I tried this thing out, and we really had no idea what to expect: AxonVR wouldn’t tell us exactly what was going to happen beforehand, almost certainly so they could watch us totally freak out, because we totally freaked out. Or at least, I did; Amy was much more professional about the whole thing.

Here’s how she describes the experience:

I poke an apple and bring it over to drop it on my left hand. Immediately, I feel the familiar shape of an apple in my hand. It's round and smooth on every side, just like it looks. Neat.

Next, I select a nondescript cube and place it in my palm. I feel its slight weight. Then, it starts to change colors. First it’s red, and my hand feels warmer. Then it’s blue and my hand is cooler. The change in color and the change in sensation happens simultaneously. I wasn't expecting to feel a temperature gradient through this haptic VR system, and it’s a pleasant surprise.

I see a grenade on the table and drop it on my hand. The grenade erupts and spews flames down, engulfing my palm and fingers. In addition to feeling the heat, I feel that heat come in waves as the flames spread over my hand.

The obvious next choice is a large piece of ice waiting helpfully on the table. It settles in my hand with a soothing chill. I move it across my fingers and over my palm, and the sensation follows with no detectable latency.

Then, I select an object I’ve been avoiding: a spider, with its angled legs and erratic movements. The lightness of the spider’s legs, spread over the eight points of its wide stance, immediately transfers to my palm. Then, the spider starts to move. It picks up its front two legs and I feel a difference in sensation, with perhaps slightly more weight at the other six points. The spider shuffles around and I feel its many legs, like eight little pencil points pressing lightly on my palm.

Lastly, I select a miniature deer, beige with a large rack of black antlers, that has been resting off to the side since I began the demo. I drop the deer into my hand and feel its hooves plant firmly in my palm. The sensation is not at all like the spider, and it shouldn’t be, with more heft behind each hoof. The deer fidgets and I feel its weight shift in my palm as it turns, and the stroke of a hoof when it starts to paw at my hand.

Then, the deer starts to lay down. Its legs fold up beneath it and its furry belly rests in the center of my palm. I can feel it breathe, and the spot on my hand where it’s resting starts to warm. The experience of holding a tiny deer in my hand and feeling its hooves is a neat trick, but the ability to actually feel it breathe and sense the warmth of its furry body brings on an entirely different dimension of emotion. It feels like my pet, and I want to keep it.

The demo ends with a grand finale in which a flying dragon alights on top of the silver box in front of me and blows a mouthful of flames onto my hand with spectacular flair. It is a good dragon, and the flames feel sufficiently fiery. Still, it is that tiny deer that leaves the fondest impression, and leaves me feeling blown away at the sensitivity and fidelity of the AxonVR haptic technology.

Somehow, this mystery box is able to replicate tactile and thermal sensations, with force feedback. It did it all at once, and very compellingly. I spent most of the demo giggling like a little kid. The resolution of the tactile sensations was spectacular, from the edges of a cube, to the roundness of an apple, to the tiny prancing hooves of the deer. And when the deer tucked its little hooves underneath itself and lay down, I could feel how warm and soft its belly was. The warmth was immediate, just like you’d expect, and it was the same with the dragon blowing fire, or the ice cube sliding around my palm: there wasn’t any noticeable latency, and the thermal sensations were powerful enough to be completely convincing. Ice felt like ice feels, and fire felt like you’d imagine fire feeling if you could hold it without actually burning yourself.

Saying that AxonVR has come up with a convincing simulation would be underselling what the HaptX experience is like. Other haptic systems we’ve tried require some (or occasionally lots of) imagination to transform a generalized touch sensation into something that matches what you’re seeing inside of your VR headset. With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary: or rather, I can’t imagine how anything short of a full-on Star Trek holodeck would be able to make a tiny deer that feels any more real than the one that napped on my palm last night.

We’re as curious as you probably are about what, exactly, is inside that enormous box. Personally, I think it’s suspicious that the box is just large enough to contain an apple, an ice cube, a very large spider, a very small deer, and a dragon, because that would explain how it works as well as it does. AxonVR mentioned something about “an array of microfluidic actuators,” but we’ve been promised more details once things calm down a bit after CES.

While what we tried out last night is certainly futuristic all by itself, AxonVR’s website promises even more, including an entire haptic exoskeleton that allows you to feel force over your entire body. This video from last April shows a bonkers concept image of what their final system might look like, along with a general description of the tech and some behind-the-scenes footage of the hardware itself:

AxonVR closed a $5.8 million seed round of funding last month, which is a lot, although not a surprising amount assuming they let prospective investors spend five minutes with a hand in that magic box. The company is looking at several different potential markets, including design and manufacturing, training and simulation, and location-based entertainment. That last one is where you’re most likely to be able to experience this for yourself: AxonVR’s press materials suggest that the company is not imagining you would have their system in your home, but more likely, you’ll find it at arcades or theme parks. There’s no timeline for any of this quite yet, but AxonVR is working on partnerships, and we’re expecting additional announcements later this year.

We can’t wait.

A Sony Android smartphone, a Mi Mix smartphone, and an iPhone placed side by side, showing the larger screen of the Mi Mix

CES 2017: Little-Known Elliptic Labs Could Reshape the Smartphone Industry

This year at CES, a little-known company with no booth or speaker slot is quietly talking about a technology that could bring about one of the most visible changes to smartphone design of 2017. It has developed software that would let manufacturers remove a common component and so create almost-edgeless smartphone screens that run up to the very top of the device. And in the past three months, this company has been contacted by every major smartphone manufacturer in the world.

Let’s back up for a moment. On your smartphone right now, there’s probably a little dot or narrow sliver right above your screen that’s a proximity sensor. When you make a call and place the phone to your ear, the screen turns off to save power and prevent you from accidentally hitting buttons with your cheek. It does this by emitting infrared waves and then measuring their reflections to determine how close the phone is to your head.
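The decision the paragraph describes boils down to a threshold test. Here is a toy sketch of that logic; the function name, the 0-to-1 intensity scale, and the cutoff are all illustrative, not taken from any real handset's firmware:

```python
# Toy model of IR proximity-sensor logic: the sensor emits infrared
# light and measures the reflected intensity, which grows as the
# phone approaches the user's cheek. The scale and threshold here
# are illustrative values, not from a real datasheet.
def screen_should_turn_off(reflected_intensity, threshold=0.6):
    """True when the reflection is strong enough to imply the phone
    is held against the user's ear."""
    return reflected_intensity >= threshold

print(screen_should_turn_off(0.1))  # held away from the ear -> False
print(screen_should_turn_off(0.9))  # pressed to the ear -> True
```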

The original iPhone, released in 2007, was the first smartphone to use a proximity sensor. Since then, the proximity sensor has become a standard mobile component. It’s typically housed in a rectangular bit of casing that takes up about half a centimeter of real estate across the top of the screen.

But this staple of smartphone design may be on its way out. In October 2016, Xiaomi announced the Mi MIX, which features a display that runs right up to the top edge of the device, with no proximity sensor in sight. Reviewers raved about the almost-edgeless display and applauded Xiaomi’s ingenuity. Some even hinted that Apple and Samsung would be taking cues from the Mi MIX for their highly anticipated releases of the iPhone 8 and Galaxy S8 in 2017.

Though Xiaomi got the credit, the company behind that almost-edgeless display was Elliptic Labs. And according to an Elliptic representative who spoke with IEEE Spectrum here at CES in Las Vegas, consumers should expect to see many more almost-edgeless smartphone screens debut this year, all inspired by the Mi MIX design.

“Ever since this phone has been released, all the mobile OEMs [original equipment manufacturers] have been contacting us,” said Angelo Assimakopoulos, VP of sales and business development for Elliptic Labs. “Without naming names, I can tell you almost everyone is going this route.”

Elliptic Labs sells software that generates ultrasound pulses from a smartphone’s speaker and measures them using its microphone, as an alternative method to proximity detection. This technique allows manufacturers to get rid of traditional proximity sensors altogether, and so stretch the display all the way to the phone’s upper edge. Ultrasound may also prove more reliable for consumers, since proximity sensors can be affected by smudges or bright light.

The pulses that Elliptic generates are between 30 and 40 kilohertz, well above the 20-kHz limit of human hearing. Assimakopoulos says most smartphone speakers can generate pulses at these frequencies, though the company finds that MEMS speakers work best for generating them.
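Elliptic hasn't published its algorithm, but the physics underneath ultrasonic proximity detection is ordinary echo ranging: time how long a pulse takes to bounce back, then halve the distance it traveled. A minimal sketch (names and numbers are our illustration, assuming sound travels at roughly 343 m/s in room-temperature air):

```python
# Echo ranging: a pulse travels out to the reflector and back, so
# the one-way distance is speed times half the round-trip time.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_distance_m(round_trip_s):
    """Distance to the reflecting surface, given the pulse's
    out-and-back travel time."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# A head 5 cm from the earpiece returns an echo in about 0.3 ms:
rtt = 2 * 0.05 / SPEED_OF_SOUND
print(f"{echo_distance_m(rtt) * 100:.1f} cm")  # -> 5.0 cm
```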

The company, which has about 30 employees and operations in the United States, Norway, and China, has worked on its software for almost 10 years. Now, they think conditions are favorable to roll it out. Consumers are watching more videos and playing more games on their smartphones than ever before. In response, manufacturers are producing phones with larger screens and maximizing every square millimeter of space. For example, Samsung has increasingly displayed content along the sides of screens. An obvious next step may be to expand the screen from the phone’s top to bottom, with Elliptic’s help.

Elliptic’s Assimakopoulos estimates that by using the company’s software, smartphone manufacturers could increase screen area to 90 to 95 percent of the phone’s face, up from the standard 75 to 80 percent on most smartphones today. The Mi MIX has a screen-to-body ratio of 91.3 percent.
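That estimate is simple area arithmetic. A sketch with made-up handset dimensions (they are not the Mi MIX's actual measurements):

```python
# Screen-to-body ratio: the display's area as a fraction of the
# phone's whole front face. Dimensions below are invented for
# illustration.
def screen_to_body_ratio(screen_w_mm, screen_h_mm, body_w_mm, body_h_mm):
    """Screen area as a percentage of the phone's front face."""
    return 100.0 * (screen_w_mm * screen_h_mm) / (body_w_mm * body_h_mm)

# A 145 x 78 mm display on a 160 x 80 mm face:
print(f"{screen_to_body_ratio(145, 78, 160, 80):.1f}%")  # -> 88.4%
```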

Elliptic’s technology alone won’t allow manufacturers to create entirely edgeless (or, as they’re known in the industry, “bezel-free”) designs, though. Aside from the proximity sensor, several other components such as the home button, camera, and speaker still live on the face of most smartphones.

It’s hard to say whether Elliptic’s technology would lower the cost of manufacturing a smartphone, or add to it. On one hand, it replaces the proximity sensor with software that uses existing hardware. This also frees up some space inside the phone, since the proximity sensor, which itself measures approximately 3 by 2 millimeters, can be removed. On the other hand, manufacturers must pay to license the software, and installing a larger liquid-crystal display adds to a phone’s cost.

Now that Elliptic is drawing attention from manufacturers, Assimakopoulos hopes to work with as many partners as possible by licensing the company’s patented software. He declined to comment on whether Elliptic might also be a candidate for acquisition, which would allow one company to keep the system from competitors. “We’re talking to several OEMs,” he said. “I will tell you that all OEMs have approached us inquiring about our technology since this phone was released.”

Though Assimakopoulos was hush-hush on the details of his conversations with manufacturers, he’s very confident that you’ll be seeing more almost-edgeless smartphone screens very soon. “There will be another phone that shows up this year with this technology,” he said.   

A rounded white robot that looks like it has a giant eye with a black pupil and white iris

CES 2017: Why Every Social Robot at CES Looks Alike

In the middle of all of the autonomous car promises, slightly thinner and brighter televisions, and appliances that spy on you in as many different ways as they possibly can was a small handful of social robots. These are robots designed to interact with you at home.

People responding to IEEE Spectrum’s live Twitter feeds as we covered each announcement pointed out that these little white social home robots all look kinda similar to each other, and they also look kinda similar to that little white social home robot that managed to raise $3.7 million on Indiegogo in September of 2014: Jibo.

To show what we’re talking about (if you haven't been following along with our CES coverage, and you totally should be), here are three new social home robots (Kuri, Mykie, and Hub) that were announced Wednesday, along with Jibo for comparison.

White. Curvy and smooth. Big heads on small bodies. An eye or eyes, but no ears or mouth, and no arms. A lot of design similarities with what is arguably the very first social home robot to (promise to) be commercially available (eventually):

The question, though, is why exactly these smooth, roundish, curvy, big-headed white robots all look the way they do. Why do they look a bit like Jibo, and why does Jibo look the way it does?

“We designed a very clean, modern looking robot that’s friendly,” Jibo’s VP of Marketing Nancy Dussault-Smith told me yesterday. “I can understand why people want to have that kind of thing in their homes.”

Kaijen Hsiao and Sarah Osentoski, Mayfield Robotics’ COO and CTO, told us something very similar about their robot, Kuri: “People are very picky about what goes in their homes,” says Hsiao. “It’s very hard to build something that matches everyone's decor, and the closest you can come is very minimalist and white. Also, if you want to hide sensors, windows that are transparent to IR are generally black, which is why you see robots with so much black.”

The robots all tend to be smooth and curvy not just because it’s pleasing to the eye (conveying softness with organic and symmetrical shapes), but also because it’s safer, especially with a robot that moves or that you’re supposed to interact with by touch. And round heads are the easiest to move up and down and rotate while also concealing the mechanical joints and electronics inside.

The specific proportion between the head and the body was, for Jibo, a very carefully thought out design decision, said Dussault-Smith. Jibo’s head is oversized because it’s intended to be somewhat reminiscent of the cuteness of baby animals (humans included), which have disproportionately large heads. For Kuri, practical issues also came into play: the robot needed to be a certain height in order to provide a decent view of your home through its eye camera, which helped define the size of the head and the base needed to keep the robot stable.

Jibo and Kuri also have substantially different philosophies when it comes to eyes. “Our original idea was to have a small screen that had eyes, and we were doing all of these crazy things to try to hide the rest of the screen,” Osentoski told us. “We had decided early on character wise that if you show anything but the eyes on the screen, you destroy the character, because it’s not a face anymore,” continued Hsiao. “Finally, I said, ‘if we only want the screen to show eyes, why don't we just make physical eyes?’”

Meanwhile, “Jibo’s one eye was a very deliberate choice,” said Dussault-Smith. “Two eyes caught you a little in the uncanny valley; it felt a little too real. One eye was still able to have that communication, but without as much of the intimidation of it being like a person.” And Jibo, of course, has a screen that can display all kinds of other information as well.

The struggle to keep robots from being unconsciously anthropomorphized and then failing to live up to human-like expectations is another major driver of social robot design. This is where much of the minimalism comes from: avoiding human features as much as possible, especially around the robot’s face, helps to prevent users from feeling that the robot they’re interacting with should respond the way a human would. At this point, robots that try too hard to seem human can only disappoint.

There are some very good reasons why the robots that people like and are comfortable with tend to share design characteristics. Being white helps them fit in with other decor. Being smooth and round helps them be safe. Minimalist faces help keep expectations in check, while round heads are the simplest to build. We’re going to see a lot more robots like this, especially if Kuri, Mykie, Hub, and Jibo turn out to be commercially successful. More interesting than focusing on how similar they are, though, is looking at why they’re different, and what those differences mean about how those robots will interact with us.

Fundamentally, as Jibo’s Nancy Dussault-Smith points out, “what really differentiates robots is what's on the inside.” Even if all of these social home robots really did look exactly the same, they're intended to do different things in different ways. Maybe some will be more successful than others, or maybe they'll all find their own niches: none of them are for sale yet, so it's much too early to tell, but we're definitely looking forward to finding out.

A hand flexing between its fingers a clear plastic sheet with a dark rectangle in the center

CES 2017: Panasonic Shows Off a Bendable Lithium-ion Battery for IoT, Wearables

Most wireless electronic devices look the way they do because of their batteries. Batteries are bulky and rigid components that can’t easily be modified or eliminated. So when product developers design a new device, the battery is often the limiting factor in determining size, shape, and flexibility.

But this year at CES in Las Vegas, Panasonic is showing off a new type of lithium-ion battery that could change that. The razor-thin silver wafer can be twisted or bent 1,000 times and still maintain 80 percent of its capacity. Panasonic presented three versions of the new battery at CES: Each is slightly smaller than a credit card and bendy enough to fit around a soda can’s contours. 

The project has been under way since 2008, but Panasonic just started talking about it in September. Yoriko Yagi, assistant chief of planning in Panasonic’s wearable energy department, said the battery is now ready for mass production, which she expects will begin sometime between April 2018 and March 2019.

“Our small devices are increasing, like wearables and IoT devices, and the product design depends on battery size,” Yagi said. “If we want small devices, we need small batteries.”

Panasonic’s bendable battery, which is just 0.45 millimeter thick, is relatively low capacity. The largest version, known as CG-064065, has a maximum capacity of 60 milliampere hours (mAh), and the smallest comes in at 17.5 mAh. For comparison, the largest smartphone batteries boast around 3,500 mAh.

That means the new battery is best suited for wearables, cardlike devices, and Internet of Things applications. The company began providing samples to potential clients in October, but it has not yet publicly stated a price for the battery.

In the future it may be possible to scale up its design to create a flexible smartphone or tablet. “Basically, we can do that, but we’re not targeting on that product,” Yagi says. Lest we forget, high-capacity lithium-ion batteries, like the ones found in smartphones, are prone to fires and explosions (RIP, Galaxy Note 7). Safety concerns have led Panasonic to focus on wearables and IoT devices and their low energy needs, for now.

Plenty of other companies and researchers are working on flexible lithium-ion batteries, but Yagi said Panasonic’s version offers the best performance and most rigorous testing record to date.

To create it, the company had to rethink some classic elements of battery design. Lithium-ion batteries are made up of a cathode—a positively charged electrode, made of a lithium oxide compound—as well as an anode, which is a negatively charged electrode, made of graphite. These two layers are separated by an electrolyte, a liquid or gel substance full of ions. As a battery is charged, lithium ions pass through this solution and are stored in the anode. When the battery is powering a circuit, the ions pass back through the electrolyte to the cathode.

Most of the time in lithium-ion batteries, the individual cells where all of this happens are cylindrical, with the anode and cathode layers wrapped around each other. Bending or twisting the cylinder forces the outermost layer to move farther than the innermost layer. As a result, the electrodes lose their alignment, and the battery’s capacity erodes over time.
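The misalignment is easy to quantify. When two layers separated by a distance t bend through an angle θ, the outer layer covers an arc of (r + t)·θ while the inner covers r·θ, so the slip between them is t·θ, independent of the bend radius. A quick sketch (the 0.5-mm separation is an arbitrary illustration, not a real cell's layer spacing):

```python
import math

# Arc-length mismatch between two stacked layers bent through an
# angle: outer arc minus inner arc = separation * angle (radians),
# regardless of the bend radius.
def layer_slip_mm(separation_mm, bend_angle_deg):
    return separation_mm * math.radians(bend_angle_deg)

# Layers 0.5 mm apart bent through a right angle slide ~0.8 mm past
# each other -- plenty to misalign tightly wound electrodes.
print(f"{layer_slip_mm(0.5, 90):.1f} mm")  # -> 0.8 mm
```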

To make a flexible version, Panasonic researchers decided to ditch the cylinder approach and stack the electrodes right on top of each other in a slim rectangular wafer, rather than wrapping them around one another. Then they enclosed the battery in a proprietary flexible casing made of a secret aluminum compound.

How to charge the battery was another challenge. The battery must be recharged wirelessly because Panasonic didn’t want to make room for, say, a bulky USB port on its otherwise slim form. Unfortunately, most wireless chargers on the market deliver so much power that they would overwhelm the tiny device, which can handle a maximum charging current of just 60 milliamperes.

In the end, Panasonic built its own wireless charging stations, which it will sell along with the battery to customers. Yagi said a single charge may last for about four weeks if the battery is used for a relatively simple application such as counting steps in a fitness monitor. With CG-064065 and its siblings poised for mass production, Yagi is looking forward to exploring that possibility, and many more. “I think there’s a lot of potential in the future for this kind of battery,” she said.
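The article's figures are easy to sanity-check. At the stated 60-mA charging ceiling, a full charge of the 60-mAh cell takes at least an hour, and a charge that lasts four weeks implies an average draw of well under a tenth of a milliamp (the arithmetic below is ours; the inputs come from the text):

```python
# Sanity-checking the battery figures for the largest (60-mAh)
# version, CG-064065. Inputs are from the article; arithmetic is ours.
capacity_mah = 60.0      # maximum capacity
max_charge_ma = 60.0     # maximum charging current

# A full wireless charge at the current ceiling takes at least:
charge_hours = capacity_mah / max_charge_ma
print(charge_hours)      # -> 1.0 (hours, ignoring charging losses)

# "A single charge may last for about four weeks" implies an average
# current draw of roughly:
runtime_hours = 4 * 7 * 24                    # 672 hours
avg_draw_ma = capacity_mah / runtime_hours
print(f"{avg_draw_ma * 1000:.0f} microamps")  # -> 89 microamps
```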

Editor’s Note: This article has been updated to reflect a corrected thickness of the battery and date range for when the battery will enter mass production by Panasonic.

A man with a goatee holds a thick pair of opaque, rounded, glasses up to his eyes

CES 2017: The Year of Voice Recognition

The beeping, flashing, pulsating glory of the world’s largest consumer electronics trade show has returned to Las Vegas. The first batch of new products and services went on display at CES on Tuesday, and startups and industry giants will debut more gadgets and technologies throughout the week.

Just a few of the curious wares spotted by IEEE Spectrum editors last night include a battery-powered scarf that filters air pollution, a hairbrush that uses sound waves to analyze dryness and frizz, a smart cane that detects falls, and a connected cat feeder that avoids overfeeding by recognizing felines via their implanted microchips. Also, a US $120 camera that lets you stare at the inside of your refrigerator, should you ever choose to do that (assuming the milk isn’t blocking the view).

Major technology companies have also begun to make their announcements about new products they will launch in 2017. Qualcomm released its newest chip, the Snapdragon 835, which, rumor has it, could turn up in Samsung Galaxy S8 smartphones later this year. Huawei said its newest Honor smartphone, called the 6X, which boasts a battery life of 2.1 days and costs only $250, is now available in the United States. And Faraday Future unveiled its long-awaited self-parking FF 91 electric car, which integrates more than 30 sensors including cameras and a retractable lidar system to navigate into a parking space all on its own.

Looking at deeper trends, several experts said the most meaningful long-term developments will come from the companies chipping away at voice recognition. Once we master it, they believe, voice-recognition capabilities will fundamentally change the way we interact with and build electronics.

This was a strong element of Tuesday’s analysis of the global consumer market by Shawn DuBravac, chief economist, and Steve Koenig, senior director for market research, of the Consumer Technology Association (CTA), which runs CES.  In DuBravac’s opinion, voice-recognition technology has improved enough in the past few years that it is now poised to usher in an era of so-called faceless computing.

In particular, the word error rate for voice-recognition systems dropped from 43 percent in 1995 to just 6.3 percent this year, and is now on par with humans. “We have seen more progress in this technology in the last 30 months than we saw in the first 30 years,” DuBravac said. Ronan de Renesse, a consumer technology analyst for the business intelligence firm Ovum who was also attending CES, told me he was watching a startup called Voicebox, which has worked on voice recognition for partners including Samsung, AT&T, and Toyota.

In addition to redefining traditional computer interfaces, voice recognition could improve a host of products that are already on the market. CTA estimates total sales of voice-activated digital assistants such as Google Home or Amazon Echo at about 5 million units to date, and expects that to double to 10 million in 2017. With all of these products, clarity and functionality are key. DuBravac figures there are currently about 1,500 apps (called “skills” in Amazon-speak) that can interact with Alexa, Amazon’s voice-activated personality, and says he would not be surprised to see 700 new ones announced just this year at CES.

Aside from voice recognition, de Renesse also thinks that virtual reality and augmented reality will be “at the forefront of CES” again this year. These technologies had a banner year in 2016 with the release of the HTC Vive and Oculus Rift headsets, but some have since complained that the technology isn’t catching on as quickly as they’d hoped. One reason could be that there’s still a profound lack of high-quality VR and AR content for those who do shell out $600 or more for a headset.

DuBravac says these criticisms are partly a symptom of too-high initial expectations for VR, and not necessarily a reflection of the technology itself. He’s still optimistic, however, because he sees companies investing in VR content. As for his assessment of the progress made in 2016: “If you had realistic expectations about what would happen and the deployment of hardware would look like, then I think you saw a market starting to take hold,” he says.

Anshel Sag, an analyst at Moor Insights & Strategy, is also frustrated by the proclamations that VR is struggling. Even though he doesn’t expect any major VR announcements at CES 2017, he says that’s because the product-release cycles of VR companies simply didn’t sync up with CES this year. But he cautions against reading too much into this.

Nonetheless, several headset manufacturers and content developers are planning to put their best foot forward at the show. Samsung will continue to push mobile VR, which operates on less expensive headsets, such as the $60 Samsung Gear VR, that allow you to insert your smartphone to stream VR. Sag has also been impressed by a company called ODG, which is working on a pair of heavy-duty eyeglasses that convert from viewing in AR to VR.

Funnily enough, CES might also be at least partly to blame for VR criticisms. Every year at the show, analysts and journalists try to predict the new fads and hottest products that will redefine consumer technology as we know it. Too often, they are surprised when those trends fail to materialize or reach the adoption rates they had expected.

The truth is that breakout tech stars are a relatively rare sight, even at CES. In fact, the vast majority of global consumer tech spending—80 percent—goes toward just seven types of products. The CTA’s Steve Koenig calls those technologies the “magnificent seven”: smartphones, laptops, tablets, desktops, digital cameras, TVs, and smartwatches (a recent addition, as the Apple Watch outpaced the iPhone in first-year sales).

On their own, smartphones account for a staggering 47 percent of global consumer spending on technology and remain the “center of the consumer tech universe,” as Koenig puts it, with their own ecosystem of apps and services. But to be fair, they were also first released more than a decade ago.

Rather than looking at everything through the lens of mass adoption, DuBravac says the market for most tech products is actually very fragmented. As an example, he points to the wearables market and the $125 VERTbelt for athletes that measures their jumps during practice and games. In his assessment, lots of startups will offer products for a specific use and find plenty of customers without ever reaching mass adoption. And that’s fine, too.

Image of small cube satellites in space courtesy of NASA

The Tricks to Launching 100 Satellites on One Rocket

Update: This story was updated on 4 January, when ISRO increased its launch count from 83 to 103 and moved the launch date into February.

As of November, a total of 564 nanosatellites have been launched into space. In February, the Indian Space Research Organisation aims to launch 103 satellites on a single rocket—reportedly a world record. The same month, U.S. startup Spaceflight Industries plans to send up a module designed to support the launch of up to 87 satellites.

Neither the Indian Space Research Organisation (ISRO) nor its commercial arm, Antrix Corporation, responded to requests for comment. But Spaceflight Industries senior mission manager Adam Hadaller described putting together launch missions for large numbers of small satellites as “herding cats…. It’s very hard.”

Once you get them in space, nano, cube, and other small-scale satellites have several applications—from monitoring weather to helping farmers decide where to water or fertilize crops—all at a significantly lower price than traditional-scale satellites. Several startups and space agencies, such as ISRO and Spaceflight Industries, are working to launch more and more of them at the same time, further reducing costs.

Launch: The first challenge begins before launch, Hadaller says. Satellites can come from different countries, and it’s necessary to check all the various safety regulations, communication licenses, and technical requirements. The different separation systems, for example, need compatible adapters. 

Then there is a choice to make: Piggyback the satellites as secondary payload on a rocket that’s already heading to space, or mount a dedicated mission? Piggybacking satellites don’t get much choice in their orbits, however, which limits the variety of possible scientific experiments.

A SpaceX Falcon 9 rocket is set to launch a Spaceflight Industries module in February called Sherpa—containing small satellites—as secondary payload. In the mission, Falcon 9 will launch its primary payload and then deploy Sherpa after an orbital maneuver. Half an hour later, Sherpa will release its satellites.

Hadaller says that in the case of the Sherpa mission, the main limitation is not the module itself but interest: As of 12 December, only 33 satellites were on the manifest of the 87-satellite vehicle.

If a piggyback ride won’t do, a dedicated launch can provide better orbital options. For example, on 12 December, Orbital ATK launched a Pegasus rocket carrying eight small satellites designed to monitor hurricane development in the tropics. The satellites deployed at a 510-kilometer altitude at an inclination of 35 degrees; over time, they spread out over the entire orbit. Their inclination gives full coverage of the tropics.

Communications: Usually, satellite owners communicate with their satellites over radio by pointing antennas on the ground at satellite locations. The better the aim, the stronger the signal, so satellite operators find their satellite’s location by using some combination of onboard GPS, trajectory estimation data, large telescope arrays, the JSpOC satellite tracker, or radio ranging.

But even if all the small satellites can be located and identified, radio interference can still be a problem. The frequencies they often use to communicate could become crowded by other satellites and by ground-based radios or cellphones, says Bruce Yost, who directs the NASA Small Spacecraft Systems Virtual Institute, a NASA outreach organization for small satellites.

He says one solution is to communicate at higher frequencies that are less likely to suffer interference, but this requires extra power. Another, less power-hungry solution researchers are considering is to transmit data from space to ground by laser—the drawback being that the optical link would need “even more accurate pointing” than radio communications.

Collision: Mass deployment also runs the risk of becoming a mass of space debris, some say. Spaceflight Industries says its team has not run an updated analysis of the exact probability of its satellites colliding with one another or another object in space, but Hadaller says it is “extremely low.” Also, all the tech meets international space community requirements meant to prevent debris, including deorbiting by the satellite’s 25th year. The Sherpa module itself will stay in orbit for 10 to 18 years and the satellites between 3 and 10 years, before they reenter Earth’s atmosphere and burn up.

Mike Safyan, director of Launch and Regulatory Affairs at Planet Labs, an Earth imaging company that makes small satellites, believes that demand for launching large numbers of small satellites at once is low for now, but “if the companies are successful, then we’ll see more of these kinds of large cluster launches.”

Yost says that there will be at least five NASA-sponsored, small, cube-shaped satellites called CubeSats on the upcoming Spaceflight Industries launch, which has been delayed from late 2016 to 2017.

“The capability of these CubeSats is really, really advancing quickly,” he says. Advancements in computer processors have made it possible to do “extensive” data processing and analysis directly on board a small satellite. Improvements in design and fabrication are also making them more robust, to better survive the harsh environment of space.

Jordi Puig-Suari, an aerospace engineer at California Polytechnic State University, in San Luis Obispo, helped design the original concept for CubeSats. “The timeline is one thing that we have to work on,” he says. “The satellites can be developed very quickly,” but getting them into space might not happen at the same speed.

But, he says, the benefits of mass deployment of small satellites are clear. “Having a larger number of lower-cost missions will allow us to go to a lot more places,” Puig-Suari says.

A blue and yellow logo spelling out CES

CES 2017: AR, VR, and IoT Will Be Hot, 3D Printing Not

This week sees the annual consumer technology extravaganza that is the CES 2017 show in Las Vegas. Once almost an afterthought, technologically speaking, consumer electronics have become increasingly important in driving the entire global tech industry. What products companies choose to bring to the show often represent an interesting tension between hard-nosed calculations and corporate wish fulfillment about the direction tech is expected to take in the coming months and years.

At CES 2017 we in the IEEE Consumer Electronics Society expect to see a reduced focus on drones compared to 2016. Drones haven't gone away, but there are few solid practical applications for most consumers. Still, small inexpensive drones could be a growth area as toys and hobby vehicles. Instead we expect to see a lot more focus on augmented reality (AR), virtual reality (VR), and home health. (And, of course, the occasional surprising and interesting product or announcement.)

There are many long- and short-form VR projects ongoing (both professional and amateur), helped by the availability of consumer versions of selfie-stick VR systems along with a variety of cameras. Social media sites and YouTube now offer 360-degree video support as a matter of course, also helping to drive adoption.

Wearables will be important, although the smart-watch market hasn't picked up as fast as many had hoped. These really need to find their killer applications (perhaps some AR application using phones and watches such as we’ve seen with Pokémon Go).

There will be an increase in Internet of Things (IoT) consumer applications (we look forward to seeing this year’s incarnation of the proverbial smart fridge) as well as cloud-based IoT offerings that provide services to consumers.

Wearable and cloud-based IoT services will also mean we’ll be seeing AI and machine-learning applications. These could be big enablers of new consumer services running on wearable devices as well as on household voice-activated products: Voice control will be a big theme at CES 2017, with new product introductions from Amazon, Google, and others. Machine intelligence will also make still and video images more useful, with increasing capabilities for image recognition. Large enterprise companies with strong machine-learning capabilities will be showing how data from connected intelligent consumer devices can enable new ways to reach customers and offer them additional services.

I would also expect that there will be a greater focus on security and privacy, with the proliferation of connected consumer devices and recent reports that some of these devices have been hijacked as bots in denial-of-service attacks. Greater security and anonymity for shared content will be important safeguards to make sure that consumers feel safe with their connected devices and services.

Turning to televisions, 4K TVs now have a standard that takes full advantage of their potential, including HDR (high-dynamic-range imaging) as well as their resolution and color capabilities. Coupled with decreasing prices, these TVs should see greater pickup by both leading-edge consumers and the higher end of mainstream consumers. Many consumers are already considering 4K TVs for their next replacement TV, so lower-cost 4K TVs will be a big presence at CES. In addition, UHD (ultrahigh-definition) streaming services will be present, as well as UHD Blu-ray disc players that will provide content for viewing on these displays. (Almost all new content is captured in at least 4K nowadays.)

On a smaller scale, there could be more maker-oriented items as well as craft projects including microbrewing (both coffee and beer) at CES 2017, although I don't expect to see the big 3D-printer displays we saw the last few years. However, unusual 3D printing (printed pancakes, anyone?) could be a sleeper hit at the 2017 show.

Finally, automobile technology will continue to play a big role at CES as more and more autonomous driving functions are included in new model cars. This will also include tying consumer applications into automobiles and mobile activities.

Toward the end of the CES show, on January 8, the 2017 IEEE ICCE Conference will have a focus on Virtual and Augmented Lifestyles. The ICCE conference focuses on consumer technologies that will be the hottest thing three years from today. As a teaser of what’s to come, new for this year are tracks organized with the IEEE Biometrics and RFID Councils, the IEEE Cloud Computing initiative, and the IEEE Society for the Social Implications of Technology.

About the Author: Tom Coughlin is an IEEE Senior Member and Chair of the IEEE Consumer Electronics Society (ICCE) Future Directions Committee.

Wearing nervous expressions, a slightly disheveled, but attractive, man and woman peer through a large circular hatch, at some glowing light source outside the frame of the picture

Meet Jon Spaihts, the Writer Behind the Movie Passengers

Just released in time for the holiday weekend in the United States is the science fiction movie Passengers, starring two of Hollywood’s most bankable stars, Jennifer Lawrence and Chris Pratt. The action takes place on board an interstellar colony ship on which all the crew and colonists have been placed in hibernation for the duration of the 120-year voyage—until a mishap wakes Pratt’s character up just 30 years into the trip.

IEEE Spectrum’s Stephen Cass talked with the screenwriter of Passengers, Jon Spaihts, about his inspiration for the movie and the process of bringing his ideas to life in a big Hollywood movie. (Mild spoilers below. The conversation has been edited for concision and clarity.)

Stephen Cass: One of the things I liked about the movie, from an engineering point of view, is the way that it depicts numerous seemingly unrelated minor system failures that herald an escalation towards catastrophic failure. This kind of cascading sequence pops up in real accident reports involving complex systems [PDF]. Were you aware of this pattern and decided to build a story around it, or did you have a specific storytelling problem and found the pattern offered a solution?

Jon Spaihts: I needed a technical crisis that would fit the profile of the story. Meaning that something very small needed to go wrong at the top of the story—leading to the awakening of our hero—but it ultimately needed to swell into a full-blown crescendo that would endanger the entire ship… And I needed something that could affect systems as disparate as the hibernation system and real ship-threatening systems like propulsion and the powerplant. That led me to think about the only real common thread those systems have, and that was computer control. The notion was that there was a kind of mainframe computer with a central processor core that would be called by systems all over the ship for processing tasks, and by terrible misfortune that core took a crippling hit, leaving every auxiliary and minor processor on the ship to take up the load, with everything running way past rated capacity for years on end until things started to fail, and then of course you get a rapid cascade.

SC: The other engineering aspect I liked is that Passengers is one of the very few screenplays dealing with interstellar travel that talks about the risk of running into debris at very high speeds between the stars, although scientists worry more about tiny dust or gas particles than the asteroid field that’s shown.

JS: (Laughs) Yeah, larger than I would have made them! They are Hollywood particles… It’s something that kind of flowed naturally from the investigation of the premise: If you’re going to make a 120-year journey at half the speed of light, then that really leads you to do a lot of math about the energies involved, the propulsion problem, and I looked a little bit at relativistic math, just to see if time dilation would substantially affect their experience, which at that speed it mostly doesn’t. It’s important for navigation and communication, but not terribly important for life span. But encountering even individual gas molecules at half the speed of light imparts tremendous energies—a potato-sized nickel-iron meteorite would really ruin your whole day! So there’s always going to be some sort of plough at the front of the ship to handle that. The notion [in Passengers] was something penetrating those defenses, which was supposed to be impossible.

SC: Nowadays, a hibernation pod has become a background trope. We’ve seen them in Planet of the Apes, we’ve seen them in Aliens

JS: Yeah, like artificial gravity and force fields. They are just things that the audience accepts.

SC: Right. So in 2016 it seems risky to try to make a trope like that the central conceit of a movie. Why did you decide to go back and try to make a hibernation story fresh again?

JS: I actually think it’s a great way in to any [creative] space that has been paralyzed by cliché. The first script I sold to Warner Brothers was a movie called Shadow 19 (that hasn’t been made), which came out of exactly that kind of thought process. I was complaining to my brother about things that frustrated me about the Star Trek universe. And one of those was what felt to me like a failure of imagination about the ramifications of something as mind blowing as the transporter, or the phaser that made people disappear… The ship had ten God-like technologies that they never thought about! … I said, let’s talk about what the transporters are doing. Are they annihilating the original person and killing him, and then creating a perfect simulacrum over there? Doesn’t that raise a host of moral and philosophical issues? Are they buffering that information in some way—could they mass produce that guy at the other end? Just unpacking the trope led to a startling new story. Passengers results from unpacking the trope of the hypersleep pod that everybody just accepts in a science fiction starship. We say: wait, let’s talk about the ramifications of this and what it means for everyone. It gets very interesting. I think that’s often the best approach to cliché, to run straight at it and unpack it and make everybody look at the pieces.

SC: Another thing that modern audiences have gotten used to seeing is a “heavy industrial” look to movie spaceships.

JS: Yes, raw metal surfaces, unfinished steel!

SC: And Passengers is set on this beautiful spacecraft that looks like a high-end cruise liner with spiraling habitation rings. How much of that was in the original script?

JS: The rotating helix design was definitely the work of Guy Dyas, our production designer, who did an extraordinary job with all the spaces in the ship. The quality of the interiors was very much called out in the script, it was meant to evoke a luxury cruise ship of the future, with spaces of different character. There are a lot of modern concourses, which are the identity of the ship as it’s [designed for carrying passengers]. There are nostalgic spaces which are designed to call back to styles on Earth—you have a French restaurant, Italian restaurant, Mexican restaurant. And then the service compartments of the ship, which are for crew, are much more no-nonsense and utilitarian.

SC: Does that idea of different spaces in the movie also apply to the space suits? The suits the characters rely on don’t have a lot of features found on spacesuits today, such as maneuvering packs.

JS: Yes, those are recreational spacesuits, designed for safety and sightseeing. Somewhere else is the radiation-hardened, thruster-enabled, heavy-duty worksuit. But because the heroes don’t have access to the crew spaces, they haven’t found them!

Quicker Camera Chips Coming

If you want to capture a super-slo-mo film of the nanosecond dynamics of a bullet impact, or see a football replay in fanatical detail and rich color, researchers are working on an image sensor for you. Last week at the IEEE International Electron Devices Meeting in San Francisco, two groups reported CMOS image sensors that rely on new ways of integrating pixels and memory cells to improve speed and image quality.

Both groups are working on improving global-shutter image sensors. CMOS image sensors usually use what’s called a rolling shutter. Rolling-shutter cameras scan across a scene row by row—that is, different rows of each frame are captured at slightly different moments. This makes them speedier, but it can cause distortion, especially when filming a fast-moving target like a car or a bullet. Global shutters are better for filming speeding objects because they snap the entire scene at once. CMOS sensors aren’t naturally suited to this, because their pixels are usually read out row by row. CCD image sensors, on the other hand, have a global shutter by definition, because all the pixels are read out at once, says Rihito Kuroda, an engineer at Tohoku University in Sendai, Japan. But they’re not ideal for high-speed imaging either: Because of their high-voltage operation, CCDs heat up and use a lot of power at high shutter speeds.
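To make the distortion concrete, here is a toy sketch (our illustration, not from either research group): a vertical bar moves one pixel to the right per row-readout interval. A global shutter samples every row at the same instant and records a straight bar; a rolling shutter samples each row a beat later than the last and records a diagonal.

```python
# Toy model of rolling- vs. global-shutter capture of a moving object.
# A vertical bar moves right by SPEED pixels per row-readout interval.

WIDTH, HEIGHT = 12, 6
SPEED = 1

def bar_position(t):
    """Horizontal position of the bar at time t (in row-readout intervals)."""
    return 2 + SPEED * t

def capture(rolling):
    """Render one frame; a rolling shutter samples row r at time t = r."""
    frame = []
    for row in range(HEIGHT):
        t = row if rolling else 0   # global shutter: every row at t = 0
        x = bar_position(t)
        frame.append("".join("#" if col == x else "." for col in range(WIDTH)))
    return frame

print("Global shutter (straight bar):")
print("\n".join(capture(rolling=False)))
print("Rolling shutter (bar skewed into a diagonal):")
print("\n".join(capture(rolling=True)))
```

In the rolling-shutter frame, row r shows the bar at column 2 + r; that skew is exactly the distortion that smears photos of propellers and speeding cars.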

To get beyond the row-by-row, rolling shutter operation of CMOS, chip designers assign each pixel its own memory cell or cells. That provides a global shutter but with sacrifices. In the case of ultrahigh speed imaging, the sensors are constrained by their memory capacity, says Kuroda. By focusing on the design of a custom memory bank, Kuroda’s group has developed a CMOS image sensor that can take one million frames per second for a relatively long recording time—480 microseconds at full resolution—compared to previous ultrahigh speed image sensors.

Because storage is limited, it’s not possible to take a long, high-speed, high-resolution video—something must be sacrificed. Either the video has to be short, capturing only part of a high-speed phenomenon in great detail, or it must have lower spatial or temporal resolution. So Kuroda’s group focused on boosting storage in the hope of easing all three constraints.

Kuroda’s group made a partial test chip with 96 by 128 pixels; the image sensor is designed to be tiled to reach a million or more pixels. Each pixel in the prototype has 480 memory cells dedicated to it, so the camera can record 480 full-resolution frames. Other sensors have captured video at higher frame rates, but they’ve had to do it either for a shorter period of time or with poorer spatial resolution.
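The recording-time figure follows directly from that memory budget. Assuming, as the article describes, that each analog cell stores one frame’s worth of charge, 480 cells per pixel buy 480 frames, and at one million frames per second that is 480 microseconds. A quick sanity check of our arithmetic:

```python
# Back-of-the-envelope check of the Tohoku sensor's quoted figures.
FRAME_RATE = 1_000_000   # frames per second (1 Mfps)
CELLS_PER_PIXEL = 480    # analog memory cells dedicated to each pixel

frames_recordable = CELLS_PER_PIXEL        # one cell holds one frame's charge
recording_time_s = frames_recordable / FRAME_RATE

print(f"{frames_recordable} frames -> {recording_time_s * 1e6:.0f} microseconds")
# prints: 480 frames -> 480 microseconds
```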

The Tohoku group designed a dense analog memory bank based on vertical capacitors built inside deep trenches in the silicon chip. Because the capacitors hold a variable amount of charge, rather than a simple 0 or 1 as in DRAM, lowering the amount of current that leaks out is critical, says Kuroda. The deeper the trenches, they found, the greater the volume of each capacitor and the lower the leakage current. Increasing volume with trenches rather than by spreading out over the chip saved space and allowed for greater density of memory cells. This meant more memory cells per pixel, which allowed for longer recordings. It also freed up space to put more pixels on the chip, improving the camera’s resolution.

Some of Kuroda’s earlier CMOS image sensor chips, which used planar rather than trenched capacitors, are already on the market in ultrahigh speed cameras (HPV X and X2 models) made by Shimadzu. He says the new million frame per second sensor will further improve products like them. To push things even further, Kuroda says the next step is to stack the pixel layer on top of the memory layer. This will bring each pixel closer to its memory cells, shortening the time it takes to record each frame and potentially speeding up the sensor even more.

This sort of camera is useful for engineers who need to follow the fine details of how materials fail—for example how a carbon fiber splits—in order to make them more resilient. Physicists can use them too, for example, to study the dynamics of plasma formation.

Separately, researchers from Canon’s device technology development headquarters in Kanagawa, Japan, reported memory-related improvements for high-definition image sensors that could be used to cover sporting events or in surveillance drones. While the Tohoku group is working on ultrahigh speed, the Canon group aims to improve the image quality of high-definition global shutter cameras operating at much lower frame rates of about 30 to 120 per second.

Like the Tohoku University chip, the Canon sensor closely integrates analog memory with its pixels: Each pixel in the 4046 by 2496 array has its own built-in charge-based memory cell. The researchers used an engineering trick to improve the image quality by effectively increasing the exposure time within each frame. Typically, the image sensor dumps its bucket of electrons into the memory cell once per frame. This transfer is called an accumulation. The Canon pixels can do as many as four accumulations per frame, emptying their charges into the associated memory cell four times. This improves the saturation and dynamic range of the images relative to previous global-shutter CMOS devices operating around the same frame rates. At 30 frames per second, the sensor maintains a dynamic range of 92 dB.
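For readers unfamiliar with the unit, the dynamic range of an image sensor is conventionally 20 times the base-10 logarithm of the ratio between the largest and smallest resolvable signal. The electron counts below are hypothetical (the article does not give Canon’s actual well capacity or noise floor); they are chosen only to show roughly what signal ratio a 92 dB figure implies.

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Conventional charge/amplitude dynamic range: 20 * log10(max / min)."""
    return 20 * math.log10(max_signal / min_signal)

# Hypothetical numbers (not from the article): a well capacity of about
# 40,000 electrons against a 1-electron noise floor gives roughly 92 dB.
full_well = 40_000
noise_floor = 1

print(f"{dynamic_range_db(full_well, noise_floor):.1f} dB")  # prints: 92.0 dB
```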

This story was corrected on 19 December 2016. It is not certain Shimadzu will incorporate the current research into a product.


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
