Tech Talk

An illustration of a small two-camera rover on the moon

5 Teams Move Forward in Google Lunar XPrize Moon Race

Five teams have advanced to the last stage of the Google Lunar XPrize, prize administrators announced today. This year, the groups will race to ready their spacecraft for missions to land on the moon, move at least 500 meters across the surface, and transmit images and high-definition video back to Earth. The first to do so will claim the top prize: US $20 million.

Read More
A close-up of a chip-sized terahertz modulator shows the path a wave travels through it.

Tiny Terahertz Modulator Packs a Big Data Punch

Researchers covet terahertz waves for their ability to deliver data wirelessly at rates as high as 100 gigabits per second. That’s an unbelievably fast rate to achieve over the air, especially when you consider that a) the average U.S. broadband speed is 55 megabits per second, and b) that broadband service is piped into homes on fiber-optic cables. For decades, academics have tried to develop terahertz-capable components to bring these exceedingly fast speeds to all of us.

Last week, researchers at Tufts University showed off a new terahertz modulator that is the first such device to fit on a chip. Like the modulator in any wireless device, it can alter the amplitude of a wave during transmission to encode data. But this new modulator can do so at the extremely fast switching speeds required for waves in the terahertz band.

The Tufts group, led by Sameer Sonkusale, a computer engineering professor, notes that larger terahertz modulators have been shown to work in open space, with a mechanical chopper that simply blocks a terahertz wave or permits it to pass to indicate an “off” or “on” status. But device manufacturers would need a miniaturized, on-chip version to bring ultra-fast terahertz service to future smartphones and tablets.

On the electromagnetic spectrum, terahertz waves fall between the microwaves used by electronic devices and the optical waves that make up visible light. Their frequencies are far above those of the signals used for cellular phones (which in the U.S. include 800 megahertz and 1.9 gigahertz), and for WiFi (which commonly operates at 2.4 GHz). These lower, everyday frequencies are defined as “ultra high frequency” by the International Telecommunication Union. Lately, carriers have also been eyeing “extremely high frequency” millimeter waves, which fall roughly between 30 and 300 GHz, for their potential to bring data rates approaching 1 Gb/s to future 5G networks.

Research into the terahertz band, whose waves are sometimes classified as far infrared, aims to move us even further along the electromagnetic spectrum. The terahertz band spans frequencies from 300 gigahertz to 3 THz. For their first demo, the Tufts researchers broadcast between 0.22 THz (220 GHz) and 0.325 THz (325 GHz), but say the modulator is capable of operating at frequencies up to 1 THz.
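As a quick reference, the band boundaries mentioned above can be collected into a small helper. The function name and labels are mine; the cutoffs simply restate the article's figures (note that the Tufts demo at 0.22 THz actually sits just below the nominal 300-GHz edge of the terahertz band):

```python
def classify_band(freq_hz):
    """Map a frequency (in hertz) to the bands discussed in the article.

    Boundaries restate the article's figures: "ultra high frequency"
    covers the everyday cellular/WiFi frequencies, "extremely high
    frequency" millimeter waves run roughly 30-300 GHz, and the
    terahertz band spans 300 GHz to 3 THz.
    """
    if 300e6 <= freq_hz < 3e9:
        return "UHF"
    if 30e9 <= freq_hz < 300e9:
        return "EHF (millimeter wave)"
    if 300e9 <= freq_hz <= 3e12:
        return "terahertz"
    return "outside the bands discussed here"
```

For example, WiFi at 2.4 GHz lands in UHF, a 60-GHz 5G candidate lands in the millimeter-wave band, and 500 GHz lands in the terahertz band.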

The new modulator, which is only 100 micrometers long, consists of two gold wires that serve as a waveguide. A bed of two-dimensional electron gas rests beneath the wires to dampen the wave as it travels by. By altering the number of electrons in this bed, the researchers can control the degree to which the terahertz waves (generated by a pair of lasers and a photon mixer) are absorbed by the electron gas. By absorbing more or less, they can modulate the waves as they travel across the wires.  

With this setup, the Tufts team achieved data rates of 28 Gb/s in the terahertz band. At that rate, Sonkusale says, a user could download a thousand high-definition videos in a fraction of a second. The team reached this rate using on-off keying, one of the simplest types of modulation. As the name implies, it communicates data through the presence or absence of a signal. More sophisticated modulation techniques could deliver even higher rates.
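On-off keying is simple enough to sketch in a few lines. The snippet below is an illustrative baseband simulation, not the Tufts implementation (the function names, toy carrier, and energy threshold are mine): a 1 lets the carrier through for one bit slot, a 0 blocks it, and the receiver recovers bits by thresholding the energy in each slot.

```python
import numpy as np

def ook_modulate(bits, carrier, samples_per_bit):
    # A 1 transmits the carrier for one bit slot; a 0 transmits nothing.
    gate = np.repeat(np.asarray(bits, dtype=float), samples_per_bit)
    return gate * carrier[: len(gate)]

def ook_demodulate(signal, samples_per_bit, threshold=0.1):
    # Recover bits by thresholding the average energy in each bit slot.
    slots = signal.reshape(-1, samples_per_bit)
    energy = (slots ** 2).mean(axis=1)
    return (energy > threshold).astype(int).tolist()

# Round trip with a toy sine-wave carrier:
bits = [1, 0, 1, 1, 0]
t = np.arange(len(bits) * 100)
carrier = np.sin(2 * np.pi * 0.05 * t)
recovered = ook_demodulate(ook_modulate(bits, carrier, 100), 100)
```

The real device does the same gating on a wave oscillating hundreds of billions of times per second, which is why its switching speed is the hard part.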

The group also reported an intensity modulation of 96 percent. This means the modulator could create two states (“on” and “off,” for example) with a separation of up to 96 percent to distinguish between them. A higher modulation index typically leads to fewer errors during transmission, because it makes it easier for a detector to tell the difference between signals.
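One common way to express that 96 percent figure (definitions of modulation depth vary, and the paper may use a different one) is the fraction of the “on” intensity that the modulator can suppress in the “off” state:

```python
def modulation_depth(i_on, i_off):
    # Fraction of the "on" intensity suppressed in the "off" state.
    return (i_on - i_off) / i_on
```

Under this definition, an “off” state that lets through only 4 percent of the “on” intensity corresponds to a depth of 0.96, which is why Sonkusale describes the device as behaving like a switch.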

“The modulation index is so high that it literally behaves as a terahertz switch,” Sonkusale says. “You can turn it on—and you get 96 percent transmission and you turn it off—and it completely blocks transmission.”

Of course, a modulator is only one component in a communications system—to build a new cell network that could handle terahertz waves, researchers would also need to build terahertz-capable antennas and receivers, as well as powerful sources to generate the waves in the first place.

One problem with terahertz waves: They are very easily absorbed by many materials as they propagate through the air, which makes it difficult to rely on them for long-distance communication. “Almost everything absorbs the terahertz wave,” says Sonkusale.

For that reason, Sonkusale thinks terahertz waves will first be used to send messages between devices within fairly short range of each other. Such a setup would be similar to Bluetooth, but with much faster data rates. “Imagine your Blu-Ray is connected to your TV without a wire,” he says.  

Sebastian Thrun, MOOC pioneer

Can MOOCs Cure the Tuition Epidemic?

This is part of a series on MOOCs and online learning

In the United States, the biggest obstacle standing in the way of students going to college is not their high-school grades or their test scores. It’s cash. For low-income applicants especially, soaring tuition is one of the main reasons that as many as 40 percent of those accepted in the spring don’t show up in the fall. According to the College Board, for the 2016-17 school year, average tuition and fees are about US $33,500 at private colleges, $9,650 for state residents at public colleges, and $25,000 for out-of-state residents. To these charges, add about $10,000 a year for room and board. In general, tuition has been rising far faster than the cost of living. Going to college can be like buying a new Mercedes every year.
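To make the sticker shock concrete, here is the four-year arithmetic implied by the College Board figures above. This is a toy calculation of list prices only; real costs vary with financial aid, fees, and annual tuition increases.

```python
# College Board per-year sticker prices for 2016-17, as cited above
TUITION = {
    "private": 33_500,
    "public_in_state": 9_650,
    "public_out_of_state": 25_000,
}
ROOM_AND_BOARD = 10_000  # rough per-year figure from the article

def four_year_cost(school_type):
    """Four-year sticker price, ignoring aid and tuition growth."""
    return 4 * (TUITION[school_type] + ROOM_AND_BOARD)
```

Even the cheapest path here, in-state public tuition plus room and board, comes to nearly $80,000 over four years; the private-college figure tops $170,000.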

How did this affliction invade our universities? While there are many causes, two factors account for much of it. For public schools, the answer is simple: State legislatures have savaged higher education over the last decade. Previously, up to two-thirds of a state college’s budget came from the state; now only about half of the budget does. Students and their families pick up much of the difference.

The bloating of academic bureaucracy is another cause. While the number of new students and faculty hasn’t grown much, non-academic staff at colleges and universities has ballooned—more than doubling in a generation. What’s more, compensation for administrators is often far higher than faculty pay.

One painful result of the tuition epidemic is that U.S. student debt is now at an all-time peak of more than a trillion dollars. According to the Project on Student Debt, the average college student graduating in 2015 carried debt of about $30,000; in 2004, the average was a more manageable $18,550.

Under stress, most schools have opted for cost-cutting. Perhaps most troubling, they have replaced full-time tenured faculty with cheaper adjuncts. According to a 2013 report, full-time tenured and tenure-track positions at U.S. universities fell from 78 percent of teaching staff in 1969 to just 33 percent in 2009. Over the same period, the share of non-tenured positions in the teaching staff more than tripled. On average, adjuncts earn just a third of what tenured faculty make and have no job security and few benefits; in a sense, they were the bellwethers of the gig economy.

Despite these and other cuts, tuitions continued to climb. So when the first MOOCs—massive open online courses—appeared on the scene, they seemed like the answer to a prayer. As described in my previous post, MOOCs made it possible to reach many thousands of students at a time, so long as they had a computer and a robust Internet connection capable of video streaming. Sebastian Thrun, a MOOC pioneer, famously suggested that the technology would so disrupt education that in 50 years, only 10 institutions of higher learning would remain.

Read More
System captures Mach cone from laser pulse

Superfast Camera Sees Shock Wave From Light

A camera system that captures a snapshot of overlapping light waves in a tiny fraction of a second could lead to new methods for imaging, allowing scientists to watch the brain’s neurons interacting or see neutrinos colliding with matter.

Read More
An illustration of a key lock surrounded by electrical cords to represent net neutrality.

Is Net Neutrality Good or Bad for Innovation?

Update: On 18 May 2017, the Federal Communications Commission voted in favor of a proposal by Chairman Ajit Pai to review existing rules around net neutrality. Specifically, the proposal would undo the FCC's classification of Internet service providers as common carriers under Title II of the Communications Act. That classification subjects ISPs to rules such as those that prohibit carriers from speeding up or slowing down specific content on their networks. The 2-1 vote begins the process required to change that classification, and also triggers a 90-day review that includes a public comment period. Later this year, commissioners will cast their final votes. The proposal's ultimate goal is to relax regulations that Pai says have prohibited ISPs from charging extra for delivering content that requires more bandwidth. It is largely based on an economic argument that such restrictions have hindered investment across the telecom industry. Opponents have said that removing regulations would harm content producers such as Netflix and Google, as well as small businesses.

The Trump administration is just days away from taking the reins in Washington, D.C., and many technologists are wondering what its arrival will mean for net neutrality. The new leadership is flanked by advisors who have opposed net neutrality in the past, suggesting that U.S. policy could be challenged or even reversed in the year ahead.

The Obama administration made the strong defense of net neutrality a hallmark of its Internet policy, based on the belief that preserving it benefits consumers and promotes innovation. Now, Trump’s administration is likely to argue the opposite, and say that tossing out net neutrality is really what’s best for consumers, companies, and the U.S. economy.

Major tech companies have also lined up on opposing sides of the debate. Google, Apple, Amazon, and Netflix support net neutrality, while Internet service providers (ISPs) including Comcast, Verizon, and AT&T have long railed against it. Everyone argues that their position will spur innovation and economic growth. So who’s right?

Unfortunately, there is no clear answer. Economists have done plenty of modeling on net neutrality over the past eight years, but there isn’t a strong consensus about whether keeping it or throwing it out would be best for consumers, innovation, or the economy. “Nobody has much data,” admits Gerald Faulhaber, an economist and professor emeritus at the University of Pennsylvania who has done his own review on the matter.

To better understand the most compelling arguments about how changing U.S. policy on net neutrality might boost innovation, or kill it, IEEE Spectrum spoke with Nicholas Economides, an economist at New York University who supports net neutrality, and Faulhaber, who generally opposes it.

Let’s start by defining our terms. As you may recall, net neutrality is a regrettably bland term for an important idea: ISPs such as Verizon and Comcast ought to deliver all online content to consumers in the same way, without granting preferential treatment to any particular content.

While it’s worth noting that this concept can be interpreted in many ways, it often means prohibiting ISPs from asking content producers, including Netflix and Google, to pay a higher rate so that their content is delivered at faster speeds.

From the perspective of ISPs, not all content is created equal—if millions of Comcast customers want to live stream the Super Bowl from NBC, that’s going to require a lot more bandwidth and network resources from Comcast than if they were all trying to load a simple website. ISPs argue that allowing them to charge content creators (such as NBC) based on how much bandwidth they consume, or how fast their content must be delivered, is more fair.
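A back-of-envelope calculation shows why ISPs care about this asymmetry. The numbers below are purely illustrative, not from the article: a few million simultaneous HD streams dwarf ordinary web browsing by orders of magnitude.

```python
def aggregate_gbps(users, per_user_mbps):
    """Total simultaneous downstream demand, in gigabits per second."""
    return users * per_user_mbps / 1000

# Made-up but plausible numbers: 5 million live HD streams at 5 Mb/s
# each, versus the same users averaging ~0.1 Mb/s of casual browsing.
streaming_load = aggregate_gbps(5_000_000, 5)
browsing_load = aggregate_gbps(5_000_000, 0.1)
```

On these toy numbers the live event demands fifty times the network capacity, which is the heart of the ISPs' "not all content is created equal" argument.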

Clearly, this policy would also generate extra revenue for ISPs, which they say they will reinvest back into their own networks. This argument is tempting because right now, many ISPs have a near-monopoly in their service areas with little incentive to improve their service (a situation that has led to notoriously low customer satisfaction). If they could charge clients more to move content, ISPs may be more motivated to develop faster service.   

Skeptics say that ISPs won’t spend very much money on achieving faster speeds when they could just slow down service for any content producer who doesn’t fork over enough cash. “What matters to consumers is relative speed,” says Economides. “Being able to delay things creates artificial scarcity, for which content providers are willing to pay.”

Economides also says investors will want any profits returned to them, rather than ploughed into infrastructure. “As soon as AT&T makes money from violating net neutrality, there will be tremendous pressure to give that money to the shareholders,” he says.

Meanwhile, net neutrality supporters say the real innovation at stake is that which comes from content producers such as Netflix, Google, and Facebook. If content producers have to pay ISPs in order for their content to be delivered quickly to customers, that money is not being invested back into those companies.

There’s also a lot of concern for the little guys—small content producers who may not be able to afford the rates ISPs will charge for faster delivery. Critics fear that booting net neutrality will create a “two-tiered” Internet wherein wealthy companies pay for content to be delivered at high speeds, while content from startups or small web publishers languishes in the slow lane.

“The greatest threat to innovation is if new companies, innovative companies, have to pay a lot to be on the same playing field as everybody else,” Economides says. Net neutrality supporters worry there might be secondary effects from limiting the free flow of ideas and information online.

In a sense, though, we already have a two-tiered Internet. Wealthy companies can pay for web designers and software engineers to program sites that load faster and respond more quickly to commands. And a handful of publishers already own most of the sites that we spend the majority of our online time perusing.

Faulhaber doesn’t think eliminating net neutrality would deal a devastating blow to small content producers. He compares tiers of Internet speeds to the various delivery services offered by a post office. One company can pay a little extra for expedited service while others who don’t need rushed delivery simply pay the standard rate.

“We've heard this about how only large firms will be able to do it, but if you look at the way the economy works—that’s not true at all,” he says. “The only people who will buy it are those that need it, and if you’re a small firm and you need it, you’ll buy it.”

In fact, Faulhaber thinks new services could pop up—such as in robotic surgery or gaming—if content producers were guaranteed access to a high-speed lane. “I don’t see this as removing opportunities, I see this as creating opportunities for innovation,” he says.

Right now, there are still a lot of unknowns about how a post–net neutrality world would actually operate. It’s not clear how much faster or slower content might be delivered, or what fees an ISP would charge for each service. Given those gaps in knowledge, it’s hard to know how these expenses might impact small or large businesses, and affect ISPs or content producers.

And if net neutrality disappears, what might that mean for consumers? Well, if ISPs could charge content producers more to cover the expense of maintaining their network, they may charge consumers less for home service. Of course, content producers could also wind up passing along the cost of the extra fees they must now pay to consumers, zeroing out any cost savings from ISPs.

While a drop in prices for consumers is theoretically possible, no one knows if it would actually happen in a post–net neutrality world. “There’s no evidence that if net neutrality were violated today, the price would fall,” Economides says. “There could be some theoretical model in which case it would happen, but there are also other models where it doesn’t happen, so there is no bottom line on that.”

Consumers may also benefit from “zero rating,” a perk that some ISPs have already begun to offer. With zero rating, an ISP forgives any data that a customer spends on specific content. For example, AT&T said in September that it would not count programs streamed through its own DirecTV Now service against customers’ monthly data limit, much to the FCC’s chagrin.

Clearly, net neutrality is a complicated issue with many factors at play that can be tweaked to produce both positive and negative scenarios. To try to sum up all of these variables and their potential outcomes, Economides performed an analysis in 2015 to search for the “total surplus” in the market for Internet service. Total surplus is an economic measure of the combined benefit to producers and consumers; the conditions that maximize it are the ones that serve the market best overall.
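The bookkeeping behind total surplus is simple addition over everyone in the market. The sketch below uses toy dollar figures of my own invention, not Economides' actual model, just to show how one regime can beat another on this measure even while ISP profits rise.

```python
def total_surplus(consumer, isp_profit, content_profit):
    """Total surplus: combined benefit to consumers and all producers."""
    return consumer + isp_profit + content_profit

# Purely illustrative numbers (not from Economides' analysis):
neutral = total_surplus(consumer=100, isp_profit=30, content_profit=50)
tiered = total_surplus(consumer=90, isp_profit=55, content_profit=30)
```

In this toy example the tiered regime makes ISPs better off, but the losses to consumers and content producers outweigh that gain, so total surplus is lower, which is the shape of the result Economides reports.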

In Economides’ case, he tried to figure out whether keeping or ditching net neutrality would bring us closer to total surplus, by factoring in the profits of ISPs, the profits of content producers, and the benefits to consumers. In the end, he found that sticking to net neutrality “tends to maximize total surplus” for society.

With that in mind, he also thinks preserving net neutrality would promote innovation and economic development across the most businesses. “If you’re a Republican and you believe you should support business, then you should be pro–net neutrality because most businesses benefit from net neutrality,” he says. “The only businesses that benefit from violations of net neutrality are ISPs. If you think about it, 95 percent of businesses in the U.S. are not ISPs.”

Faulhaber, of course, sees any impact from doing away with net neutrality as simply another reasonable cost of doing business. “Just like we see in other businesses that offer different qualities of service, you get to choose what you want, and that’s fine,” he says. “This is standard in the economy.”

So what will actually happen with the new administration? For the past few years, net neutrality was preserved by the U.S. Federal Communications Commission under Obama’s watch. Now, though, Trump could change all that. His transition team includes several opponents of net neutrality, including Vice President–elect Mike Pence and Rep. Marsha Blackburn. And the trio he selected to coordinate the handoff of the FCC to his new administration are all against it.

In the coming weeks, Trump will also have the opportunity to appoint three new commissioners to lead the FCC, including a Chair. His new appointees will join two current Republican members—Ajit Pai and Michael O’Rielly—who have both criticized net neutrality. The FCC is an independent federal agency, which means it’s overseen by Congress, which is also now under Republican control.

That said, anyone who tries to reverse U.S. policy on net neutrality will likely have a difficult road ahead. Public sentiment will not be on their side—when the FCC solicited public comments on the issue in 2014, it received a record 3.7 million comments, with the vast majority in favor of net neutrality.

Discarding it, Economides says, may not be as simple as net neutrality opponents would like to believe. “I’m not saying it’s not going to happen; I’m just saying it’s not going to be easy,” he says.

Intel Project Alloy

CES 2017: Intel's Project Alloy Is the VR System I Want in My Living Room

Over the last year, we’ve seen a transition from the peak of virtual reality (VR) hype into the much more tempered—and consequently less impressive—reality of commercial VR systems. The experience is still good, or as good as can be expected, but VR in general still suffers from a variety of minor annoyances: a fat cable connects your headset to your computer, external tracking systems have to be set up, and because you’re immersed in an artificial reality, everything around you in real life is suddenly a collision hazard.

Intel’s Project Alloy is a virtual reality headset that solves all of these problems. It’s completely tetherless—all of the head and body tracking is contained inside the headset itself. It can sense objects in your environment and convert them into virtual representations of themselves, allowing you to wander around your living room without permanent damage to your shins.

Read More

Laser Weapons Will Turn Earth's Atmosphere Into Lenses, Deflector Shields

Earth's atmosphere is a constant annoyance for anyone trying to do anything useful with light. Even if you discount things like clouds, smog, and smoke, there are layers and pockets of air of varying temperatures that routinely make things go all wobbly. This is why most halfway decent telescopes are built on the tops of mountains and all the best telescopes are out in space.

Things get even more difficult when you're trying to push a lot of light through the atmosphere with the goal of having it all end up exactly where you want it, as is the case with a directed energy weapon. Adaptive optics have been able to help somewhat, but wouldn't it be better if the atmosphere could actually do something useful? You know, for once?

BAE Systems has been working on a way to use lasers to actively reshape the atmosphere to turn it into a variety of optical tools. The Laser Developed Atmospheric Lens system (LDAL) uses powerful laser pulses to make air itself into lenses, mirrors, and even protective deflector shields.

Read More
An illustration of a brain linked to multiple computer screens on desktops where people sit is evocative of a MOOC.

How the Pioneers of the MOOC Got It Wrong

This is part of a series on MOOCs and online learning

In 2011, when Stanford computer scientists Sebastian Thrun and Peter Norvig came up with the bright idea of streaming their artificial-intelligence lectures over the Internet, they knew it was an inventive departure from the usual college course. For hundreds of years, professors had lectured to groups of no more than a few hundred students. But MOOCs—massive open online courses—made it possible to reach many thousands at once. Through the extraordinary reach of the Internet, learners could log on to lectures streamed to wherever they happened to be. To date, about 58 million people have signed up for a MOOC.

Familiar with the technical elements required for a MOOC—video streaming, IT infrastructure, the Internet—MOOC developers put code together to send their lectures into cyberspace. When more than 160,000 enrolled in Thrun and Norvig’s introduction to artificial intelligence MOOC, the professors thought they held a tiger by the tail. Not long after, Thrun cofounded Udacity to commercialize MOOCs. He predicted that in 50 years, streaming lectures would so subvert face-to-face education that only 10 higher-education institutions would remain. Our quaint campuses would become obsolete, replaced by star faculty streaming lectures on computer screens all over the world. Thrun and other MOOC evangelists imagined they had inspired a revolution, overthrowing a thousand years of classroom teaching.

These MOOC pioneers were therefore stunned when their online courses didn’t perform anything like they had expected. At first, the average completion rate for MOOCs was less than 7 percent. Completion rates have since gone up a bit, to a median of about 12.6 percent, although there’s considerable variation from course to course. While a number of factors contribute to the completion rate, my own observation is that students who have to pay a fee to enroll tend to be more committed to finishing the course. 

Read More
schematic of new BICSEL laser

Supremely Small BICSEL Laser Traps Light in Open Air

Tapping into an idea from quantum mechanics that dates back to the Jazz Age, researchers have created a new type of laser that could be much tinier than conventional lasers, potentially leading to faster optical communications and more powerful computers.

The laser relies on a phenomenon known as bound states in the continuum (BICs), which allows researchers to build a laser cavity in open air. “It’s not every day that you have the possibility to make a new type of laser,” says Boubacar Kante, a professor of electrical and computer engineering at the University of California, San Diego, who with his colleagues described their laser in this week’s issue of Nature.

Read More
An editor wears a set of VR googles while sitting in front of an ominous large metal box

CES 2017: AxonVR Brings Touch and Temperature to VR

This is my tenth year at CES. Every year, I spend much of the show wondering why I put myself through it. And then, maybe once per show, I get reminded of why I’m so lucky to be here doing what I do.

Last night, AxonVR reminded me that technology can be absolutely magical, when a tiny virtual deer took a warm and fluffy nap on my outstretched palm.

We use the term virtual reality to refer to headsets like the Oculus Rift and the HTC Vive: they make it look, and sound, like you’re in a virtual world. As much of an achievement as it is to fool our sight and hearing, we have other senses that we use to experience reality. The vast majority of VR systems completely ignore these senses, like touch, because they’re even more challenging to replicate: I mean, how the heck could you possibly create an arbitrary touch sensation instantly on demand? I have no idea. But AxonVR does.

This monster is AxonVR’s HaptX system. It’s an enormous metal box with some buttons and blinking lights, and it occasionally makes ominous noises. In case you can’t tell from the way it looks, it’s very much a prototype. You put on an HTC Vive headset, and then stick your left hand into that black protruding rectangle at the front, palm up.

Both IEEE Spectrum editor Amy Nordrum and I tried this thing out, and we really had no idea what to expect: AxonVR wouldn’t tell us exactly what was going to happen beforehand, almost certainly so they could watch us totally freak out, because we totally freaked out. Or at least, I did; Amy was much more professional about the whole thing.

Here’s how she describes the experience:

I poke an apple and bring it over to drop it on my left hand. Immediately, I feel the familiar shape of an apple in my hand. It's round and smooth on every side, just like it looks. Neat.

Next, I select a nondescript cube and place it in my palm. I feel its slight weight. Then, it starts to change colors. First it’s red, and my hand feels warmer. Then it’s blue and my hand is cooler. The change in color and the change in sensation happen simultaneously. I wasn't expecting to feel a temperature gradient through this haptic VR system, and it’s a pleasant surprise.

I see a grenade on the table and drop it on my hand. The grenade erupts and spews flames down, engulfing my palm and fingers. In addition to feeling the heat, I feel that heat come in waves as the flames spread over my hand.

The obvious next choice is a large piece of ice waiting helpfully on the table. It settles in my hand with a soothing chill. I move it across my fingers and over my palm, and the sensation follows with no detectable latency.

Then, I select an object I’ve been avoiding—a spider, with its angled legs and erratic movements. The lightness of the spider’s legs, spread over the eight points of its wide stance, immediately transfers to my palm. Then, the spider starts to move. It picks up its front two legs and I feel a difference in sensation, with perhaps slightly more weight at the other six points. The spider shuffles around and I feel its many legs, like eight little pencil points pressing lightly on my palm.

Lastly, I select a miniature deer, beige with a large rack of black antlers, that has been resting off to the side since I began the demo. I drop the deer into my hand and feel its hooves plant firmly in my palm. The sensation is not at all like the spider, and it shouldn’t be, with more heft behind each hoof. The deer fidgets and I feel its weight shift in my palm as it turns, and the stroke of a hoof when it starts to paw at my hand.

Then, the deer starts to lie down. Its legs fold up beneath it and its furry belly rests in the center of my palm. I can feel it breathe, and the spot on my hand where it’s resting starts to warm. The experience of holding a tiny deer in my hand and feeling its hooves is a neat trick, but the ability to actually feel it breathe and sense the warmth of its furry body brings on an entirely different dimension of emotion. It feels like my pet, and I want to keep it.

The demo ends with a grand finale whereby a flying dragon alights on the top of the silver box in front of me and blows a mouthful of flames onto my hand with spectacular flair. It is a good dragon, and the flames feel sufficiently fiery. Still, it is that tiny deer that leaves the fondest impression, and leaves me feeling blown away at the sensitivity and fidelity of the AxonVR haptic technology.

Somehow, this mystery box is able to replicate tactile and thermal sensations, with force feedback. It did it all at once, and very compellingly. I spent most of the demo giggling like a little kid. The resolution of the tactile sensations was spectacular, from the edges of a cube, to the roundness of an apple, to the tiny prancing hooves of the deer. And when the deer tucked its little hooves underneath itself and lay down, I could feel how warm and soft its belly was. The warmth was immediate, just like you’d expect, and it was the same with the dragon blowing fire, or the ice cube sliding around my palm: there wasn’t any noticeable latency, and the thermal sensations were powerful enough to be completely convincing. Ice felt like ice feels, and fire felt like you’d imagine fire feeling if you could hold it without actually burning yourself.

Saying that AxonVR has come up with a convincing simulation would be underselling what the HaptX experience is like. Other haptic systems we’ve tried require some (or occasionally lots of) imagination to transform a generalized touch sensation into something that matches what you’re seeing inside of your VR headset. With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary: or rather, I can’t imagine how anything short of a full-on Star Trek holodeck would be able to make a tiny deer that feels any more real than the one that napped on my palm last night.

We’re as curious as you probably are about what, exactly, is inside that enormous box. Personally, I think it’s suspicious that the box is just large enough to contain an apple, an ice cube, a very large spider, a very small deer, and a dragon, because that would explain how it works as well as it does. AxonVR mentioned something about “an array of microfluidic actuators,” but we’ve been promised more details once things calm down a bit after CES.

While what we tried out last night is certainly futuristic all by itself, AxonVR’s website promises even more, including an entire haptic exoskeleton that allows you to feel force over your entire body. This video from last April shows a bonkers concept image of what their final system might look like, along with a general description of the tech and some behind the scenes on the hardware itself:

AxonVR closed a $5.8 million seed round of funding last month, which is a lot, although not a surprising amount assuming they let prospective investors spend five minutes with a hand in that magic box. The company is looking at several different potential markets, including design and manufacturing, training and simulation, and location-based entertainment. That last one is where you’re most likely to be able to experience this for yourself: AxonVR’s press materials suggest that the company is not imagining you would have their system in your home, but more likely, you’ll find it at arcades or theme parks. There’s no timeline for any of this quite yet, but AxonVR is working on partnerships, and we’re expecting additional announcements later this year.

We can’t wait.


IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
