Tech Talk

Researchers set up this 128-antenna array in March at the University of Bristol to carry out the first of several attempts to achieve greater spectrum efficiency.

5G Researchers Set New World Record For Spectrum Efficiency

A team of 5G researchers has set a new world record for spectrum efficiency. Their achievement with massive MIMO (multiple-input, multiple-output) arrays, cellular base stations made up of dozens of antennas, is further evidence that the technology is a promising option for wireless engineers building networks to deliver ultra-fast data speeds to more smartphones and tablets than ever before.

In an experiment on Wednesday, the group achieved a rate of 145.6 (bits/s)/Hz for 22 users, each using 256-QAM modulation, on a shared 20-MHz radio channel at 3.51 GHz served by a 128-antenna massive MIMO array. That represents a 22-fold increase in spectrum efficiency over today’s 4G networks.
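To put that figure in perspective, multiplying the aggregate spectral efficiency by the channel bandwidth gives the total data rate the 22 users share. Here is a minimal back-of-the-envelope sketch in Python; the even per-user split is an illustrative assumption, since a real scheduler would not divide the channel perfectly evenly:

```python
# Rough arithmetic on the reported massive MIMO figures (illustrative only;
# the per-user number is a simple average, not a measured result).

spectral_efficiency = 145.6   # (bits/s)/Hz, aggregate across all users
bandwidth_hz = 20e6           # shared 20-MHz channel
num_users = 22

aggregate_throughput = spectral_efficiency * bandwidth_hz   # bits per second
per_user_throughput = aggregate_throughput / num_users

print(f"Aggregate: {aggregate_throughput / 1e9:.2f} Gb/s")  # ~2.91 Gb/s
print(f"Per user:  {per_user_throughput / 1e6:.0f} Mb/s")   # ~132 Mb/s
```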


Advanced Research Projects from DARPA's Pentagon Demo Day

Yesterday, the Defense Advanced Research Projects Agency (DARPA) held a Demo Day for the Department of Defense in the courtyard at the center of the Pentagon to give the defense community “an up-close look at the Agency's diverse portfolio of innovative technologies and military systems at various stages of development and readiness.” In other words, prototypes of ultra-futuristic, high-risk, high-reward hardware and software.

The Pentagon courtyard was filled with displays centered on ten theme areas: air, biology, counterterrorism, cyber, ground warfare, maritime, microsystems, seeds of surprise, space, and spectrum. That last one is probably why we were invited, and we came back with this gallery full of pictures of all of the coolest new stuff.


Gauntlets of Levitation, Living Desktops, and More: Video Highlights from The 2016 Human-Computer Interaction Conference

The 2016 Computer-Human Interaction Conference (CHI, which is pronounced “kai” like the Greek letter) is taking place this week in San Jose, Calif. The conference is all about how interaction technology is advancing, and how it will shape the ways in which we experience our environment (and each other). Really, this is just a complicated way of saying that the conference provides a great excuse for researchers to explore new and crazy ways of using computers, and some of the stuff they’ve come up with will blow your mind. Think power tools that tell you what to do with themselves, or a couch that’s also a huge touch controller, or projection-augmented 3-D printing on your skin, or gloves that let you levitate objects: all of these are functional prototypes that researchers described at this year’s conference.

You can watch all 281 video previews here, or you can have a look at this baker’s dozen of videos that we’ve hand-selected for overall future-ness, technical bewonderment, transformational potential, and generalized weirditude.


Facebook Revises Bot Platform to Place Messenger Users Firmly in Control

Facebook’s big announcement at the annual F8 developer conference in April was its unveiling of a bot platform that developers could use to build digital assistants that operate within Messenger. The move was meant to expand the functionality of the messaging service, now used by nearly a billion people worldwide, so that it can also deliver customized news and facilitate e-commerce.

Immediate reactions were mixed, and the announcement spurred a lot of discussion about whether users would embrace this newest experiment from one of the world’s largest tech companies. Commenters also mused about how Messenger bots might evolve to play a role in our daily lives. That could depend as much on Facebook’s ability to seamlessly integrate new notifications and chats into the user experience as on the ability of developers to devise clever functions for all those digital bots.

To keep the bot platform under wraps, Facebook did not conduct any external tests before its release. Now, soon after launch, user and developer feedback is quickly reshaping its future. On Tuesday, at 2016 TechCrunch Disrupt in Brooklyn, N.Y., Stan Chudnovsky, head of product for Messenger, said his team is already making some early revisions to its new bot world.

In one update, Facebook addressed concerns over bot-generated spam. The company knows that if busy bots send too many notifications to users or don’t deliver useful content, their behavior could erode the messaging service’s current status as a highly valued link between friends.

To prevent this, Chudnovsky says Facebook swapped the original “Block” button that appears in the upper right-hand corner of each new bot chat for a “Manage” option that lets users choose the type of messages they wish to receive. “Giving people more control seems to be what people want to have,” he said.

Facebook is also looking at ways to differentiate between messages that are immediately important to users (perhaps those sent by a friend or which include breaking news) and those that can be read later (the pesky bot variety). In practice, this could mean that the company designates a different ringtone or vibration pattern when users receive an urgent message, or that it simply filters certain messages from the instant stream and issues them in a group alert a few times throughout the day.

Overall, Chudnovsky is pleased with the launch and said more than 10,000 developers have begun building bots. He points out that more than 2,500 merchants on the e-commerce platform Shopify offer bot-based customer service. “You have a bunch of early signs that the platform is starting to work,” he says.

Of course, it’s not yet clear to users or the company which bots will become most integral to users’ lives. Chudnovsky compares that uncertainty to the early days of Apple’s App Store, when many of today’s most successful apps weren’t yet obvious or even imagined.


Hyperloop Transportation Technologies Picks Passive Levitation for Pods

The forward motion of its subsonic, people-carrying pods will provide the necessary magnetic levitation as a side effect, says Hyperloop Transportation Technologies, a Southern Calif.-based startup, in an announcement made this morning.

Dirk Ahlborn, the CEO and founder of HTT, told IEEE Spectrum that this passive levitation saves energy and trouble. “One of the biggest problems is the need for a high-powered track; this allows us to achieve levitation without having power stations all along the track,” he said.

Forward motion is important to HTT as well, which has been generating news stories with some regularity as it courts partners and investors to help realize Elon Musk’s 2013 vision of a partially evacuated, superfast tubular train: the Hyperloop. Back in March HTT said it had come to an agreement with the government of Slovakia to explore building a track in that country. Later this week, the company is to stage a media event in Las Vegas.

Queen's University's HoloFlex bendable 3-D smartphone.

HoloFlex: A Flexible Smartphone with a Holographic Display

Based on how things have been going for the last few years, it seems like we're getting awfully close to hitting peak smartphone, where new phones have run out of ways to differentiate themselves besides being slightly faster or slimmer or brighter or whatever. We’ve almost forgotten what it’s like to see brand new phones with fundamentally new and exciting technologies inside of them. Oh, you've got a fingerprint sensor that's under the screen now instead of under the bezel? Hooray.

Fortunately for tech lovers everywhere, the Queen’s University Human Media Lab in Canada is keeping busy reminding us that smartphone hardware really does have a future that's worth getting excited about. Their latest prototype is the HoloFlex, which runs Android Lollipop, includes a full HD screen, and is powered by a 1.5 GHz Qualcomm Snapdragon 810 processor with a dedicated GPU and 2GB of RAM.

And did we mention that you can bend it, and that it has a holographic lightfield display that can project glasses-free 3-D images to multiple users simultaneously? Because it’s got that stuff going for it, too.


Bell Labs Looks at Claude Shannon’s Legacy and the Future of the Information Age

As the doors of Hamming Hall swung open, a parade of tech-world notables headed inside. Internet pioneers and Silicon Valley executives crammed into the front rows, along with 16 IEEE Fellows, four of whom were also IEEE Medal of Honor recipients. A Nobel laureate walked in and squeezed himself into a seat between them.

Those brilliant minds gathered at Bell Labs in Murray Hill, N.J., last Thursday to celebrate a man who also stood out for his extraordinary genius: Claude Shannon, the mathematician and electrical engineer who founded the field of information theory and whose groundbreaking ideas on computing, cryptography, and communications have had a lasting impact on the world of technology. Shannon, who worked at Bell Labs from 1941 until 1956, when he joined the MIT faculty, died in 2001 at age 84. This year marks the 100th anniversary of his birth.


IBM Puts a Quantum Processor in the Cloud

IBM announced today that it is making one of its superconducting quantum processors accessible over the Internet. Those itching to try out such hardware will be able to get hands-on experience through a new quantum computing platform—at least, the experience will be as hands-on as it can be with hardware sealed inside a remote dilution refrigerator and cooled to a fraction of a degree above absolute zero. 

With just five qubits, the chip won’t let you rapidly factor large numbers in order to break encryption. In fact, a classical simulation of this system takes less time to run, says Jay Gambetta, manager of the Theory of Quantum Computing and Information Group at IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. 

But the goal of this tool, says Gambetta, “is to get people to start thinking quantum, to start thinking in terms of how a quantum computer works. Most people think quantum is hard or it’s spooky or it’s different. And yes it’s different, but it’s actually not hard.”

The portal, dubbed the IBM Quantum Experience, includes tutorials and a visual programming interface. Users can learn about algorithms and perform real quantum operations on the five-qubit chip, which is housed at the Watson facility. The aim is to have the chip be available 24-7, although it will go offline intermittently so that automated calibrations can be performed. 
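To get a feel for what “thinking quantum” involves, consider the kind of two-qubit exercise such tutorials cover. The sketch below is a plain classical simulation in Python, not IBM’s interface: it applies a Hadamard gate followed by a CNOT to prepare an entangled Bell state and then samples measurement outcomes, which also illustrates Gambetta’s point that a handful of qubits is easy to simulate on an ordinary computer.

```python
import numpy as np

# Classical simulation of a tiny quantum circuit (a hypothetical sketch,
# not IBM's programming interface): Hadamard + CNOT makes a Bell state.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # superpose the first qubit
state = CNOT @ state                           # entangle the two qubits

probs = np.abs(state) ** 2                     # Born rule: outcome probabilities
samples = np.random.choice(["00", "01", "10", "11"], size=1024, p=probs)
print(dict(zip(*np.unique(samples, return_counts=True))))
# Roughly half "00" and half "11" -- the signature of an entangled pair.
```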

The chip is part of a family of superconducting quantum processors created at IBM that have their qubits arranged in lattices instead of in a line. Gambetta says this arrangement offers better connectivity between qubits for error correction, a key ingredient in the effort to scale quantum systems up to create a “universal quantum computer” that can solve certain problems far faster than conventional, classical computers.

Access to the chip will be free but not frictionless. “We don’t want a billion bots launching things and stopping real people from wanting to use it,” Gambetta says. He says that the group is developing a system that can assign coins to different types of users to grant access. 

This is not the first time that a team has offered public access to quantum computing hardware. In 2013, a group at the University of Bristol launched a site that lets users practice programming and connect to a small optical system with two qubits. The ability to command that real-world hardware is currently offline, but the Bristol team says it aims to relaunch soon with a new chip and an upgraded user interface.

An astronaut floats inside a space station with a sophisticated camera.

Review: A Beautiful Planet Takes Full Advantage of IMAX’s Switch to Digital

“There’s no way you can match what you see with your own eyes—but this is close,” says Barry “Butch” Wilmore. As part of the crew of the International Space Station, Wilmore was one of the astronauts-turned-cinematographers who captured footage for the latest IMAX documentary, A Beautiful Planet, which opens today, 29 April.

A Beautiful Planet takes full advantage of the vantage point of the ISS and its multi-windowed viewing cupola. The mesmerizing glowing veils of the aurorae, the night-time splatter of cities across the continents, and swirling storm systems fill the giant IMAX screen in 3-D, leaving a lasting impression of wonder.

The movie also includes many scenes of life on board the ISS, giving some of the best views to date of what a working day is really like in a cluttered collection of floating boxes. The moviemakers’ goal in juxtaposing the interior shots with those aimed outward toward the blue ball we call home is to reinforce A Beautiful Planet’s environmental theme. The point: Just as considerable effort is required to keep a space station habitable, effort also needs to be put into limiting ecological damage on spaceship Earth so as to keep it habitable too.

Of course, A Beautiful Planet is not the first IMAX movie shot in space. Indeed, Toni Myers, the movie’s director and producer, also helmed 2002’s Space Station 3D and 2010’s Hubble 3D. But what’s different is that A Beautiful Planet was shot entirely using digital cameras, dramatically changing the nature of the filmmaking process.

Myers explained that the earlier IMAX cameras and their film packs were so bulky that “you got seven minutes of footage on a mission, in 30-second takes. And every take was the first and only take.” In contrast, the digital cameras allowed as many takes as were needed; footage was stored on small external hard drives sent back to Earth with returning astronauts or Dragon cargo ships. In addition, relatively low-resolution versions could be downlinked directly to the producers on the ground, allowing collaboration on how a shot might be reframed or its exposure adjusted, as well as suggestions for previously unplanned shots.

However, the moviemaking secret of A Beautiful Planet is that the astronaut filmmakers actually shot very little video of the Earth. The filmmakers realized that currently available digital video cameras don’t have the resolution needed for the enormous IMAX screen. Instead, says James Neihouse, the film’s director of photography, digital still cameras such as the Canon EOS C500 EF were used. These captured 4K images at four frames per second in daylight, and two frames per second in the Earth’s shadow. About 250,000 still frames, totaling about 11 terabytes of raw data, were captured in this way. Back on Earth, software was used to interpolate additional in-between frames to create video files with smooth motion. The result was roughly 18 terabytes of footage that had to be edited down for the finished movie.
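The interpolation step can be illustrated with a toy example. The sketch below is a naive linear cross-fade between two captured frames; the actual production pipeline almost certainly used motion-compensated (optical-flow) interpolation, and the 24-frame-per-second target and frame size here are assumptions for illustration only.

```python
import numpy as np

# Toy illustration of synthesizing in-between frames from low-rate stills.
# A naive linear cross-fade, not the motion-compensated interpolation a real
# pipeline would use; the frame size and 24 fps target are assumptions.

def interpolate_frames(frame_a, frame_b, num_between):
    """Return `num_between` evenly spaced blends between two frames."""
    blends = []
    for i in range(1, num_between + 1):
        t = i / (num_between + 1)              # blend weight from 0 to 1
        blends.append((1 - t) * frame_a + t * frame_b)
    return blends

# Two placeholder frames captured 1/4 second apart (4 frames per second),
# downscaled here to keep the example light.
frame_a = np.random.rand(540, 960, 3)
frame_b = np.random.rand(540, 960, 3)

# Playing back at 24 fps requires 5 synthetic frames in each 1/4-second gap.
in_between = interpolate_frames(frame_a, frame_b, num_between=5)
print(len(in_between), in_between[0].shape)    # 5 (540, 960, 3)
```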

Opting for digital cameras over film also allowed images to be captured with very high dynamic ranges, so that dimmer features like stars or the aurorae were not washed out by bright objects in the foreground. The result is spectacular scenes of the sort that would have been previously impossible to see on a screen without resorting to post-production editing wizardry like compositing separately shot footage together or using computer generated graphics.

The astronauts who captured the footage worked in their spare time outside their official scientific and engineering jobs. Neihouse led the team that got the astronauts up to speed on handling the cameras. He notes that the training involved breaking the astronauts of some of the videography habits that NASA had trained into them, such as ensuring that everyone in a shot was oriented with their feet pointing the right direction. This “1-g mode” is good for a press conference beamed from space, but doesn’t convey the true topsy-turvy nature of life onboard the station. Neihouse laughs that he’s “the only [director of photography] in the world who has to train their first unit how to shoot.” 


Movidius Puts Neural Network on a USB Stick

Neural networks are artificial intelligence systems that excel at interpreting images. This makes them promising for helping drones and robots navigate, or for analyzing surveillance footage. But they are typically power hungry, which has limited their use so far. Vision processor company Movidius of San Mateo, Calif., hopes to change that with a low-power chip designed to run neural networks. The neural net accelerator, called Fathom, comes on a USB stick, uses only 1 watt of power, and can run most visual neural nets.

Making a low-power system that can run computationally intensive neural networks is a challenge. Neural networks make sense of images in a way that’s analogous to the human brain. They’re built during a training phase in which millions of labeled images teach an algorithm to recognize human faces, dogs, or trees, for example. These systems analyze images in several steps, or layers: first finding objects, then identifying them as, say, trees or people, then recognizing a known person, then figuring out whether that person is angry or distressed. This typically requires a lot of power-draining data transfers.
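As a rough sketch of that layered analysis (not Movidius’s software, just a generic illustration, with a hand-picked edge filter standing in for the millions of learned ones), a single convolution, nonlinearity, and pooling step might look like this:

```python
import numpy as np

# Bare-bones sketch of layered image analysis: one convolution, a
# nonlinearity, and a pooling step. Real networks stack dozens of such
# layers and learn their filters; the 3x3 filter here is a stand-in.

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation) on one channel."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

image = np.random.rand(32, 32)                     # tiny grayscale input
edge_filter = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)  # Sobel-like filter

features = conv2d(image, edge_filter)              # layer 1: find local patterns
features = np.maximum(features, 0)                 # ReLU nonlinearity
pooled = features[::2, ::2]                        # crude 2x downsampling ("pooling")
print(image.shape, "->", features.shape, "->", pooled.shape)
```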

The Fathom, which holds Movidius’s Myriad 2 chip, does things differently. The Myriad 2 uses twelve parallel processors, each with a dedicated memory bank. “We have a mindset of not pushing the clock speed,” says Cormac Brick, head of machine learning at Movidius. Instead of running one or a handful of processors harder to do calculations faster, the designers chose to run twelve in parallel. The chip also saves power by minimizing data transfers. “We keep the data really close to where it’s being processed—sometimes moving the data can use more power than processing the data,” says Brick. This strategy is similar to those used by research groups designing mobile processors of this type, but it will be the first of its kind to market, says Brick.

The accelerator can run neural networks like GoogLeNet at 1 watt. NVIDIA’s TX1 runs on a minimum of 4 watts, by comparison, and draws 1 W even when it’s idling. The Myriad 2 idles at 0.12 W. What’s more, the Fathom doesn’t require a heat sink or any other cooling system, which is part of why it’s small enough to fit on a USB stick.

Brick says this mobile-friendly system should make it practical to run neural networks in more places. A Fathom stick loaded with the right neural net could help individual surveillance cameras flag problems that it usually takes a human watching the footage to notice, such as alerting a home user that his father has fallen and can’t get up, or alerting airport security staff that someone is acting aggressively.

Other companies like Nervana Systems want to put deep learning in the cloud. For people concerned about privacy, running neural nets in a mobile system will be preferable to uploading video footage of their home into the cloud for analysis, says Brick. The speed of on-board neural nets will also help robots and drones navigate. The Fathom system could help them more quickly respond when an obstacle moves into their path, without any of the inescapable latency resulting from data being sent back and forth.

In a Movidius press release, Yann LeCun, director of AI research at Facebook, praised the device, saying, “every robot, big and small, can now have state-of-the-art vision capabilities.”

The Fathom is not aimed at the consumer who wants a less clumsy drone or a smarter home security system, but at the companies and researchers developing such products. Users will have to know something about building embedded systems. Brick says it’s compatible with Raspberry Pi, drone kits with open APIs, and kits for security cameras and robots. It can run neural nets based on the open-source software libraries TensorFlow and Caffe. Users developing new neural nets can also use the Fathom with their personal computers to more quickly test their prototypes.
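For a sense of where that prototyping workflow starts, here is a minimal sketch of the kind of small TensorFlow image classifier a developer might define before worrying about deployment. It uses today’s tf.keras API rather than the 2016-era tooling, and the layer sizes and ten-class output are arbitrary placeholders, not anything tied to Fathom.

```python
import tensorflow as tf

# Sketch of a small image classifier of the sort a developer might prototype
# in TensorFlow before porting it to an embedded accelerator. Layer sizes
# and the 10-class output are arbitrary placeholders.

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 placeholder classes
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()   # inspect size and parameter count before deployment
```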

Movidius will send about 1,000 of the Fathom sticks to researchers at universities and companies, and encourages university teams who want to try it out to get in touch through its website. After distributing the neural-net-on-a-stick on a case-by-case basis for a while, the company will launch the product more broadly. The company says the USB stick will sell for under $100 at launch.
