Google Translate has become a quick-and-dirty translation solution for millions of people worldwide since it debuted a decade ago. But Google’s engineers have been quietly tweaking their machine translation service’s algorithms behind the scenes. They recently delivered a huge Google Translate upgrade that harnesses the popular artificial intelligence technique known as deep learning.
The deep-learning software driving the modern artificial intelligence revolution has mostly run on fairly standard computer hardware. Some tech giants such as Google and Intel have focused some of their considerable resources on creating more specialized computer chips designed for deep learning. But IBM has taken a more unusual approach: It is testing its brain-inspired TrueNorth computer chip as a hardware platform for deep learning.
The Five-hundred-meter Aperture Spherical Telescope (FAST), which had its official launch on Sunday, has already received astrophysical signals, China’s press agency, Xinhua, reports. The nearly 1.2-billion-yuan (US $180 million) project was spearheaded by the Chinese Academy of Sciences.
Like the 305-meter-wide dish of the Arecibo Observatory in Puerto Rico, FAST consists of a spherical reflector dish that collects radio signals and focuses them onto the receiver system suspended above it. But FAST, which was built in a natural hollow in southern Guizhou province, also boasts an active reflector surface: the triangular panels that make up its dish can be moved to form a smaller, transient reflector, allowing the telescope to focus on and track different locations in the sky.
According to the FAST site, the telescope will have double the raw sensitivity of the Arecibo Observatory. Among other things, it is expected to be able to hunt for the universe’s first stars, search for signals from an extraterrestrial intelligence, and enable the detection of new pulsars—the spinning remnants of dead stars—in our galaxy and others.
For more of a visual feel for the telescope, Rebecca Morelle of the BBC did a nice video tour, published in May.
One of the biggest constraints on exploration of the solar system is fuel. Spacecraft need fuel to get where they're going, and they need even more fuel in order to do what they're supposed to do once they arrive. Though energy (electricity) can be replenished for years (or even decades) with solar panels or RTGs, once you run out of reaction mass, your spacecraft is through. (If you're smart, you'll have deliberately crashed it into something well before then.)
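Why reaction mass is such a hard limit is captured by the Tsiolkovsky rocket equation, which ties the total velocity change a spacecraft can achieve to the ratio of its fueled and empty mass. Here is a minimal sketch; the spacecraft masses and specific-impulse figures are illustrative assumptions, not numbers from the article:

```python
import math

def delta_v(isp_s: float, m_wet: float, m_dry: float) -> float:
    """Tsiolkovsky rocket equation: achievable delta-v (m/s) given
    specific impulse (seconds) and wet/dry mass (kg)."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(m_wet / m_dry)

# Illustrative probe: 1000 kg dry, 500 kg propellant, comparing a
# chemical thruster (Isp ~ 300 s) with an ion engine (Isp ~ 3000 s).
chem = delta_v(300, 1500, 1000)
ion = delta_v(3000, 1500, 1000)
print(f"chemical: {chem:.0f} m/s, ion: {ion:.0f} m/s")
```

The logarithm is the punchline: doubling your delta-v budget requires squaring the mass ratio, which is why "just bring more fuel" stops working and why sourcing reaction mass that is already in space is so attractive.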
Given how expensive it is to lift anything into space from the surface of the Earth, the future of efficient and affordable space travel may be dependent on using the resources that are already up there. Space may seem big and empty, and it mostly is, but there's enough raw material floating around out there in the form of asteroids and comets to keep us going for eons. The trick is going to be rounding these asteroids up and bringing them somewhere they can be of use without spending so much fuel on the process that the effort is rendered pointless.
Late in the evening of Saturday, Sept. 17, New York City residents living in Manhattan’s Chelsea district received some unsettling news via their cellphones. An emergency alert informed them that there had been a bomb blast in their neighborhood, and that they should stay inside and away from their windows until further notice.
An attack based on an Improvised Explosive Device, or IED—a hallmark of the wars in Iraq and Afghanistan—had once again wreaked havoc in a major U.S. city. The Chelsea explosion, which injured 29 people, was one of a series of attacks and related events that ended with the arrest of a 28-year-old suspect in Linden, New Jersey, on Monday morning. The events began with a pipe-bomb explosion near the seashore in New Jersey on Saturday morning, and continued on Saturday night with the Chelsea bombing on West 23rd Street in Manhattan and the discovery of another bomb near the Chelsea site. On Sunday night, five other bombs were found in a backpack in Elizabeth, New Jersey. All of the bombs are believed to be the work of a single person.
For Col. Barry Shoop, head of the Department of Electrical Engineering and Computer Science at the U.S. Military Academy at West Point, the attacks were a grim confirmation of a long-held belief. “If we can’t solve this problem outside of the United States, we are going to see them [IEDs] inside the borders of the United States,” he said in an interview shortly after the arrest of the suspect. In a podcast interview with IEEE Spectrum in 2013, after an IED killed 3 and wounded 264 in Boston, Shoop pointed out that every month there were 400 to 500 IED “events” around the world, not including Afghanistan. (Shoop is also the current president of the IEEE.)
Brace yourself—the debate over what should and shouldn’t count as 5G has only begun. Defining the next generation of wireless networks is complicated, partly because engineers are developing so many exciting technologies at once and have yet to agree on the standards by which they will operate.
Within that murkiness is plenty of room for disagreement over how and where 5G will emerge. Stéphane Téral, an analyst at IHS Markit, recently weighed in with a research note criticizing the use of “5G” to describe sub-6-gigahertz developments.
Radio waves in the sub-6 GHz range are considered the most desirable among carriers for delivering cellular signals because they can penetrate materials such as concrete and glass. Two ranges in particular—frequencies around 800 megahertz and 1.9 GHz—have long dominated the U.S. cellular landscape.
But these frequencies are becoming crowded as more users consume more data on more devices. And a bevy of other consumer technologies including Wi-Fi, Bluetooth, microwave ovens, and satellite radio operate at frequencies between 1.9 GHz and 6 GHz. So carriers have begun to browse higher frequencies for open bands that they can co-opt for cellular use.
Many have set their sights on much shorter millimeter waves that fall between 30 and 300 GHz. There are plenty of frequencies available in the millimeter-wave range, because they’ve been used only for specialized applications such as remote sensing and military radar. But waves at these frequencies can’t travel as far or make it through as many obstacles, so companies and researchers are still figuring out what it would mean to integrate them into future 5G networks.
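The physics behind that range-versus-bandwidth trade-off is easy to sketch. Wavelength shrinks as frequency rises, and free-space path loss (the Friis formula) grows with frequency for a fixed distance. The 28 GHz and 60 GHz figures below are illustrative picks from the millimeter-wave range the article mentions, not bands any specific carrier is committed to:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d / lambda)."""
    return 20 * math.log10(4 * math.pi * dist_m / wavelength_m(freq_hz))

for f in (800e6, 1.9e9, 28e9, 60e9):
    print(f"{f/1e9:5.1f} GHz: lambda = {wavelength_m(f)*100:6.2f} cm, "
          f"FSPL at 100 m = {fspl_db(f, 100):5.1f} dB")
```

Going from 1.9 GHz to 28 GHz costs roughly 23 dB of extra free-space loss at the same distance, before counting absorption by walls and foliage, which is why millimeter-wave links lean on high-gain antenna arrays to claw that budget back.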
“Obviously, the low latency and high bandwidth stuff like AR and VR will definitely benefit from millimeter wave,” says Anshel Sag, a 5G analyst for Moor Insight & Strategy. “But the way the technology works right now, it’s still pretty power hungry and requires a complicated array of antennas.”
Given the situation, Téral says it’s not surprising that carriers are also focused on finding more efficient ways to deliver data on lower sub-6 GHz frequencies. They’re improving their networks through technologies such as multiple input and multiple output (MIMO), in which carriers add antennas to existing 4G base stations to handle more traffic from more users at once.
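A rough sense of why more antennas help comes from the textbook result that an idealized MIMO link with N independent spatial streams scales Shannon capacity roughly linearly in N. This is a deliberately simplified upper-bound sketch, assuming every stream sees the same SNR and a perfectly conditioned channel, which no real deployment achieves:

```python
import math

def mimo_capacity_bps_hz(n_streams: int, snr_linear: float) -> float:
    """Idealized spectral efficiency (bits/s/Hz) for n_streams parallel
    spatial streams, each at the same linear SNR. A textbook bound,
    not a model of any real base station."""
    return n_streams * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)  # 20 dB SNR
for n in (1, 2, 4, 8):
    print(f"{n} streams: {mimo_capacity_bps_hz(n, snr):.1f} bits/s/Hz")
```

The linear-in-N scaling is the appeal: unlike hunting for new spectrum, adding antennas multiplies throughput within the frequencies a carrier already owns.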
In fact, some companies have begun to concentrate their 5G efforts on these kinds of sub-6 GHz improvements. Chinese smartphone manufacturer Huawei has said that sub-6 GHz bands will be “the primary working frequency” for 5G, and Qualcomm recently announced a new 5G radio prototype focused on the same batch of frequencies.
But Téral is irked by companies that dub these developments 5G. He says only advancements at higher frequencies (those above 6 GHz) should count as “real 5G,” because they would represent a paradigm shift for improving data rates and latency on future wireless networks. He argues that sub-6 GHz improvements incorporated into existing 4G and 4G LTE networks are simply business as usual.
“The cellular guys want to use that spectrum to make a 5G claim, but this is not a dramatic move from where cellular is, from 700 MHz to 2.6 GHz,” he says. “You really want to call that 5G? It doesn't justify a generational jump.”
However, other experts say the importance of millimeter waves to 5G has been overstated, and key developments at lower frequencies, including the repurposing of TV white space, will play a significant role in enabling faster mobile connections, connected cars, and the Internet of Things.
Sag thinks it’s a mistake to rule out anything other than millimeter waves as true 5G. He says 5G New Radio, a wireless standard defined by the global wireless standards group 3GPP, should count as 5G no matter which frequencies it handles. Many others also envision future 5G networks as a blend of millimeter waves and sub-6 GHz technologies.
“I'm in the camp that doesn't believe that millimeter wave is the only way to do 5G,” Sag says. “In fact, I think it's the wrong way of doing 5G if you think of it as the only way of doing it.” Instead, Sag believes 5G will permeate every swath of spectrum, from the low frequencies used for NarrowBand IoT all the way up to high-frequency millimeter waves.
Téral admits progress in the sub-6 GHz range is an important first step in the “pre-5G” evolution of wireless. He also acknowledges that many of the potential uses that experts have dreamt up for 5G can and will be achieved through incremental improvements to 4G LTE networks. But he says he’d prefer to call those improvements “transitional 4G” instead. “There’s nothing new, and that’s the whole point,” he says.
To Sag, the matter of what counts as 5G is not just a theoretical debate: It could have a real impact on the trust that consumers place in carriers. “My biggest concern is kind of the same concern with 4G, in that the definitions get muddled and the consumers get confused,” he says.
Engineers at North Carolina State University and at Intel have come up with a solution to one of the modern microprocessor’s most persistent problems: communication between the processor’s many cores. Their answer is a dedicated set of logic circuits they call the Queue Management Device, or QMD. In simulations, integrating the QMD with the processor’s on-chip network at a minimum doubled core-to-core communication speed, and in some cases boosted it far more. Even better, as the number of cores was increased, the speedup became more pronounced.
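The QMD itself is hardware, but the job it takes over is familiar from software: cores typically exchange messages through shared-memory queues, and managing those queues in software costs cycles. As a loose software analogy (not the researchers' design), here is a minimal single-producer/single-consumer ring buffer of the kind whose bookkeeping such a device would move into dedicated silicon:

```python
from threading import Thread

class SpscRing:
    """Single-producer/single-consumer ring buffer: the style of
    software queue whose management a device like the QMD offloads.
    Illustrative sketch; real lock-free code needs memory fences."""
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.cap = capacity
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def push(self, item) -> bool:
        nxt = (self.tail + 1) % self.cap
        if nxt == self.head:        # queue full
            return False
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:  # queue empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.cap
        return item

q = SpscRing(8)
producer = Thread(target=lambda: [q.push(i) for i in range(5)])
producer.start(); producer.join()
print([q.pop() for _ in range(5)])  # -> [0, 1, 2, 3, 4]
```

Every push and pop here burns instructions and cache traffic on index arithmetic and full/empty checks; doing that bookkeeping in fixed-function logic next to the on-chip network is the speedup the simulations measured.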
Taking advantage of technology developed to manipulate light on chips, a team based in Spain and Italy has created an integrated circuit that can be used to generate true random numbers by taking advantage of the thoroughly unpredictable nature of quantum mechanics.
The compact approach, which might one day find its way into smartphones and tablets, could be a boon for engineers hoping to keep financial transactions and other communications secure. Random numbers are a vital ingredient in the encryption schemes we rely on to secure data, and they’re also a powerful tool in computational modeling.
Today’s conventional random number generation is done using computer algorithms or physical hardware. A chip-based random number generator can, for example, use analog or digital circuits that are sensitive to random thermal fluctuations to generate unpredictable strings.
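Raw bits harvested from a noisy circuit are rarely uniform, so hardware generators typically pair the noise source with a post-processing step. A classic example (illustrative here, not the scheme any particular chip uses) is von Neumann's debiasing extractor, which turns a biased but independent bit stream into an unbiased one at the cost of throughput:

```python
import random

def von_neumann_extract(bits):
    """Von Neumann debiasing: read non-overlapping bit pairs, emit
    0 for (0,1) and 1 for (1,0), discard (0,0) and (1,1). Output
    is unbiased whenever the input bits are independent."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated biased noise source: emits 1 with probability 0.8.
rng = random.Random(0)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
clean = von_neumann_extract(raw)
print(f"raw mean: {sum(raw)/len(raw):.3f}, "
      f"extracted mean: {sum(clean)/len(clean):.3f}")
```

The catch is the independence assumption: the extractor fixes bias but not correlation, which is exactly the weakness in classical sources that the next paragraph describes.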
But even if these sources look quite random, it’s practically impossible to prove they are perfectly so, explains Valerio Pruneri of the Institute of Photonic Sciences in Spain. If you wait long enough—perhaps far longer than you’d care to wait—you may ultimately find there are correlations between numbers, ones that would ultimately allow you to crack the random-number-generation scheme.
Systems that obey the rules of quantum mechanics, by contrast, could be impossible nuts to crack. “Quantum physics, by definition, is fully unpredictable no matter what,” Pruneri says. “There is no way that somebody can guess future numbers based on current information.”
Quantum random number generators are nothing new; there are even commercial systems available. But Pruneri and his colleagues decided to take aim at portability. They wanted to create something that could spit out random numbers at a high rate, but be small and energy-efficient enough that it could ultimately be integrated with microelectronics—perhaps in a package small enough to fit in a smartphone or tablet.
The chip they created takes advantage of standard fabrication techniques used to construct photonic integrated circuits. A small, pulsed indium phosphide laser is responsible for infusing the system with randomness. Below a certain energy threshold, a laser emits a small number of photons through a process called spontaneous emission, which creates light with random phase. This randomness impacts the ultimate phase of the light the laser emits when it’s above that threshold, once stimulated emission starts to dominate, Pruneri explains. The result is that, pulse to pulse, the laser light will have a random phase.
To transform these random phases into something usable, the pulsed light is mixed with light from a second indium phosphide laser on the chip. The phase of the first laser’s pulses ultimately determines how light from the two laser sources interferes, creating brightness differences that can be read out by a photodetector.
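The phase-to-brightness conversion is easy to sketch numerically: mixing two fields with a random relative phase φ gives a detected intensity proportional to 1 + cos φ, so a uniformly random phase spreads the pulse brightness across the full range. This toy model assumes equal-amplitude fields, an ideal detector, and a simple threshold at the median to turn each pulse into one raw bit; the actual chip's digitization scheme may differ:

```python
import math
import random

rng = random.Random(42)

def interfered_intensity(phase: float) -> float:
    """Normalized intensity of two equal-amplitude fields mixed with
    relative phase `phase`: |1 + e^{i*phase}|^2 / 4 = (1 + cos phase)/2."""
    return (1 + math.cos(phase)) / 2

# One random phase per laser pulse -> one random brightness value;
# thresholding at the median (0.5) yields one raw bit per pulse.
samples = [interfered_intensity(rng.uniform(0, 2 * math.pi))
           for _ in range(10_000)]
bits = [1 if s > 0.5 else 0 for s in samples]
print(f"fraction of 1s: {sum(bits)/len(bits):.3f}")  # close to 0.5
```

Because the phase of each pulse is set by spontaneous emission, the randomness of these bits is quantum in origin rather than merely hard to predict.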
This quantum “entropy source” can be used to produce random numbers at a good clip—in the realm of a gigabit per second. The work appears online today in the journal Optica.
Pruneri says the next step is to integrate the chip with conventional CMOS electronics to turn the output of the system into random numbers that can be used by software. Here too, he expects the team will take advantage of photonics integrated circuit manufacturing techniques that have been built up over the years, in particular a way of pairing silicon and other materials, called hybrid integration.
IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.