The 2017 Hackaday Prize launched earlier this week. And the general theme is similar to what it’s been since the contest’s beginnings in 2014: “Build something that matters.” But the organizers are especially keen this year to inspire folks to design things that could turn into bona fide products—and profitable businesses.
“A business plan is part of engineering these days,” says Mike Szczys, Hackaday’s managing editor, who explains that the 2017 contest will include a US $30,000 prize for Best Product. To win that, you’ll have to provide a detailed bill of materials for your gizmo, show that it can be readily manufactured, and provide a compelling business plan. The three working prototypes of your brainchild you submit by 24 July will also have to impress the hands-on judges, who will include a hardware engineer, a product engineer, and several Hackaday.com editors.
That’s a lot to ask an aspiring garage tinkerer to pull together in a few months, but contestants aren’t restricted to submitting freshly minted ideas. Indeed, you could have been working on your concept—even selling commercial versions of the end product—for quite some time and still qualify. The criterion Hackaday will use is not how long the project has been around but how much money has been invested in it. If that figure is less than $2 million, the enterprise qualifies to compete for Hackaday’s Best Product prize in 2017.
Organizers didn’t offer this category in last year’s competition, although they had done so the year before. The 2015 competition drew fewer contestants for Best Product than desired, says Szczys, likely because already-existing products or projects were excluded.
Szczys hopes this year’s competition might inspire some folks who have already been working to develop and market a product to share with others the details of their entrepreneurial journeys. “Successful companies aren’t writing about the things that make them successful,” he says. The project documentation that Hackaday Prize contestants must provide could help to fill the gap.
And there are plenty of ways for you to compete even if your aspirations fall short of commercializing what you dream up. In addition to the aforementioned best-product prize, there will be a $50,000 grand prize, second- through fifth-place prizes that range from $20,000 to $5,000, and $1,000 prizes for the top 20 projects in each of five categories: “Design Your Concept,” “Internet of Useful Things,” “Wheels, Wings, and Walkers,” “Assistive Technologies,” and a catch-all called “Anything Goes.”
All in all, there will be $250,000 in cash prizes awarded to more than 100 entries by November, when Hackaday convenes its “superconference,” likely at the Pasadena design lab of SupplyFrame, its parent company. So gentlemen (and ladies), start your soldering irons, 3-D printers, and CNC machine tools. The game is afoot!
Opening around the world this weekend is Life, a “hard” science fiction movie set in the near future, starring Jake Gyllenhaal, Ryan Reynolds, and Rebecca Ferguson. The story begins when a sample-return mission from Mars brings its payload back to a somewhat expanded International Space Station. A laboratory has been set up on the ISS to analyze the Martian samples in a bid to avoid any potential cross-contamination with terrestrial organisms [PDF]. Much to the joy of the crew, a dormant organism is found in the samples, proving that there is life beyond Earth. Very soon, however, things turn nasty and the crew starts pining for a return to cosmic loneliness.
Life was written by Rhett Reese and Paul Wernick, who came to prominence with last year’s unexpected smash hit Deadpool, which followed their earlier success with 2009’s Zombieland. IEEE Spectrum’s Stephen Cass talked with Reese and Wernick about how they used the real experiences of astronauts living and working in space to inform the plot of their latest movie and distinguish it from trapped-with-a-nasty-E.T. classics such as Alien or Pitch Black. (Very mild spoilers below. The conversation has been edited for concision and clarity.)
Accelerometers introduced smartphone users to many handy new features—recording the distance you walk and automatically rotating the view when you turn the phone sideways, to name just two you probably used today. Though these sensors are generally quite helpful and accurate, computer scientists from the University of Michigan have just found a way to scam them.
A research team figured out that they could fool accelerometers using sound waves—in particular, a single tone played at an accelerometer’s resonant frequency. With it, they can cause two signal processing components within the phone to generate a false report of the accelerometer’s behavior. The group is led by Kevin Fu of the University of Michigan and includes collaborators from the University of South Carolina.
Patrick McDaniel, a security researcher at Pennsylvania State University, says the security risk of the particular scheme devised by Fu’s group is low. But the broader problem is a big one in the industry: Devices and software programs tend to blindly trust any data gathered from built-in sensors.
On Tuesday, the Industrial Control Systems Cyber Emergency Response Team of the U.S. Department of Homeland Security issued a public alert about the findings.
The Michigan group’s work underscores the point that any device that relies on data from a sensor to make a critical decision can potentially be led astray by that sensor. Besides smartphones, accelerometers are also used to activate airbags in motor vehicles, and to measure the rate and depth of chest compressions during CPR.
“If you're trusting your sensor inputs and you have no way to validate those inputs, you're going to have problems,” McDaniel says.
The University of Michigan team tested 20 models of capacitive microelectromechanical systems (MEMS) accelerometers from five manufacturers: Bosch, STMicroelectronics, InvenSense, Analog Devices, and Murata Manufacturing. (The model numbers are listed here.)
They found that 75 percent of the accelerometers could be fooled by an attack that allowed them to slightly alter the sensors’ signals for a brief moment, and 65 percent were vulnerable to a more severe attack that allowed the team to control their signals indefinitely.
Fu says he hopes the work—which he calls a proof of concept—will start a conversation in the industry: “We need to question, why do we trust our sensors?” he says.
Their trick was possible because sound waves impart a physical force to any object they encounter. The University of Michigan team essentially used the vibrations produced by sound waves to alter accelerometers’ records of what was happening to them.
An accelerometer contains a physical mass suspended on springs. When a device moves, the mass moves too, and that motion changes the capacitance—the ability to store charge—between the mass and fixed electrodes around it. The sensor reads this change in capacitance as acceleration. By producing vibrations through sound waves that moved that mass in a particular way, the group launched a series of attacks on the unsuspecting sensors.
The group first had to identify the resonance, or preferred frequency, of each accelerometer. At the resonance frequency, each sound wave reinforces the action of the previous one on the mass—leading to a much larger signal than you’d get at other frequencies. To find the resonance of the accelerometers, the team played tones at progressive frequencies from 2 kilohertz to 30 kilohertz, until they found a frequency where the accelerometer produced an outsized reaction.
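That frequency sweep is easy to sketch as a toy simulation: treat the proof mass as a driven, damped harmonic oscillator and look for the drive tone with the largest steady-state response. The 19-kilohertz resonance and quality factor below are invented, illustrative values, not measurements from any sensor the team tested.

```python
import numpy as np

def steady_state_amplitude(f_drive, f0, q):
    """Steady-state amplitude of a driven, damped mass-spring system
    (unit force amplitude) -- the same physics as a MEMS proof mass."""
    w, w0 = 2 * np.pi * f_drive, 2 * np.pi * f0
    return 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (w0 * w / q) ** 2)

def find_resonance(f0, q, f_start=2e3, f_stop=30e3, steps=2801):
    """Sweep tones from f_start to f_stop (the paper's 2-30 kHz range)
    and return the drive frequency with the largest response."""
    freqs = np.linspace(f_start, f_stop, steps)
    responses = steady_state_amplitude(freqs, f0, q)
    return freqs[np.argmax(responses)]

# A hypothetical accelerometer resonant near 19 kHz:
print(find_resonance(f0=19e3, q=30))
```

At the resonant frequency, the response dwarfs that at neighboring frequencies, which is exactly the outsized reaction an attacker sweeps for.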
Next, the team subjected the sensors to two types of attacks using sound waves at the resonant frequency. The first, called output biasing, exploits a feature of the low pass filter, a signal processing component that filters out high frequency interference. This technique can be used to slightly alter readings produced by the accelerometer for several seconds.
The second, called output control, takes advantage of the phone’s amplifier, which typically handles the raw signal even before it reaches the low pass filter. This method can be used to take control of the accelerometer indefinitely and produce false signals.
To show that it was possible to spoof accelerometers with these techniques, the group used each method to spell out “WALNUT” in a chart of the sensors’ acceleration over time.
Next, they wanted to use sound waves to hack actual devices, so they reached for a Samsung Galaxy S5, which comes with an MPU-6500 accelerometer from InvenSense. They loaded a music video with the accelerometer’s resonant frequency embedded in it, and remotely prompted the phone to play the video.
At the same time, they ran a game on the phone called Spy Toys that relies on the accelerometer to control a toy car. While the video played, the toy car accelerated or decelerated in accordance with the pulses of the signal they had embedded in the video.
In their final demo, they used an off-the-shelf speaker to play a tone that caused a Fitbit to log 2,100 steps in just 40 minutes, earning them 21 reward points on a health tracking site (they declined to cash in their points, citing ethical concerns).
Though these scams are certainly possible, they are not subtle—the attacker must be within close range of the device they wish to target, and has to know the model and resonance of the accelerometer inside.
In the group’s own example, an attacker would have to stand behind the owner and blare an audio track to take control of the Spy Toys car, or somehow prompt the owner’s phone to start playing the resonance frequency—perhaps by sending them to a website that automatically plays the track once they arrive.
“It falls into that kind of cool, but not something that would keep me awake at night, type of vulnerability,” McDaniel says.
Still, to prevent any issues, Fu suggests accelerometer designers choose a resonance in the ultrasound range, which is more difficult to generate with off-the-shelf speakers. And encasing devices in foam is a good way to stop sound waves from reaching a device’s accelerometer, though not always practical.
Through the University of Michigan, the research team is also attempting to sell manufacturers software that it says can prevent such attacks in products that are already on the market.
Just in case, McDaniel thinks entrepreneurs and consumers should be wary of turning too many decisions and responsibilities over to any devices that rely on sensor data, until the industry figures out how to better validate that data.
“If you're using that sensor input for a security critical decision, well then that's something we really need to worry about,” McDaniel says. “If we can't be sure they're trustable, we need to limit the kind of security decisions we're making off of them.”
A tiny company based in Barcelona is promoting a new technology that it hopes can revolutionize smartphone antennas—by removing them altogether.
The job of any smartphone antenna is to radiate a radio-frequency signal generated by the phone’s transmitter out to the nearest cellular base station or Wi-Fi router. Now, Fractus Antennas wants to replace that antenna with a much smaller component called an antenna booster—a tiny lightweight cube made of a metal frame and FR-4 epoxy, the same material used in printed circuit boards.
The company says it can use this booster, along with some modifications to the smartphone’s circuitry, to radiate RF signals exclusively from a device’s ground plane—with no dedicated antenna to speak of. According to the company, this approach can deliver performance comparable to today’s smartphone antennas, at a lower cost for manufacturers.
During this year’s Mobile World Congress, the smartphone industry’s largest trade show, Carles Puente, Fractus Antennas’ cofounder and vice-president for innovation, quietly wandered the exhibit halls and handed out samples of this antenna booster from his satchel to any smartphone maker who might be able to use it.
Back at his office near Barcelona, Puente compared it to a few antennas found in smartphones over the past decade. He pulled several devices from storage that were left over from the time his first company, Fractus, which also specializes in antennas, sued 10 manufacturers for patent infringement. He and his staff broke open more than 600 smartphones to build their case, so they’ve seen more than their share of internal antennas.
To explain how the booster technology from his new company, Fractus Antennas, works, he first showed me antennas from a 2008 Blackberry Pearl and a Pantech C740 from the same year. With the casing removed, it was easy to see that both models had what looked like a tangle of metallic lines toward the top of the device. Those squiggly patterns were mounted to plastic structures that gave the antennas a shape designed to help them radiate energy most effectively.
Puente told me these antennas are all inspired by fractals, a type of design in which similar patterns repeat themselves at various sizes. Fractals are naturally found in broccoli stalks and tree branches. With a fractal-based design, smartphone manufacturers can use all or just part of an antenna to provide service across many frequency bands.
For example, one of the longest wavelengths that smartphones must support is for the 698-megahertz frequency, where waves measure 430 millimeters long. And because the size of a radio wave corresponds to the size of the antenna needed to transmit it, longer wavelengths require larger antennas.
A typical smartphone antenna might be only 40 to 60 mm long, so transmitting waves that long requires the antenna’s entire surface. Since smartphones must also provide service across five or six other frequencies, smaller chunks of the same antenna are used to transmit those shorter wavelengths.
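The arithmetic behind those sizes is simply wavelength = speed of light / frequency. A quick sketch (the higher band choices are illustrative):

```python
# Wavelength, and the quarter-wave resonator length antenna designers
# often reason about, for a few cellular frequencies.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz):
    return C / freq_hz * 1000

for f in (698e6, 1.9e9, 2.69e9):
    lam = wavelength_mm(f)
    print(f"{f/1e6:.0f} MHz: wavelength {lam:.0f} mm, quarter-wave {lam/4:.0f} mm")
```

Even a quarter-wave resonator at 698 MHz (about 107 mm) is longer than a 40-to-60-mm phone antenna, which is why the antenna’s whole structure gets pressed into service at the low band.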
Fractus Antennas is a spinoff of Puente’s first company, which patented the use of fractal-based antennas in smartphones (and eventually filed that 2009 lawsuit for patent infringement). For many years, those were the dominant type of antenna found in smartphones.
More recently, manufacturers have moved away from fractal-based antennas and simply placed a metal band along the top of the smartphone to serve as an antenna. But one drawback of these metal bands is that they can’t easily support multiple frequencies at the same time on their own.
Manufacturers must add another part—an active tuner—to generate signals at the frequencies required for carriers around the globe. Still, this tuner is best at providing service at either one band or another, rather than over multiple bands at once.
Meanwhile, the industry is moving toward interband carrier aggregation, in which a device combines spectrum from several frequency bands to build a channel with more bandwidth than would otherwise be available. If metal band antennas can’t simultaneously provide service across bands, they may not be very useful as carrier aggregation becomes more popular.
This is where Fractus Antennas’ new, “antenna-less” smartphone technology comes in. Instead of relying on a dedicated antenna to radiate an RF signal, a handset would radiate the signal directly from the ground plane, which is the copper layer that underlies the phone’s printed circuit board. To do this, the phone’s manufacturer would replace the antenna with Fractus Antennas’ mXTEND Antenna Booster, a small device roughly one-tenth the size of a traditional antenna.
It works like this: Once the transmitter generates a signal, it travels through the matching network, which is a part of the smartphone that acts like a tuner to support service at various frequencies. From there, it travels to both the booster and the ground plane. The booster is a passive device that does not radiate at all. Rather, it temporarily stores the signal it receives and repeatedly bounces it over to the ground plane, which radiates it out.
Already, today’s smartphones use the ground plane to radiate a portion of the signal that smartphones produce. To prevent interference, their circuit boards incorporate shields to protect elements that may be vulnerable. However, the Fractus Antennas concept takes this to the next level by using the ground plane to produce all of the radiation that is broadcast to the cell tower or Wi-Fi router. “Instead of having an antenna that radiates inside the phone, the phone itself is radiating,” Puente says.
The antenna booster does require a slightly more complicated matching network than usual. Puente says the matching network of a phone with an antenna booster would include six or seven components rather than the one or two found in a smartphone today—and the network must be redesigned for each model.
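For a sense of what a matching network does, here is the simplest case, a single-band L-section match. The 50-ohm source, 10-ohm load, and 900-MHz frequency are hypothetical stand-ins, and a real booster network would chain six or seven such elements to cover multiple bands, as Puente notes.

```python
import math

def l_match(r_source, r_load, f_hz):
    """Component values for an L-section impedance match, stepping a
    higher source resistance down to a lower load resistance."""
    q = math.sqrt(r_source / r_load - 1)  # network Q (assumes r_source > r_load)
    x_series = q * r_load                 # series reactance, ohms (inductor)
    x_shunt = r_source / q                # shunt reactance, ohms (capacitor)
    l_henry = x_series / (2 * math.pi * f_hz)
    c_farad = 1 / (2 * math.pi * f_hz * x_shunt)
    return l_henry, c_farad

l, c = l_match(50, 10, 900e6)
print(f"L = {l*1e9:.2f} nH, C = {c*1e12:.2f} pF")
```

Each additional band needs its own reactances, which is roughly why the booster’s network grows to six or seven components and must be redesigned per phone model.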
Fractus Antennas is now selling several versions of its antenna booster, which can support cellular communications across 12 frequency bands (from 698 MHz to 2690 MHz) and can also be adapted for Wi-Fi and Bluetooth. The company launched in 2015, and its first sales came in 2016.
So far, Fractus Antennas has sold hundreds of thousands of units to a dozen clients who are using them to track fleets of trucks and outfit sensors for smart metering, among other things. Right now, it costs Fractus Antennas about US $1 to produce each unit, but Puente expects they could reduce that cost considerably by producing higher volumes.
The company is not claiming that the booster improves performance; in the company’s tests, it has shown its performance to be similar to that of today’s smartphones. Puente believes its main selling point will be the money that smartphone makers can save by never having to design and manufacture their own antennas again.
If manufacturers sign on, Puente predicts that it may be 2018 before Fractus Antennas’ technology is available in a smartphone. That, he says, is thanks to the devices’ long development cycles. The 15-person company is an underdog in an industry that generates annual revenues of more than US $400 billion worldwide. But as Puente learned from his first company, a few strong patents can take a company far.
File this under “fun to think about”: Researchers at the University of Vienna have shown how ultraprecise measurements of time can bend the surrounding space and make time in the region run slower.
The basic theory is surprisingly simple, a combination of Einstein’s mass-energy equivalence and Heisenberg’s uncertainty principle. In short, increasing the precision in the time measurement increases the uncertainty in the energy at that point. Since energy and mass are interchangeable, this is the same as creating a virtual mass. As the uncertainty in the time measurement falls, the “mass” increases. And as the mass increases, so does its gravity. The result is a regional gravitational time dilation—the effect that causes clocks on Earth to run slower than clocks on Global Positioning System satellites, for example.
To be sure, the effect cannot be produced or detected with today’s tools. In their paper in the Proceedings of the National Academy of Sciences, the Vienna researchers—Esteban Castro Ruiz, Flaminia Giacomini, and Časlav Brukner of the university’s Vienna Center for Quantum Science and Technology—measure the time dilation effect in “decoherence time”: The greater the warping of space-time, the smaller the decoherence time. Today’s best atomic clocks are accurate to within about 3 × 10⁻¹⁸ (a dimensionless number, 3 parts in a quintillion). Measurements to this level of accuracy produce a “mass” equal to about one ten-millionth of a proton’s mass. At a distance of one or two nuclear diameters, the decoherence time is on the order of the lifetime of the universe. It’s nothing to worry about in the real world.
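If the clock accuracy of 3 × 10⁻¹⁸ is read as a timing uncertainty of roughly 3 × 10⁻¹⁸ seconds, the chain of reasoning described above (Heisenberg’s uncertainty relation, then mass-energy equivalence) reproduces the quoted mass scale. This is a back-of-envelope sketch, not the paper’s actual calculation:

```python
# Virtual mass implied by a timing uncertainty dt:
# Heisenberg gives dE ~ hbar / (2 dt); mass-energy equivalence gives m = dE / c^2.
HBAR = 1.054_571_8e-34      # reduced Planck constant, J*s
C = 2.997_924_58e8          # speed of light, m/s
M_PROTON = 1.672_621_9e-27  # proton mass, kg

def virtual_mass_kg(dt_seconds):
    dE = HBAR / (2 * dt_seconds)  # energy uncertainty, joules
    return dE / C**2              # equivalent mass, kilograms

m = virtual_mass_kg(3e-18)
print(f"{m / M_PROTON:.1e} proton masses")  # ~1.2e-07, one ten-millionth of a proton
```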
Clock accuracy is increasing rapidly, however, and the pace of improvement is accelerating as optical clocks and optical-lattice clocks come online. The precision of optical-lattice clocks, in particular, has increased by about four orders of magnitude in the past decade. If clock accuracy climbs to about 10⁻²⁷ (less than 15 years away at the current rate of improvement), the mass uncertainty grows to about 7 × 10¹¹ electronvolts (some 350 times the mass of the proton), producing a decoherence time on the order of two minutes at a distance of 10⁻¹⁰ meters. At this scale, the effect might become detectable.
"Our findings suggest that we need to re-examine our ideas about the nature of time when both quantum mechanics and general relativity are taken into account", said Castro in the university’s statement on the work.
Austin, Texas—If you’ve ever wondered whether there is a single point of failure that could take the entire Internet down in one fell swoop, rest assured: experts (at least, experts participating in a panel discussion here at South by Southwest) say there is no such thing.
But even though the Internet does not possess a “kill switch”, so to speak, it does have plenty of vulnerabilities and limitations. Outages are commonplace, security is a perennial struggle, and governments can (and do) shut down access to specific services or even the entire Internet.
On Friday, the panel highlighted a range of limitations that still plague this network of networks and called for technically-minded people to participate in organizations such as ICANN to help improve resiliency, security, and connectivity.
As our networks grow with the Internet of Things, we could find ourselves more vulnerable to DDoS attacks, said Shane Tews of the American Enterprise Institute. “If we don’t have a good concept of how we’re going to be able to manage that traffic for good or for bad, that has the ability of being Dyn 100x,” Tews said. “There are people who do think that it was really just a shot across the bow as to what could be coming.”
Tews noted that there are changes afoot to try to make the system more secure. In 2010, the Domain Name System Security Extensions system was deployed. “The idea of domain name security…is basically to put a lock on a system that’s always been wide open,” Tews said, a holdover from the early days of the Internet when everyone knew one another. The master key for this system will be “rolled over”, or changed, for the first time in October.
“There are certain components of internet infrastructure that are not as resilient as others,” said Christian Dawson, executive director of the Internet Infrastructure Coalition, or i2Coalition. But he said challenges like attacks only serve to make the system more robust: “I don’t think it’s getting more risky. I think when people figure out how to push the right buttons to bring certain components down, it just makes us better at...realizing that taking the steps to get more resilience are necessary.”
The greatest stresses to the system are policies, not technology, he said: “My issue always comes down to the people, and that’s why we’re heavily involved in internet governance issues, making sure the right people are at the table so that people don’t make the wrong decisions because they don’t have the right information.”
“Almost all the challenges to the Internet are human,” Tews said, highlighting a YouTube censorship order in Pakistan in 2008 that ended up stretching well past the country’s borders.
Recently, said fellow panelist Matt Perault, such large-scale shutdowns seem to be less frequent, but in their place are smaller-scale “blocks”, such as shutting down access in a particular region of India while students there are taking exams.
This sort of interference doesn’t garner the headlines that Egypt’s shutdown did. But it adds up. A report [PDF] from the Brookings Institution last year highlighted 81 short-term shutdowns between 1 July 2015 and 30 June 2016, and concluded that together the outages cost upwards of US $2.4 billion in gross domestic product.
“My main concern right now is [that we are] moving toward a world where there [are] increasingly sophisticated small-scale blocks,” Perault said. “I would assume that the thing we would be most scared of would be a government being able to turn off your access to one particular product for 15 minutes. Because the ability to do that on a large scale might impose enough friction into your ability to access certain services that it would change your relationship to the Internet, and it would be very hard to diagnose.”
Countries that do not make access to the Internet a priority are also a limitation, Tews said. “Even though it’s not a kill switch, it’s certainly a killjoy.”
Then there is the power sovereign countries can exercise to hinder, prevent, or monitor the exchange of information. “There’s been a lot of talk of Internet fragmentation,” said Christopher Mondini of ICANN. But he said even restrictive countries and governments agree “that the Internet should remain an interconnected system with just one addressing system, and that a platform of ICANN and its multi-stakeholder discussion should be supported to maintain the interconnectivity of all of the networks.”
Mondini said there are a number of organizations, such as the Internet Society and its Internet Engineering Task Force as well as network operator groups, by which individuals can participate in setting the Internet’s path going forward: “You can find your people, and you can get involved and shape the future of the Internet, which is pretty exciting.”
If you’re heading to Austin this weekend for the festival of tech that is SXSW Interactive, you may have already browsed the massive conference schedule and been completely overwhelmed.
We’re here to help you find the signal in the noise. For guaranteed intellectual stimulation and geeky insights, come to IEEE Spectrum’s three panel discussions. They’re part of IEEE’s Tech for Humanity series of events, which includes not just talks but also meetups and a party on Sunday night (#partylikeanengineer).
Sunday March 12, 11 am – 12 pm, JW Marriott, Salon 5-6
On Sunday morning, come hear “father of the Internet” Vint Cerf describe his vision of a fully connected planet. IEEE Spectrum editor-in-chief Susan Hassler will interview Cerf about his People-Centered Internet project, which aims to bring 3 billion people online.
Monday March 13, 12:30 pm – 1:30 pm, JW Marriott, Salon 8
Bioengineers can now cobble together strings of DNA from scratch, enabling them to alter organisms or potentially build weird new ones. On Monday afternoon, IEEE Spectrum’s biomed editor Eliza Strickland moderates a debate on the future of synthetic biology. Andrew Hessel, an engineer at Autodesk, has proposed a genetics “moonshot” project to construct a completely synthetic human genome. Marcy Darnovsky, director of the Center for Genetics & Society, worries that this new kind of genetic manipulation may lead to high-tech eugenics.
Tuesday March 14, 11 am – 12 pm, JW Marriott, Salon 7
In 1965, Gordon Moore famously predicted that integrated circuits would get more powerful at an exponential rate as engineers found ways to jam more transistors onto the chips. But are we finally reaching the physical limit of Moore’s Law? IEEE Spectrum editor and semiconductor expert Rachel Courtland will host a lively discussion with experts who are envisioning radically new ways to improve computing power.
The latest WikiLeaks dump rattled the Internet this week with documents that appear to indicate the U.S. Central Intelligence Agency found a slew of software vulnerabilities that it potentially used to break into Apple and Android devices, and even turn Samsung smart TVs into secret microphones to eavesdrop on owners.
Those accusations have raised questions from security researchers about whether the U.S. government is following through on its commitment to responsibly disclose new software vulnerabilities (also known as zero-days) that it discovers in consumer products. The WikiLeaks documents show that the CIA appears to have built secret software programs to exploit many vulnerabilities that the agency found. Based on those records, the CIA possessed at least 14 Apple iOS exploits and around two dozen for Android devices, plus others that targeted Microsoft Windows and Linux.
Whether or not the CIA should disclose software vulnerabilities—and how quickly it should make a disclosure—is a complicated issue known as the “equities” problem. On the one hand, doing so allows companies to patch them, which protects citizens from nefarious parties who find the same vulnerabilities. On the other hand, this means the CIA loses its ability to gather intelligence through these holes.
Around the globe, telecom companies are placing bets on which technologies will define 5G, the next generation of wireless and cellular networks. Many have already published results from early lab tests and field trials. Now, executives are eager to move into real-world pilots and, gradually, to commercial deployments.
SK Telecom in South Korea, which has aggressively invested in 5G research, is considered an industry leader among carriers. The company serves 29 million customers, many of whom already receive some of the fastest and most reliable 4G coverage in the world. Those customers are densely packed into cities within a small country, which makes it easier for the company to deploy new equipment or upgrade its existing network.
Competitive forces are also at work. One of SK Telecom’s fiercest competitors, KT Corporation (formerly Korea Telecom) has pledged to launch 5G service in time for the upcoming Winter Olympics in South Korea; it was named the official telecom sponsor of the games, which will kick off in February 2018.
Last week at Mobile World Congress, Jong Han Park, a manager at SK Telecom’s 5G Tech Lab, spoke with IEEE Spectrum about the company’s outlook for 5G in 2017 and beyond. Park says that by the end of the year, SK Telecom will launch a 5G trial that will move out of the lab and into the real world, so that customers can directly experience new 5G service.
Anyone who follows 5G knows that SK’s plan could mean a lot of different things. Unlike the 4G LTE networks that many people primarily rely on for cell service today, 5G, engineers hope, will be used in many other ways.
SK Telecom’s first real-world trial could be to provide service for connected cars. It might bring fixed wireless access to people’s homes. Or it may deliver faster mobile service to smartphone customers. Right now, SK Telecom itself doesn’t even seem to really know. Park says the company is still working out the details, and that these possibilities “are all candidates” for the trial.
During the interview, he also hinted that early 5G service in South Korea might look very different from the first deployments planned in the United States and elsewhere. Those U.S. deployments will primarily involve fixed wireless access, in which Internet service is beamed over the air from a base station to an antenna in someone’s home.
Instead of going that route, Park says SK Telecom will take cues from its customers, who may prove most interested in paying for applications such as livestreaming virtual reality or using telepresence features. It's not yet clear how many customers might have access to the company’s 2017 trial.
“We think fixed wireless access is one of the services, but we call 5G a mobile network service—just like 4G,” Park says. “Without a mobile network, it’s not 5G. So we are thinking of mobility as one of the keys, even in the very initial phase of developing 5G trial services.”
Since international standards for 5G are still being set, any technologies used in the company’s 2017 trial will be considered pre-standard. SK Telecom intends to feed the results into the standards-setting process for 5G New Radio, the term for the new radio-to-air interface that will someday appear on base stations and wireless devices.
Park says his company will model the trial on the latest developments around 5G New Radio, which is scheduled to be more narrowly defined this year by the 3rd Generation Partnership Project. Once standardization is complete, Park estimates SK Telecom will need one year to test any new technology before it can be deployed, and he hopes to be ready to commercialize at least some 5G services in 2019.
Park says that one of the key 5G technologies that SK Telecom is developing is network slicing—a virtual technique that allows carriers to split up their network into more specialized pieces that can provide specific services.
For example, connected cars will most likely require a network with a lot more redundancy than tiny sensors that occasionally transmit data from a factory floor. With network slicing, a carrier would be able to reserve part of its network for cars, and another part for sensors—without risking interference between the two.
Park says this approach will enable SK Telecom to divide its network into “micro-functions” that it can offer to clients depending on their needs, instead of designing one massive 5G network to serve everyone. “The future network will be a lot more programmable than it is today, and a lot more dynamic,” he says.
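The slicing idea above can be sketched as a data structure: virtual partitions of one physical network, each with its own service guarantees, and a routing rule that assigns clients to the cheapest slice that meets their needs. The slice names and numbers here are invented for illustration, not SK Telecom parameters.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    """One virtual partition of the physical network."""
    name: str
    max_latency_ms: float   # latency this slice guarantees
    bandwidth_mbps: float   # capacity reserved for the slice

slices = [
    Slice("connected-cars", max_latency_ms=5, bandwidth_mbps=50),
    Slice("factory-sensors", max_latency_ms=500, bandwidth_mbps=1),
]

def pick_slice(latency_need_ms):
    """Route a client to the least-provisioned slice meeting its latency need."""
    ok = [s for s in slices if s.max_latency_ms <= latency_need_ms]
    return min(ok, key=lambda s: s.bandwidth_mbps) if ok else None

print(pick_slice(10).name)   # a car needing 10 ms gets the low-latency slice
print(pick_slice(600).name)  # a sensor tolerating 600 ms gets the cheap slice
```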
Last year, SK Telecom announced that it was teaming up with Ericsson and Qualcomm on a 2017 trial for the yet-to-be-defined 5G New Radio, and shared information on some of the technologies the partners intend to test. They will start with MIMO, or multiple input multiple output, which is a fancy way of saying that more antennas will work together to both transmit and receive signals.
In addition to MIMO, they will also use a modulation technique known as orthogonal frequency-division multiplexing (OFDM) that is already common in Wi-Fi. With it, a data stream can be diced up and transmitted over several channels on different frequencies at once.
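The core of OFDM can be shown in a few lines: a block of data symbols is laid across orthogonal subcarriers with an inverse FFT, sent as one time-domain waveform, and recovered with a forward FFT. This toy link uses 8 subcarriers over a perfect channel; real systems use hundreds of subcarriers plus cyclic prefixes and equalization.

```python
import numpy as np

N = 8  # number of subcarriers (illustrative; Wi-Fi uses 52+)
rng = np.random.default_rng(0)

# One QPSK symbol per subcarrier: the data stream, diced up by frequency.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)

tx = np.fft.ifft(symbols)  # transmitter: combine subcarriers into one waveform
rx = np.fft.fft(tx)        # receiver: separate the subcarriers again

assert np.allclose(rx, symbols)  # every subcarrier's symbol recovered exactly
```

Because the subcarriers are mathematically orthogonal, each one can be read off independently even though they all occupy the channel at once.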
During Mobile World Congress, SK Telecom and Samsung also demonstrated five-band carrier aggregation on LTE-Advanced, which is a technique that borrows unused spectrum from other carriers in order to build larger bandwidths and deliver higher data rates to customers than would be possible if a carrier were to use its own spectrum exclusively. Park called this “a step towards 5G.”
IBM is building the first commercially available universal quantum computer, one it aims to have accessible via the cloud within the next five years to explore science and business applications, the company announced today.
Whereas classical computers switch transistors either on or off to represent data as ones and zeroes, quantum computers use quantum bits, or “qubits,” that, because of the surreal nature of quantum physics, can exist in a superposition of 1 and 0 at the same time. This lets a quantum computer work through many possibilities at once, so it can solve certain problems much faster than classical computers can.
In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the universe. IBM aims to construct a roughly 50-qubit system based on superconducting circuits. The upcoming quantum computer will be accessible online via IBM's Cloud platform.
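The 300-qubit claim is easy to sanity-check with integer arithmetic: a 300-qubit register spans 2³⁰⁰ basis states, which dwarfs the commonly cited estimate of about 10⁸⁰ atoms in the observable universe.

```python
# 2**300 basis states vs. the ~10**80 atoms in the observable universe.
states = 2 ** 300
atoms_estimate = 10 ** 80

print(states > atoms_estimate)  # the state space is vastly larger
print(len(str(states)))         # 2**300 is a 91-digit number, ~2 x 10**90
```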