Tech Talk

A smartphone antenna rests on a platform in the middle of a chamber lined with large blue foam spikes and surrounded by a thick foam ring.

Fractus Antennas Pitches New “Antenna-less” Smartphone Technology

A tiny company based in Barcelona is promoting a new technology that it hopes can revolutionize smartphone antennas—by removing them altogether.

The job of any smartphone antenna is to radiate a radio-frequency signal generated by the phone’s transmitter out to the nearest cellular base station or Wi-Fi router. Now, Fractus Antennas wants to replace that antenna with a much smaller component called an antenna booster—a tiny lightweight cube made of a metal frame and FR-4 epoxy, the same material used in printed circuit boards.

The company says it can use this booster, along with some modifications to the smartphone’s circuitry, to radiate RF signals exclusively from a device’s ground plane—with no dedicated antenna to speak of. According to the company, this approach can deliver performance comparable to today’s smartphone antennas, at a lower cost for manufacturers.

During this year’s Mobile World Congress, the smartphone industry’s largest trade show, Carles Puente, Fractus Antennas’ cofounder and vice-president for innovation, quietly wandered the exhibit halls and handed out samples of this antenna booster from his satchel to any smartphone maker who might be able to use it.  

Back at his office near Barcelona, Puente compared it to a few antennas found in smartphones over the past decade. He pulled several devices from storage that were left over from the time his first company, Fractus, which also specializes in antennas, sued 10 manufacturers for patent infringement. He and his staff broke open more than 600 smartphones to build their case, so they’ve seen more than their share of internal antennas.  

To explain how the booster technology from his new company, Fractus Antennas, works, he first showed me antennas from a 2008 Blackberry Pearl and a Pantech C740 from the same year. With the casing removed, it was easy to see that both models had what looked like a tangle of metallic lines toward the top of the device. Those squiggly patterns were mounted to plastic structures that gave the antennas a shape designed to help them radiate energy most effectively.   

Puente told me these antennas are all inspired by fractals, a type of design in which similar patterns repeat themselves at various sizes. Fractals are naturally found in broccoli stalks and tree branches. With a fractal-based design, smartphone manufacturers can use all or just part of an antenna to provide service across many frequency bands.

For example, one of the longest wavelengths that smartphones must support is for the 698-megahertz frequency, where waves measure 430 millimeters long. And because the size of a radio wave corresponds to the size of the antenna needed to transmit it, longer wavelengths require larger antennas.

A typical smartphone antenna might be only 40 to 60 mm long, so transmitting waves that long requires the antenna’s entire surface. Since smartphones must also provide service across five or six other frequencies, smaller chunks of the same antenna are used to transmit those shorter wavelengths.
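The sizing argument above follows directly from the wavelength–frequency relationship, λ = c/f. A quick back-of-envelope sketch (the function is our illustration, not anything from Fractus):

```python
# Wavelength sets antenna size: lambda = c / f.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimeters."""
    return C / freq_hz * 1000

# At 698 MHz the wave is about 430 mm long -- roughly ten times the
# length of a 40-60 mm antenna, which is why the antenna's whole
# surface is needed to radiate it.
assert round(wavelength_mm(698e6)) == 430
```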

Fractus Antennas is a spinoff of Puente’s first company, which patented the use of fractal-based antennas in smartphones (and eventually filed that 2009 lawsuit for patent infringement). For many years, those were the dominant type of antenna found in smartphones.

More recently, manufacturers have moved away from fractal-based antennas and simply placed a metal band along the top of the smartphone to serve as an antenna. But one drawback of these metal bands is that they can’t easily support multiple frequencies at the same time on their own.

Manufacturers must add another part—an active tuner—to generate signals at the frequencies required for carriers around the globe. Still, this tuner is best at providing service at either one band or another, rather than over multiple bands at once.

Meanwhile, the industry is moving toward interband carrier aggregation, in which a device combines spectrum from several frequency bands to build a channel with more bandwidth than would otherwise be available. If metal band antennas can’t simultaneously provide service across bands, they may not be very useful as carrier aggregation becomes more popular.

This is where Fractus Antennas’ new, “antenna-less” smartphone technology comes in. Instead of relying on a dedicated antenna to radiate an RF signal, a handset would radiate the signal directly from the ground plane, which is the copper layer that underlies the phone’s printed circuit board. To do this, the phone’s manufacturer would replace the antenna with Fractus’s mXTEND Antenna Booster, a small device roughly one-tenth the size of a traditional antenna.

It works like this: Once the transmitter generates a signal, it travels through the matching network, which is a part of the smartphone that acts like a tuner to support service at various frequencies. From there, it travels to both the booster and the ground plane. The booster is a passive device that does not radiate at all. Rather, it temporarily stores the signal it receives and repeatedly bounces it over to the ground plane, which radiates it out.

Already, today’s smartphones use the ground plane to radiate a portion of the signal that smartphones produce. To prevent interference, their circuit boards incorporate shields to protect elements that may be vulnerable. However, the Fractus Antennas concept takes this to the next level by using the ground plane to produce all of the radiation that is broadcast to the cell tower or Wi-Fi router. “Instead of having an antenna that radiates inside the phone, the phone itself is radiating,” Puente says.

The antenna booster does require a slightly more complicated matching network than usual. Puente says the matching network of a phone with an antenna booster would include six or seven components rather than the one or two found in a smartphone today—and the network must be redesigned for each model.  

Fractus Antennas is now selling several versions of its antenna booster, which can support cellular communications across 12 frequency bands (from 698 MHz to 2690 MHz) and can also be adapted for Wi-Fi and Bluetooth. The company launched in 2015, and its first sales came in 2016.

So far, Fractus Antennas has sold hundreds of thousands of units to a dozen clients who are using them to track fleets of trucks and outfit sensors for smart metering, among other things. Right now, it costs Fractus Antennas about US $1 to produce each unit, but Puente expects they could reduce that cost considerably by producing higher volumes.

The company is not claiming that the booster improves performance; in its tests, the booster has performed about as well as the antennas in today’s smartphones. Puente believes its main selling point will be the money that smartphone makers can save by never having to design and manufacture their own antennas again.

If manufacturers sign on, Puente predicts that it may be 2018 before Fractus Antennas’ technology is available in a smartphone. That, he says, is because of the devices’ long development cycles. The 15-person company is an underdog in an industry that generates annual revenues of more than US $400 billion worldwide. But as Puente learned from his first company, a few strong patents can take a company far.

Measuring time at one point can alter the flow of time in the surrounding space

Can Ultraprecise Time Measurements Warp Space?

File this under “fun to think about”: Researchers at the University of Vienna have shown how ultraprecise measurements of time can bend the surrounding space and make time in the region run slower.

The basic theory is surprisingly simple, a combination of Einstein’s mass-energy equivalence and Heisenberg’s uncertainty principle. In short, increasing the precision in the time measurement increases the uncertainty in the energy at that point. Since energy and mass are interchangeable, this is the same as creating a virtual mass. As the uncertainty in the time measurement falls, the “mass” increases. And as the mass increases, so does its gravity. The result is a regional gravitational time dilation—the effect that causes clocks on Earth to run slower than clocks on Global Positioning System satellites, for example.
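The chain of reasoning above—timing uncertainty → energy uncertainty → virtual mass → time dilation—can be sketched numerically. The Δt below is an illustrative absolute timing uncertainty chosen for the example, not a figure from the paper:

```python
# Heisenberg: dE * dt >= hbar / 2, so sharper timing -> larger energy spread.
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
C    = 2.997_924_58e8      # speed of light, m/s
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2

dt = 1e-21                 # assumed timing uncertainty, s (illustrative)
dE = HBAR / (2 * dt)       # minimum energy uncertainty, J
m  = dE / C**2             # Einstein: the equivalent "virtual mass", kg
r  = 1e-10                 # distance from the measurement point, m (~one atom)

# Weak-field gravitational time dilation: nearby clocks run slower
# by a fraction of roughly G*m / (r*c^2).
slowdown = G * m / (r * C**2)
print(f"virtual mass: {m:.2e} kg, fractional slowdown at {r} m: {slowdown:.2e}")
```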

To be sure, the effect cannot be produced or detected with today’s tools. In their paper in the Proceedings of the National Academy of Sciences, the Vienna researchers—Esteban Castro Ruiz, Flaminia Giacomini, and Časlav Brukner of the university’s Vienna Center for Quantum Science and Technology—measure the time dilation effect in “decoherence time”: The greater the warping of space-time, the smaller the decoherence time. Today’s best atomic clocks are accurate to within about 3 × 10⁻¹⁸ (a dimensionless number: 3 parts in a quintillion). Measurements to this level of accuracy produce a “mass” equal to about one ten-millionth of a proton’s. At a distance of one or two nuclear diameters, the decoherence time is on the order of the lifetime of the universe. It’s nothing to worry about in the real world.

Clock accuracy is increasing rapidly, however, and the pace of improvement is accelerating as optical clocks and optical-lattice clocks come online. The precision of optical-lattice clocks, in particular, has increased by about four orders of magnitude in the past decade. If clock accuracy climbs to about 10⁻²⁷ (less than 15 years at the current rate of improvement), the mass uncertainty grows to about 7 × 10¹¹ electronvolts (some 350 times the mass of the proton), producing a decoherence time on the order of two minutes at a distance of 10⁻¹⁰ meters. At this scale, the effect might become detectable.

“Our findings suggest that we need to re-examine our ideas about the nature of time when both quantum mechanics and general relativity are taken into account,” said Castro in the university’s statement on the work.

Image of SXSW panelists

SXSW 2017: If the Internet Has a Kill Switch, We Are It

Austin, Texas—If you’ve ever wondered whether there is a single point of failure that could take the entire Internet down in one fell swoop, rest assured: experts (at least, the experts participating in a panel discussion here at South by Southwest) say there is no such thing.

But even though the Internet does not possess a “kill switch,” so to speak, it does have plenty of vulnerabilities and limitations. Outages are commonplace, security is a perennial struggle, and governments can (and do) shut down access to specific services or even the entire Internet.

On Friday, the panel highlighted a range of limitations that still plague this network of networks and called for technically minded people to participate in organizations such as ICANN to help improve resiliency, security, and connectivity.

Two recent events highlight some of the issues we face. In October, a botnet disrupted access to a range of services, including Twitter and Netflix, through distributed denial-of-service (DDoS) attacks on the managed Domain Name System service of Internet infrastructure company Dyn. And just last week, a mistyped command took down a number of Amazon’s widely used AWS services.

As our networks grow with the Internet of Things, we could find ourselves more vulnerable to DDoS attacks, said Shane Tews of the American Enterprise Institute. “If we don’t have a good concept of how we’re going to be able to manage that traffic for good or for bad, that has the ability of being Dyn 100x,” Tews said. “There are people who do think that it was really just a shot across the bow as to what could be coming.”

Tews noted that there are changes afoot to try to make the system more secure. In 2010, the Domain Name System Security Extensions system was deployed. “The idea of domain name security…is basically to put a lock on a system that’s always been wide open,” Tews said—a system left open since the early days of the Internet, when everyone knew one another. The master key for this system will be “rolled over,” or changed, for the first time in October.

“There are certain components of internet infrastructure that are not as resilient as others,” said Christian Dawson, executive director of the Internet Infrastructure Coalition, or i2Coalition. But he said challenges like attacks only serve to make the system more robust: “I don’t think it’s getting more risky. I think when people figure out how to push the right buttons to bring certain components down, it just makes us better at...realizing that taking the steps to get more resilience are necessary.”

The greatest stresses to the system are policies, not technology, he said: “My issue always comes down to the people, and that’s why we’re heavily involved in internet governance issues, making sure the right people are at the table so that people don’t make the wrong decisions because they don’t have the right information.”

“Almost all the challenges to the Internet are human,” Tews said, highlighting a YouTube censorship order in Pakistan in 2008 that ended up stretching well past the country’s borders. 

The idea of vulnerability at the human level was echoed by Matt Perault, head of global policy development at Facebook. Perault started at the company the week that Egypt turned off access to the Internet for five days.

Recently, he said, such large-scale shutdowns seem to be less frequent, but in their place are smaller-scale “blocks”, such as shutting down access in a particular region of India while students there are taking exams.

This sort of interference doesn’t garner the headlines that Egypt’s shutdown did. But it adds up. A report from the Brookings Institution last year highlighted 81 short-term shutdowns between July 1, 2015, and June 30, 2016, and concluded that together the outages cost upwards of US $2.4 billion in gross domestic product.

“My main concern right now is [that we are] moving toward a world where there [are] increasingly sophisticated small-scale blocks,” Perault said. “I would assume that the thing we would be most scared of would be a government being able to turn off your access to one particular product for 15 minutes. Because the ability to do that on a large scale might impose enough friction into your ability to access certain services that it would change your relationship to the Internet, and it would be very hard to diagnose.”

Countries that do not make access to the Internet a priority are also a limitation, Tews said. “Even though it’s not a kill switch, it’s certainly a killjoy.”

Then there is the power sovereign countries can exercise to hinder, prevent, or monitor the exchange of information. “There’s been a lot of talk of Internet fragmentation,” said Christopher Mondini of ICANN. But he said even restrictive countries and governments agree “that the Internet should remain an interconnected system with just one addressing system, and that a platform of ICANN and its multi-stakeholder discussion should be supported to maintain the interconnectivity of all of the networks.”

Mondini said there are a number of organizations, such as the Internet Society and its Internet Engineering Task Force as well as network operator groups, by which individuals can participate in setting the Internet’s path going forward: “You can find your people, and you can get involved and shape the future of the Internet, which is pretty exciting.”

Text on blue background reads IEEE Tech for Humanity Series at SXSW

At SXSW, Don't Miss These IEEE Spectrum Events

If you’re heading to Austin this weekend for the festival of tech that is SXSW Interactive, you may have already browsed the massive conference schedule and been completely overwhelmed.

We’re here to help you find the signal in the noise. For guaranteed intellectual stimulation and geeky insights, come to IEEE Spectrum’s three panel discussions. They’re part of IEEE’s Tech for Humanity series of events, which includes not just talks but also meetups and a party on Sunday night (#partylikeanengineer).  

Vint Cerf: An Internet for and by The People

Sunday March 12, 11 am – 12 pm, JW Marriott, Salon 5-6

On Sunday morning, come hear “father of the Internet” Vint Cerf describe his vision of a fully connected planet. IEEE Spectrum editor-in-chief Susan Hassler will interview Cerf about his People-Centered Internet project, which aims to bring 3 billion people online.

Engineering Life: Artificial Genome Synthesis

Monday March 13, 12:30 pm – 1:30 pm, JW Marriott, Salon 8

Bioengineers can now cobble together strings of DNA from scratch, enabling them to alter organisms or potentially build weird new ones. On Monday afternoon, IEEE Spectrum’s biomed editor Eliza Strickland moderates a debate on the future of synthetic biology. Andrew Hessel, an engineer at Autodesk, has proposed a genetics “moonshot” project to construct a completely synthetic human genome. Marcy Darnovsky, director of the Center for Genetics & Society, worries that this new kind of genetic manipulation may lead to high-tech eugenics. 

Going Beyond Moore’s Law

Tuesday March 14, 11 am – 12 pm, JW Marriott, Salon 7 

In 1965, Gordon Moore famously predicted that integrated circuits would get more powerful at an exponential rate as engineers found ways to jam more transistors onto the chips. But are we finally reaching the physical limit of Moore’s Law? IEEE Spectrum editor and semiconductor expert Rachel Courtland will host a lively discussion with experts who are envisioning radically new ways to improve computing power. 

A figure in black inputs commands into a computer screen in the corner of an industrial facility.

After WikiLeaks, a Recap of How the U.S. Government Discloses a Zero-Day

The latest WikiLeaks dump rattled the Internet this week with documents that appear to indicate the U.S. Central Intelligence Agency found a slew of software vulnerabilities that it potentially used to break into Apple and Android devices, and even turn Samsung smart TVs into secret microphones to eavesdrop on owners.  

Those accusations have raised questions from security researchers about whether the U.S. government is following through on its commitment to responsibly disclose new software vulnerabilities (also known as zero-days) that it discovers in consumer products. The WikiLeaks documents show that the CIA appears to have built secret software programs to exploit many vulnerabilities that the agency found. Based on those records, the CIA possessed at least 14 Apple iOS exploits and around two dozen for Android devices, plus others that targeted Microsoft Windows and Linux.

Whether or not the CIA should disclose software vulnerabilities—and how quickly it should make a disclosure—is a complicated issue known as the “equities” problem. On the one hand, doing so allows companies to patch them, which protects citizens from nefarious parties who find the same vulnerabilities. On the other hand, this means the CIA loses its ability to gather intelligence through these holes.

SK Telecom talks about its 5G plans

SK Telecom Exec Talks 5G Tech, 2017 Trials

Around the globe, telecom companies are placing bets on which technologies will define 5G, the next generation of wireless and cellular networks. Many have already published results from early lab tests and field trials. Now, executives are eager to move into real-world pilots and, gradually, to commercial deployments.

SK Telecom in South Korea, which has aggressively invested in 5G research, is considered an industry leader among carriers. The company serves 29 million customers, many of whom already receive some of the fastest and most reliable 4G coverage in the world. Those customers are densely packed into cities within a small country, which makes it easier for the company to deploy new equipment or upgrade its existing network.

Competitive forces are also at work. One of SK Telecom’s fiercest competitors, KT Corporation (formerly Korea Telecom), has pledged to launch 5G service in time for the upcoming Winter Olympics in South Korea; it was named the official telecom sponsor of the games, which will kick off in February 2018.

Last week at Mobile World Congress, Jong Han Park, a manager at SK Telecom’s 5G Tech Lab, spoke with IEEE Spectrum about the company’s outlook for 5G in 2017 and beyond. Park says that by the end of the year, SK Telecom will launch a 5G trial that will move out of the lab and into the real world, so that customers can directly experience new 5G service.

Anyone who follows 5G knows that SK’s plan could mean a lot of different things. Unlike the 4G LTE networks that many people primarily rely on for cell service today, 5G, engineers hope, will be used in many other ways.

SK Telecom’s first real-world trial could be to provide service for connected cars. It might bring fixed wireless access to people’s homes. Or it may deliver faster mobile service to smartphone customers. Right now, SK Telecom itself doesn’t even seem to really know. Park says the company is still working out the details, and that these possibilities “are all candidates” for the trial.

During the interview, he also hinted that early 5G service in South Korea might look very different from the first deployments planned in the United States and elsewhere. Those U.S. deployments will primarily involve fixed wireless access, in which Internet service is beamed over the air from a base station to an antenna in someone’s home.

Instead of going that route, Park says SK Telecom will take cues from its customers, who may prove most interested in paying for applications such as livestreaming virtual reality or using telepresence features. It's not yet clear how many customers might have access to the company’s 2017 trial.

“We think fixed wireless access is one of the services, but we call 5G a mobile network service—just like 4G,” Park says. “Without a mobile network, it’s not 5G. So we are thinking of mobility as one of the keys, even in the very initial phase of developing 5G trial services.”

Since international standards for 5G are still being set, any technologies used in the company’s 2017 trial will be considered pre-standard. SK Telecom intends to feed the results into the standards-setting process for 5G New Radio, the term for the new radio-to-air interface that will someday appear on base stations and wireless devices.

Park says his company will model the trial on the latest developments around 5G New Radio, which is scheduled to be more narrowly defined this year by the 3rd Generation Partnership Project. Once standardization is complete, Park estimates SK Telecom will need one year to test any new technology before it can be deployed, and he hopes to be ready to commercialize at least some 5G services in 2019.   

Park says that one of the key 5G technologies that SK Telecom is developing is network slicing—a virtual technique that allows carriers to split up their network into more specialized pieces that can provide specific services.

For example, connected cars will most likely require a network with a lot more redundancy than tiny sensors that occasionally transmit data from a factory floor. With network slicing, a carrier would be able to reserve part of its network for cars, and another part for sensors—without risking interference between the two.

Park says this approach will enable SK Telecom to divide its network into “micro-functions” that it can offer to clients depending on their needs, instead of designing one massive 5G network to serve everyone. “The future network will be a lot more programmable than it is today, and a lot more dynamic,” he says.

Last year, SK Telecom announced that it was teaming up with Ericsson and Qualcomm on a 2017 trial for the yet-to-be-defined 5G New Radio, and shared information on some of the technologies the partners intend to test. They will start with MIMO, or multiple-input multiple-output—a fancy way of saying that more antennas are added, which work together to both transmit and receive signals.

In addition to MIMO, they will also use a modulation technique known as orthogonal frequency-division multiplexing (OFDM) that is already common in Wi-Fi. With it, a data stream can be diced up and transmitted over several channels on different frequencies at once.
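The core OFDM trick—independent symbols carried on orthogonal subcarriers via an inverse FFT—fits in a few lines. This toy sketch is our construction, with an ideal noiseless channel; it shows one QPSK symbol per subcarrier surviving a transmit/receive round trip:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sub = 64                                   # number of subcarriers
bits = rng.integers(0, 2, size=(n_sub, 2))   # two bits per QPSK symbol
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)  # QPSK mapping

tx_signal = np.fft.ifft(symbols)    # transmitter: IFFT builds the time-domain OFDM symbol
rx_symbols = np.fft.fft(tx_signal)  # receiver: FFT separates the subcarriers again

recovered = np.column_stack([rx_symbols.real > 0, rx_symbols.imag > 0]).astype(int)
assert np.array_equal(recovered, bits)  # every subcarrier's bits come back intact
```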

During Mobile World Congress, SK Telecom and Samsung also demonstrated five-band carrier aggregation on LTE-Advanced, which is a technique that borrows unused spectrum from other carriers in order to build larger bandwidths and deliver higher data rates to customers than would be possible if a carrier used its own spectrum exclusively. Park called this “a step toward 5G.”
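The bandwidth arithmetic behind carrier aggregation is simple addition. The carrier widths here are hypothetical, since the demo’s exact band plan wasn’t given:

```python
# Five-band carrier aggregation: the usable channel is the sum of the
# component carriers' bandwidths (20 MHz is the widest standard LTE carrier).
component_carriers_mhz = [20, 20, 20, 20, 20]   # assumed widths, for illustration
aggregate_mhz = sum(component_carriers_mhz)
assert aggregate_mhz == 100   # five times what any single 20-MHz carrier offers
```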

Two women surrounded by computing equipment study a device made from tubes and wires

IBM Expanding Cloud Quantum Computer Tenfold

IBM is building the first commercially available universal quantum computer, one it aims to have accessible via the cloud within the next five years to explore science and business applications, the company announced today.

Whereas classical computers switch transistors either on or off to represent data as ones and zeroes, quantum computers use quantum bits, or “qubits,” that, because of the surreal nature of quantum physics, are essentially both 1 and 0 at the same time. This means a qubit can perform multiple calculations at once. So quantum computers can solve certain problems much faster than classical computers.

 In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the universe. IBM aims to construct a roughly 50-qubit system based on superconducting circuits. The upcoming quantum computer will be accessible online via IBM's Cloud platform.
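The “300 qubits” comparison is easy to check directly, since n qubits span 2ⁿ basis states:

```python
# 300 qubits span 2**300 basis states; a common order-of-magnitude
# estimate puts the observable universe at about 10**80 atoms.
states = 2 ** 300
atoms_in_universe = 10 ** 80
assert states > atoms_in_universe   # 2**300 is roughly 2e90, ~10 orders larger
```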

IoT-to-blockchain proof of concept is a luxury yacht

Mobile World Congress 2017: Startup Ubirch Sails the Blockchain Into a New Application—IoT

As Karim Attia steered his yacht, Serenity, from its slot in Barcelona’s historic Port Vell on Wednesday evening, two small onboard sensors dutifully recorded the temperature, humidity, and air pressure from around the vessel. Then, every two minutes, they logged that information in the Bitcoin blockchain, creating a permanent and unimpeachable record of the journey.

By the time the sun had set and Serenity slid back into her slot, the sensors had made 507 entries to the blockchain that would be virtually impossible for anyone else to change. If any of the passengers were to dispute the details of the trip, or if the coast guard were to investigate the vessel following an accident, the blockchain would reflect the true story.

Attia is a seed investor in a startup called ubirch, which developed the open-source software that captured the data from his yacht and entered it into the blockchain earlier this week. By doing so, the company created a record that, at least in theory, can never be edited or erased.

Increasingly, blockchains are being used in ways that have little to do with Bitcoin, and a lot more to do with embedding data for other purposes. One such application is the growing need to manage the deluge of data created by the connected sensors, gadgets, and appliances loosely defined as the Internet of Things.

In a nutshell, Ubirch wants to leverage the Bitcoin blockchain to secure all that data; to show how this would work, Attia lent his yacht and sailing skills to ubirch's founders for an onboard demo during Mobile World Congress.

There’s one important feature of the blockchain that makes this whole idea possible. Whenever someone makes a Bitcoin transaction, they can also attach a small amount—up to 40 bytes—of data to each entry, through a function known as OP_RETURN. Using that function, ubirch can store data on behalf of its clients.

To immortalize information in this way, ubirch must first make a small payment in order to initiate a Bitcoin transaction for each event it wishes to log. In the company’s model, that payment becomes less about Bitcoin itself and more about compensating miners, who are the custodians of the blockchain, for the services they provide in adding new entries and verifying the information within the updates.

Obviously, even the small amounts of money paid to embed data in the blockchain can add up. To reduce the cost for clients, ubirch actually records data in two steps. First, it adds every single data point from its clients’ sensors to the company’s own private blockchain. Then, it collects these data points into bunches, and adds these bunches to the Bitcoin blockchain every 10 minutes or so. Whenever ubirch adds a bunch of new data to the blockchain, it receives a hash (a string of numbers) that it can use to look up the entry to see how many miners have confirmed the transaction. This code is also good for verifying that the data has not been changed.

To pull all of this off, ubirch developed a program that takes data points from sensors, translates them into hashes for the ledger of its private blockchain, and combines several of those hashes into entries in the Bitcoin blockchain. The upshot, says the company, is that clients pay only fractions of a penny for each data point logged by their sensors, instead of the full cost of a Bitcoin transaction.
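The two-step scheme can be sketched with nothing more than a hash function. The reading format and the batching below are our illustration, not ubirch’s actual data model:

```python
import hashlib
import json

def hash_point(reading: dict) -> bytes:
    """Step 1: hash one sensor reading for the private ledger."""
    canonical = json.dumps(reading, sort_keys=True).encode()
    return hashlib.sha256(canonical).digest()

def batch_digest(point_hashes: list) -> bytes:
    """Step 2: fold many point hashes into one 32-byte digest, small
    enough to fit in a single 40-byte OP_RETURN payload."""
    return hashlib.sha256(b"".join(point_hashes)).digest()

readings = [
    {"sensor": "motor-room", "t": 1488997200, "temp_c": 31.2},
    {"sensor": "cockpit",    "t": 1488997320, "temp_c": 18.7},
]
anchor = batch_digest([hash_point(r) for r in readings])
assert len(anchor) == 32   # fits within the 40-byte OP_RETURN limit

# Tamper check: changing any reading changes the anchor completely.
readings[0]["temp_c"] = 99.9
assert batch_digest([hash_point(r) for r in readings]) != anchor
```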

Ubirch’s system can capture information from IoT sensors attached to pretty much anything a person may want to monitor. Aboard Serenity, two sensors recorded humidity, temperature, and air pressure, and reported the data once every two minutes.

One sensor (yellow on the chart) sent data from the motor room just beneath the stairs that descend to the cabin; the other (green) one transmitted from on deck near the cockpit, in the open well where passengers can recline with a glass of wine.

As the yacht set sail, the temperature on deck dropped and the temperature in the motor room rose, as one might expect. Both sensors transmitted their data to ubirch over GSM, the second-generation cellular network standard with reliable coverage around the globe.

In addition to logging data to the blockchain, ubirch’s software also uses public-key cryptography to verify that data has come from a specific sensor, and to protect the data as it travels to ubirch. Each sensor has its own private key, which it uses to sign the data it sends; ubirch then verifies the signature with a public key the sensor has previously shared.

These security measures require the sensor itself to have an encryption layer. Ubirch provides this layer in its own sensors and can also apply it to most off-the-shelf sensors. (It prefers those models with an ARM Cortex-M0 processor or better.) Within that layer, the company uses a specific type of public-private encryption known as elliptic curve cryptography, which relies on the obscure mathematics of elliptic curves—rather than more common factoring algorithms—to generate 256-bit keys.
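To make the sensor-authentication step concrete, here is a from-scratch toy of elliptic-curve signing and verification over the secp256k1 curve. The curve choice, key handling, and message are our illustration (the article does not name ubirch’s exact curve or library), and a real deployment would use a vetted crypto library rather than this sketch:

```python
import hashlib
import secrets

# secp256k1 domain parameters (curve: y^2 = x^3 + 7 over F_p)
p  = 2**256 - 2**32 - 977
n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)
assert (Gy * Gy - Gx**3 - 7) % p == 0   # sanity check: G is on the curve

def inv(a, m):
    return pow(a, -1, m)   # modular inverse (Python 3.8+)

def add(P, Q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = 3 * P[0] * P[0] * inv(2 * P[1], p) % p
    else:
        lam = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(priv, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    while True:
        k = secrets.randbelow(n - 1) + 1          # per-signature nonce
        r = mul(k, G)[0] % n
        if r == 0: continue
        s = inv(k, n) * (z + r * priv) % n
        if s: return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    if not (0 < r < n and 0 < s < n): return False
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    u1, u2 = z * inv(s, n) % n, r * inv(s, n) % n
    pt = add(mul(u1, G), mul(u2, pub))
    return pt is not None and pt[0] % n == r

priv = secrets.randbelow(n - 1) + 1   # the sensor's 256-bit private key
pub  = mul(priv, G)                   # the public key shared with ubirch
sig  = sign(priv, b"temp=21.4C humidity=68%")
assert verify(pub, b"temp=21.4C humidity=68%", sig)
```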

Michael Merz, co-founder and chief technology officer, says encryption is a critical step that many IoT manufacturers simply ignore, or sidestep with features such as firewalls that defend the network or device rather than keep the actual data secure. “It’s not the question to protect the device—the question is protecting the data against misuse and manipulation,” he says.

Stephan Noller, ubirch’s founder and CEO, says other startups are also using the ability to embed data on the blockchain as a business model, but they do not offer encryption of the data itself. “If the sensor itself is not secure, logging it into the blockchain is pointless because that information is not secure,” he says.

Though ubirch has worked with only a few clients thus far, its founders have no shortage of ideas for how the technology might be used. The sweet spot seems to be cases where financial decisions are made based on specific data, and both the payer and the recipient must agree that the data is valid.

Merz says an insurance company could base a customer’s car insurance payments on sensors that record driving habits. With ubirch’s system, both parties could be confident that the record is accurate. In fact, one of the startup’s first clients was an insurance company that wanted to place temperature sensors on electrical boxes, to record malfunctions that may lead to building fires.

For now, ubirch has 10 employees based in Cologne and Berlin, Germany. The company has raised between $400,000 and $500,000 in its first seed round, and is considering pursuing another seed round to boost it to a total of $1 million.

Attia, the seed investor and yacht owner, is enthusiastic about the company’s prospects. “IoT is happening everywhere, but the focus on security is the differentiator from everyone else,” he says.

Noller knows the importance of that security from personal experience. He had always enjoyed connecting items in his home to the Internet until, one day, a hacker infiltrated his system and turned off all the lights, closed the shades, and disabled the fire alarm. The hacker even shut off the heat to a warming unit for the family’s pet rabbit. “That puts the security risk very much in front of you,” he says.


Corporate Titans Unite to Build an Enterprise Version of the Ethereum Blockchain

Over the last two years, Ethereum, a blockchain-based platform for self-executing contracts and other applications, has attracted many admirers in the corporate world. The platform—a peer-to-peer network similar to the one that runs Bitcoin—is already supporting decentralized prediction markets, autonomously run lotteries, and a host of governance tools. And there is much speculation that the same building blocks could be used to replicate and streamline services offered by financial institutions, insurance agencies, and health-care providers, to name but a few. 

Most of the titans in these industries, however, have kept their distance, watching this radical new technology take shape while waiting to see whether it would accommodate the needs of the companies that might want to build on it.

On Tuesday, some of the biggest names in finance and software—including Intel, Microsoft, J.P. Morgan, and Accenture—finally stepped into the fray, announcing their intention to work together on a private version of the Ethereum blockchain. Their efforts will be coordinated under the auspices of a new nonprofit organization called the Enterprise Ethereum Alliance, which will guide the engineering of a standard blockchain technology based on the Ethereum blockchain and customized for the needs of all enterprise members.


Mobile World Congress 2017: PureLiFi Debuts New Li-Fi Luminaire and Shares Progress on Commercial Pilots

Most startups dream of someday making their mark on a single industry. If all goes according to plan, PureLiFi will stir up two industries at once. As the world’s leading Li-Fi startup, the company has developed technology that delivers high-speed wireless Internet service through the LED lights found in homes and offices.

This week at Mobile World Congress, an annual gathering for the smartphone industry held this year in Barcelona, PureLiFi revealed a sleek new integrated Li-Fi luminaire and spoke to IEEE Spectrum about its plans to expand commercial pilots in the months ahead. The company’s leaders also opened up about their eagerness to ditch the Li-Fi–enabled USB dongle they debuted at the same trade show just last year.

PureLiFi, which is based in Edinburgh, has developed a technique that uses flashes of light to transmit information, instead of encoding data on a radio signal that is broadcast over the air, as done by today’s Wi-Fi routers. The flashes are so fast that they are not noticeable to the human eye. (For more on how Li-Fi works, read this story by IEEE’s The Institute.)
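PureLiFi’s actual modulation schemes are far more sophisticated, but the underlying idea can be illustrated with the simplest light-based scheme, on-off keying, in which each bit maps to the LED being on (1) or off (0) for one symbol period. This toy sketch shows only the principle, not the company’s implementation:

```python
# Toy on-off keying (OOK): each bit of the data becomes one light
# level, 1 = LED on, 0 = LED off, for a single symbol period.

def encode_ook(data: bytes) -> list[int]:
    """Turn bytes into a sequence of light levels, most significant bit first."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def decode_ook(levels: list[int]) -> bytes:
    """Turn sampled light levels back into bytes."""
    out = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for level in levels[i:i + 8]:
            byte = (byte << 1) | level
        out.append(byte)
    return bytes(out)

flashes = encode_ook(b"Li-Fi")
assert decode_ook(flashes) == b"Li-Fi"
```

At megahertz-and-up symbol rates, these on-off transitions are far too fast for the eye to register, which is why the light appears steady.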

But so far, people can access PureLiFi’s technology only by plugging a dongle into a laptop, which many users find inconvenient. Last year at MWC, PureLiFi revealed the world’s first Li-Fi USB dongle, called LiFi-X. The credit-card–sized dongle contains a photoreceptor that receives a signal from a Li-Fi–enabled light, and an infrared transmitter to send data back.

Based on what the PureLiFi team shared, connecting to Li-Fi in 2017 should be less clunky and more convenient. The company says that this year it will shrink the dongle to half its current size and 40 percent of its weight.

Of course, the dongle is only part of the equation. For this year’s show, PureLiFi unveiled a new luminaire that integrates Li-Fi modulation components into a black ring designed to encircle an LED light. The ring lies flush with the ceiling and makes Li-Fi–enabled LEDs virtually indistinguishable from any other lights in a home or office.

Previously, the company’s only Li-Fi access point was a white box, roughly the size of a smartphone, that had to be mounted to the ceiling next to an LED light.

In a demo on Monday, Nikola Serafimovski, director of business strategy for PureLiFi, plugged the LiFi-X dongle into a laptop and connected to the Li-Fi network by selecting it on his screen, just as a user might choose a Wi-Fi network. He then began to play a YouTube video and walked from below one light fixture equipped with the company’s original access point to one featuring the new integrated luminaire, demonstrating that the technology was capable of a seamless handoff as a user moves around a room or building. He placed his thumb over the dongle’s receiver to momentarily block the connection, proving it was live.

Speed is one of the reasons that so many companies are intrigued by Li-Fi. When Harald Haas, founder and chief science officer of PureLiFi and a professor at the University of Edinburgh, first began working on Li-Fi, he achieved a peak data rate of 10 megabits per second with some basic modulation. Now, with the help of new techniques, PureLiFi can achieve peak rates of 15 gigabits per second.

One of those techniques is wavelength-division multiplexing, which allows the company to separately encode data on the red, green, and blue channels within a white LED. With it, Haas says 100 Gbps rates are on the horizon.
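In spirit, wavelength-division multiplexing turns one LED into three parallel channels: the bit stream is split across the red, green, and blue emitters, each carrying its share independently, roughly tripling the aggregate rate. The round-robin split below is a simplified illustration, not PureLiFi’s actual channel coding:

```python
# Simplified wavelength-division multiplexing: deal a single bit stream
# round-robin onto the red, green, and blue channels of a white LED,
# then interleave the three received streams back into one.

CHANNELS = ("red", "green", "blue")

def split_bits(bits: list[int]) -> dict[str, list[int]]:
    """Deal bits round-robin onto the three color channels."""
    return {ch: bits[i::3] for i, ch in enumerate(CHANNELS)}

def merge_bits(streams: dict[str, list[int]]) -> list[int]:
    """Interleave the three channel streams back into one bit stream."""
    merged = []
    lanes = [streams[ch] for ch in CHANNELS]
    longest = max(len(lane) for lane in lanes)
    for i in range(longest):
        for lane in lanes:
            if i < len(lane):  # last round may leave some lanes short
                merged.append(lane[i])
    return merged

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert merge_bits(split_bits(bits)) == bits
```

Because each color channel runs at the full symbol rate of a single LED, the combined throughput scales with the number of channels, which is why adding wavelengths is a straightforward path to higher peak rates.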

In real-world settings, the company’s products deliver data rates of about 45 Mbps for both uploads and downloads. For comparison, the average broadband Internet connection in the U.S. offers download speeds of 54 Mbps, while mobile Internet service provides about 20 Mbps. Upload speeds are significantly slower.

There are some special considerations that come with using Li-Fi, such as the fact that light doesn’t penetrate walls, so every room in a house would need a Li-Fi–enabled lighting fixture. However, PureLiFi likes to pitch this factor as a feature that also keeps users’ data secure, since communications are inaccessible to anyone who is not in the same room.

At least in the eyes of Alistair Banham, CEO of PureLiFi, the company’s early trial participants have so far responded positively to the technology. “The major feedback is that they love the security of information,” he says. “The feedback on the speed has been very, very positive.”

However, there is one area where those users have told Banham that there is clearly room for improvement. “Obviously, the goal is to go beyond the dongles, and they understand that,” he says. “Everyone would like to have it integrated into their devices.”

Banham and Haas know that Li-Fi will not be widely adopted until it is integrated into the chipsets of laptops, tablets, and smartphones. In the meantime, Banham points out that both Wi-Fi and Bluetooth also started out with dongles, and that it took nearly a decade for those technologies to move to chipsets and become widely adopted. “I think we’ll be much faster,” he says.

It’s not clear whether PureLiFi would manufacture a Li-Fi chipset itself or work with a partner to do so, and Haas couldn’t comment on the company’s plans. The company is actively working with other stakeholders to develop standards for Li-Fi that ensure consistent performance across devices, just like the standards that exist for Wi-Fi.

Haas says the world needs to put Li-Fi in its chipsets because there is a scarcity of bandwidth available for wireless devices, and pesky interference when too many devices operate at once. “It’s inevitable that we have to go to light as a wireless means of communication,” Haas says.

Ultimately, Haas doesn’t view Li-Fi as a replacement for other wireless technologies, such as 5G cellular networks or Wi-Fi. Instead, he thinks Li-Fi will work hand in hand with those technologies to give users as much bandwidth as possible.

He does hope the recent commercial pilots will help overcome some common misconceptions about Li-Fi. Haas says many people still wonder whether Li-Fi works in the presence of sunlight (it does) or think it will work only if the transmitter and receiver have a clear line of sight to one another. In reality, PureLiFi has found that reflections off walls and floors deliver perfectly adequate data rates.

Haas has now spent the past 15 years working on Li-Fi, which began as a “curiosity-driven project” in his lab. He’s not at all surprised that he is still waiting to see it be integrated into mainstream products. “For me, it was always clear, it will not be an easy road,” he says.

