Tech Talk


Ultrasonic Signals Transmit Data Through Meat at HD Video Quality

Modern medical implants are hobbled by slow download speeds. Most medical devices transmit data over radio frequencies at the relatively glacial pace of less than 50 kilobytes per second. At that rate, transferring data or reprogramming a device can drag on; pulling down a 100-megabyte record, for example, would take more than half an hour.

New evidence suggests ultrasonic signals could speed up that process. Researchers who achieved higher data rates in one early test say that these signals may eventually enable doctors to live stream high-quality video from inside of a patient. 

A team led by Andrew Singer, an electrical engineer at the University of Illinois at Urbana-Champaign, found that they could use ultrasonic signals to transfer data through meat at speeds of up to 30 megabits per second. That’s the highest data rate ever recorded for signals transmitted through animal tissue, Singer says. For comparison, streaming Netflix in Ultra HD requires 25 megabits per second.

Most implants installed today broadcast on radio frequencies. But those signals must remain below a maximum bandwidth of 300 kilohertz. More powerful signals might cause the resulting electromagnetic waves to interfere with nearby devices. They could also harm patients, since the body absorbs some waves as the signals pass through. Or as Singer puts it, “We are a bag of salt water, and electromagnetic waves heat up salt water.”  

In addition to being dangerous, this absorption also makes data transfer quite inefficient. Singer wanted to use ultrasonic signals to transmit data at higher rates and keep more of it intact along the way.

Before he could begin his experiment, Singer had to pick up some pork loin and beef liver from County Market in Champaign, Ill. He threw some pig knuckles in the shopping bag, too, because he thought it might be neat to try transmitting wireless signals through bone and gristle.

Back at his lab, colleagues rejected the pig knuckles. But the remaining slabs were carefully strung, one at a time, between two 5-megahertz transducers in a tank filled with water. One transducer converted binary digital data to ultrasonic signals that were transmitted through the meat. Its twin measured the resulting data rates.

Singer was happy to see that, in addition to the signal traveling fast, most of the original data arrived intact, with little lost during its journey through the meat.  “We were a little bit surprised that the meat really did not provide much in terms of additional dispersion or attenuation,” Singer says. “The signals coupled extremely well and we had almost the entire bandwidth available to us for data transmission.”
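A rough back-of-the-envelope check helps put that 30-megabit figure in context: the Shannon capacity formula, C = B log2(1 + SNR), ties achievable bit rate to bandwidth and signal-to-noise ratio. The short sketch below plugs in assumed values for both (the article reports neither), so it is only an illustration of why a few megahertz of clean ultrasonic bandwidth is enough to carry tens of megabits per second.

```python
import math

# Back-of-the-envelope Shannon capacity check. The bandwidth and SNR below are
# illustrative assumptions, not values reported by Singer's group.
bandwidth_hz = 5e6                 # assume a few megahertz of usable ultrasonic bandwidth
snr_db = 20                        # assumed signal-to-noise ratio after the meat
snr_linear = 10 ** (snr_db / 10)

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"theoretical ceiling: {capacity_bps / 1e6:.1f} Mb/s")   # roughly 33 Mb/s
```

Under those assumptions the theoretical ceiling lands in the low tens of megabits per second, the same neighborhood as the 30 Mb/s the group measured.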

Jeremy Dahl, a radiologist at Stanford University who focuses on ultrasonic devices, points out that the transducers in Singer’s study were positioned only 5.86 centimeters apart, with the meat in the middle. Therefore, it’s not clear that medical devices embedded deep in the body could achieve similar rates while transmitting to, for example, a physician’s kiosk across the room.

And since they didn’t test the pig knuckles, Singer’s group still doesn’t know how ultrasonic signals will stack up as a method of data transmission when signals have to travel through bone or skin.

Another potential issue is that ultrasonic signals travel in a beam instead of broadcasting in all directions at once. A physician would have to know how a device is oriented within a patient’s body in order to catch the signal it emits.

“If you’re trying to receive from a different location from where that beam is directed, you’re not going to receive any signal,” Dahl says. Radio-frequency transmissions, by contrast, are omnidirectional, so an external device can capture them from any direction.

Next, Singer hopes to pursue animal studies. In the future, he thinks higher data rates achieved through ultrasonic signals could allow physicians to ask a patient to swallow a camera and watch a live stream as it passes through the digestive tract.

Or, it might enable practical functions that have so far eluded the medical device industry, such as making software upgrades to devices that are already in place. “At today's rates for communicating with implantable devices, you'd never even consider the possibility to change the software on these things,” Singer says.  

They’re Alive! Vintage Computer Fans Keep the Great Machines of the Past Running

A monochrome glow spilled out into the room, produced in the old-fashioned way: by hurling electrons at a phosphorescent screen. The high-pitched rasp of a dot-matrix printer pierced the air. For a second I was back in the 1980s, the 8-bit age when computers stopped being things that people only saw in movies and magazines and started cluttering up their homes. Then someone jostled against me and I returned to the present and the crowded exhibition hall of the Vintage Computer Festival East (VCF East).

The festival took place 15-17 April at the InfoAge Science Center in Wall, New Jersey. The center itself has an interesting place in technological history, stretching back to its origins as part of Marconi’s radio empire and including decades as a top-secret communications research facility for the military. An 18-meter radio dish that was used as the ground station for the pioneering Tiros weather satellite, launched in 1960, is being restored to full operation at the site.

The InfoAge center houses a permanent collection of vintage computers covering the years 1945 to 1986, but it’s also home to the annual festival, where enthusiasts gather to exhibit their personal collections of vintage computers and related items. Most of the machines still function, sometimes only thanks to heroic restoration efforts.

On display at this year’s festival was a working Apple 1, a rarity easily worth hundreds of thousands of dollars. It had been painstakingly restored for the owner by exhibitor Corey Cohen, who is now often employed by auction houses looking to verify the authenticity of such machines. My favorite moment was when he loaded a computer program into the Apple via the original cassette tape interface—with a sound file on his iPhone standing in for the cassette player.

Going back further in time, Brian Stuart demoed his emulator of the fabled and immensely influential World War II–era ENIAC computer. Stuart’s emulator not only reproduces most of the internal workings of the behemoth machine on a PC, but also recreates the panel displays from old photographs so that they light up exactly as they would have when the real machine was running. When I arrived, Bill Mauchly, son of ENIAC co-creator John Mauchly, was looking over the emulator with obvious delight. Mauchly pointed out that one of the original programmers seen tending to the giant machine in a photograph was his mother, Kathleen Kay McNulty, whom Mauchly senior had married in 1948. “ENIAC is sort of like my step-brother,” he joked.

Other displays included a collection of Apple II clones from around the world, including a fascinating Bulgarian machine that housed a Z80 processor in addition to the Apple’s standard 6502 CPU. Users can switch between the processors, letting the machine run a much wider range of software than either CPU could alone. Another switch lets the machine’s display alternate between the Roman and Cyrillic alphabets (all programming had to be done using the Roman alphabet).

Speakers at the conference included John Blankenbaker, creator of the Kenbak-1, a little known non-microprocessor-based educational machine that has a good claim to being the first commercial personal computer. Ted Nelson, the man who coined the words hypertext and hypermedia (among other contributions to our modern digital lexicon), walked attendees through some of his alternative vision for what computing could be. Nelson’s original system design for hypertext, called Xanadu, included both “jump links”—now known as the hyperlinks that glue the Web together—and a system for visually presenting relationships between documents. Said Nelson:  “The World Wide Web is a fork of Xanadu,” one that kept the jump links but left out what Nelson considers the most important part: being able to visualize the connections between documents. He’s still working on a prototype of the full system, but as he nears his 80th birthday he ruefully admits, “all my plans involve being younger.” (Look out for the video of IEEE Spectrum’s interview with Nelson soon).

Evan Koblentz, the author of Abacus to Smartphone: The Evolution of Mobile Computers and president of the Vintage Computer Federation, a non-profit umbrella organization for a number of festivals, explains that one of his goals is to build bridges between historians of computer science and the enthusiasts and collectors who keep and tend early machines. “I think that [academic] researchers need to get their hands dirty, and hobbyists need to understand that research isn’t just looking things up on Wikipedia.”

If you missed this year’s show in New Jersey, you still have several chances to revisit the vintage world of computing. VCF Europa takes place in Munich from 30 April to 1 May, and VCF West will take place at the Computer History Museum in Silicon Valley on 6 and 7 August.

Video produced by Kristen Clark.


Mobile Forensics CEO Proposes Controversial Access Tech for Smartphones

The FBI may have unlocked the iPhone 5C held by a San Bernardino shooter without Apple’s help, but the agency and the world’s largest tech company are still at odds over whether law enforcement should be granted access into the smartphones of suspects and criminals.

On Tuesday, a U.S. House of Representatives subcommittee will hear arguments from Apple and the FBI on how best to weigh the privacy and security of citizens in such cases. Amid the dispute, the CEO of a mobile forensics company has proposed a controversial “backdoor” solution based on public-key cryptography that he says represents the best possible compromise between the two.

However, several cybersecurity and computer science experts interviewed by IEEE Spectrum disagree, saying that this type of access creates vulnerabilities and is of limited value to law enforcement.
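The article does not spell out the CEO’s scheme, but the general shape of a public-key “backdoor” is well known: each device wraps its own storage key under an escrow authority’s public key, and only the holder of the matching private key can unwrap it. The sketch below is a generic, hypothetical version of that idea (using Python’s cryptography library), not the specific proposal under discussion; the experts’ objection is precisely that the escrow private key becomes a single, very attractive target.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical escrow authority key pair; the private half would be kept offline.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

# A handset generates its own random storage-encryption key...
device_key = os.urandom(32)

# ...and wraps it under the escrow public key, storing only the ciphertext.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = escrow_public.encrypt(device_key, oaep)

# With legal authorization, the escrow authority could recover the key.
recovered = escrow_private.decrypt(wrapped_key, oaep)
assert recovered == device_key
```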


Watch Heat Surge Across Semiconductors at the Speed of Sound

Using ultrafast electron microscopy, researchers at the University of Minnesota in Minneapolis have made the first videos of acoustic phonons—quantized mechanical waves that carry energy through materials—moving heat through semiconductor crystals. The images show how defects in crystals of tungsten diselenide (WSe2) and germanium change the way energy propagates through the material.

In the video, one of several published with their paper in Nature Communications, phonons arise and flow through the “macroscopically ordered but microscopically disordered” crystals as water flows through a rocky stream. Phonons typically traverse defects in 100 femtoseconds (100 × 10⁻¹⁵ seconds), making them challenging to catch in the act.

All along its path, the wave causes momentary elastic changes in the crystal structure. This, in turn, changes the way the material diffracts the stroboscopic bright-field electron stream, revealing the phonon’s progress. (Bright-field microscopy is the simplest technique, familiar from high-school biology: light shines up from below and passes through the specimen and up to the objective.)
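To get an intuitive feel for why a defect stands out, it helps to watch a toy version of the physics. The sketch below propagates a one-dimensional displacement pulse through a medium whose wave speed dips over a short “defect” segment, which partially reflects and partially transmits the pulse; every number is arbitrary, and none is a material parameter from the Minnesota experiments.

```python
import numpy as np

# Toy 1D wave simulation: a Gaussian displacement pulse crossing a region of
# reduced wave speed (the "defect"). Purely illustrative parameters.
nx, nt = 400, 500
dx, dt = 1.0, 0.4                  # satisfies the stability condition c*dt/dx < 1
c = np.full(nx, 1.0)               # background wave speed
c[180:220] = 0.5                   # slower "defect" region

x = np.arange(nx)
u_prev = np.exp(-((x - 80) ** 2) / 50.0)   # initial pulse, zero initial velocity
u = u_prev.copy()

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_next[0] = u_next[-1] = 0.0           # fixed (reflecting) boundaries
    u_prev, u = u, u_next

left, right = np.sum(u[:180] ** 2), np.sum(u[220:] ** 2)
print(f"squared displacement before / beyond the defect: {left:.3f} / {right:.3f}")
```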

"As soon as we saw the waves, we knew it was an extremely exciting observation," said lead researcher David Flannigan, an assistant professor of chemical engineering and materials science, in a university statement. "Actually watching this process happen at the nanoscale is a dream come true."

"In many applications, scientists and engineers want to understand thermal-energy motion, control it, collect it, and precisely guide it to do useful work or very quickly move it away from sensitive components," Flannigan said. "Because the lengths and times are so small and so fast, it has been very difficult to understand in detail how this occurs in materials that have imperfections, as essentially all materials do. Literally watching this process happen would go a very long way in building our understanding, and now we can do just that."

The researchers found that the phonons don’t start uniformly along the crystal’s edge, but rather begin at a smaller nucleating spot. The appearance of “coherent, propagating wavefronts” is “extremely sensitive to the shape of local strain fields…and vacuum-crystal interfaces”—in short, the behavior of the phonon reflects the crystal structure and directly reveals local thermal and electronic characteristics.

The University of Minnesota research is the latest to show, in increasing detail, how phonons carry heat and sound through condensed matter, suggesting how developers might induce, fine-tune, and test made-to-order materials for transporting heat and current.

Stretching a self-healing artificial muscle made by Zhenan Bao's team at Stanford.

A Super-Stretchy Self-Healing Artificial Muscle

When you pull a muscle, it may hurt like heck for a while, but the human body can heal. The same is not true of the electrically responsive polymers used to make artificial muscles for haptic systems and experimental robots. When they get cut or punctured, it’s game over.

A new polymer that’s super stretchy and self-healing can act as a more resilient artificial muscle material. Created by a team led by Stanford University materials scientist Zhenan Bao, the polymer has an unusual combination of properties. A 2.5-centimeter sheet of the stuff can be stretched out to a length of 2.5 meters. When it’s punctured it fuses back together, something other self-healing materials don’t do well in ambient conditions.


New Full Duplex Radio Chip Transmits and Receives Wireless Signals at Once

A new wireless chip can perform a feat that could prove quite useful for the next generation of wireless technology: transmitting and receiving signals on the same frequency, at the same time, with the help of a single antenna. This approach instantly doubles the data capacity of existing technology, though the chip is not yet capable of the power levels necessary to operate on traditional mobile networks.

Last year, Harish Krishnaswamy, an electrical engineer at Columbia University, demonstrated the ability to transmit and receive signals on the same frequency using two antennas in a full-duplex radio that he built. Now, Negar Reiskarimian, a PhD student under Krishnaswamy, has embedded this technology on a chip that could eventually be used in smartphones and tablets. This time, the transmitter and receiver share a single antenna.
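The hard part of full duplex is that a radio’s own transmission arrives at its receiver enormously stronger than the faint signal it is trying to hear, so the receiver must somehow subtract what it already knows it sent. The toy calculation below illustrates that self-interference cancellation idea in software with made-up numbers; the Columbia chip does the heavy lifting with circuit-level techniques, not offline math like this.

```python
import numpy as np

# Toy self-interference cancellation: subtract an estimate of our own (much
# stronger) transmit signal from the received mix to recover the far-end signal.
rng = np.random.default_rng(0)
n = 1000

tx_own = rng.choice([-1.0, 1.0], size=n)   # our own transmitted symbols (known)
tx_far = rng.choice([-1.0, 1.0], size=n)   # symbols from the far end (unknown)

self_leak = 100.0                          # own signal ~40 dB stronger at the receiver
noise = 0.01 * rng.standard_normal(n)
received = self_leak * tx_own + tx_far + noise

# Estimate the leakage coefficient from the known transmit waveform, then cancel it.
leak_est = np.dot(received, tx_own) / np.dot(tx_own, tx_own)
cleaned = received - leak_est * tx_own

symbol_errors = int(np.sum(np.sign(cleaned) != tx_far))
print("symbol errors after cancellation:", symbol_errors)
```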


U.S. Leads Global Effort to Bring 1.5 Billion People Online by 2020

A global push to create more than a billion new Internet users over the next four years is underway, and leaders this week announced dozens of country-specific projects devoted to improving connectivity. India also officially signed on, joining more than 35 nations committed to expanding public Internet access and working with industry to build connections for rural users.

U.S. Secretary of State John Kerry led a meeting of global finance ministers, company executives, and government representatives on Thursday in Washington, D.C., to promote the U.S. State Department’s Global Connect Initiative, first announced last fall. The initiative has a stated goal of bringing 1.5 billion people online by 2020.

Kerry underscored the program’s ambition and mission by calling it “sort of the international equivalent of Franklin Roosevelt’s electrification program 80 years ago.”


SkinHaptics Uses Ultrasound to Generate Haptic Feedback Through Your Body

In the future that I'm planning on living in, nobody will carry around laptops or cell phones anymore. Instead, electronics will be embedded in wearables: in wristbands, in watches, in rings, in clothing, and eventually, in things like electronic temporary tattoos that you apply directly to your skin. The more embedded the technology gets, the trickier interacting with it becomes, since you're no longer physically holding objects. At the University of Sussex, in England, researchers have developed a system called SkinHaptics that transmits ultrasound straight through your body to generate focused haptic feedback on the surface of your skin.
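Focusing ultrasound through the body generally comes down to timing: each element of a transducer array fires with a delay chosen so that every wavefront arrives at the target spot at the same instant. The sketch below computes those delays for a hypothetical 16-element array, assuming the usual 1,540 m/s speed of sound in soft tissue; it is a generic illustration of time-delay focusing, not the Sussex group’s actual beamforming code.

```python
import numpy as np

# Time-delay focusing for a hypothetical linear array. Geometry, element count,
# and the soft-tissue sound speed are generic assumptions, not SkinHaptics
# prototype parameters.
c = 1540.0                                   # approx. speed of sound in soft tissue, m/s
elements_x = np.linspace(-0.02, 0.02, 16)    # 16 elements spread across 4 cm
focus = np.array([0.005, 0.03])              # target 3 cm deep, 5 mm off-axis

distances = np.sqrt((elements_x - focus[0]) ** 2 + focus[1] ** 2)
delays = (distances.max() - distances) / c   # farthest element fires first (zero delay)

for i, d in enumerate(delays):
    print(f"element {i:2d}: fire after {d * 1e6:5.2f} microseconds")
```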


3 Ways To Bridge The Digital Divide

What will it take to bring the next billion people online? These days, the answer has as much to do with smart policy as with technical expertise. This week in Washington, D.C., policy experts worked alongside engineers at a meeting (hosted in part by the IEEE Internet Initiative) intended to sketch a picture of what such a transition might look like around the world.

Companies such as Google and Facebook would like to know, and so would government leaders struck by the Internet’s power as an economic engine. More than half the world’s population, or about 4.2 billion people, do not have regular access to the Internet, according to the latest report published last fall by the U.N. Broadband Commission.

Last year, the U.S. State Department announced the Global Connect Initiative, which aims to bring 1.5 billion people online by 2020. As part of that effort, some of the ideas discussed this week will be presented on Thursday to financial ministers during a high-level meeting at the World Bank led by U.S. Secretary of State John Kerry.

Experts emphasized that there is no single technology or network structure that makes sense for every community. However, they offered a few good starting points for any country looking to bolster its number of Internet users:

1. Permit unlicensed use of white space.

White space is a term for TV and radio frequencies that aren’t currently in use for existing channels. Originally, these extra frequencies were packed between channels as a sort of buffer in order to prevent interference. But companies have since found ways to operate on these channels without causing any interruption to neighboring programs.

Furthermore, a global transition to digital television and radio from analog has freed up an even broader swath of spectrum. Digital signals can transmit on adjacent channels without causing a disruption to either. Since rural areas tend to have access to fewer existing channels in the first place, they would have even more leftover spectrum.

New devices including smartphones, tablets, and computers that know how to detect unused spectrum can use it to transmit wireless broadband signals, also known as “WhiteFi” or “Super Wi-Fi.” These frequencies are especially useful because they can carry a lot of data over long distances and reach indoors. Tech companies including Google, Microsoft, Intel, Dell, and HP faced off against broadcasters to support early efforts to reuse white space for this purpose, and launched some of the first tests for new devices capable of doing it.
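One simple way a device can “detect unused spectrum” is plain energy sensing: measure the received power on each channel and treat anything quieter than a threshold as free. (Real white-space rules typically also require consulting a regulator-approved geolocation database.) The sketch below is a toy version with simulated power readings and an arbitrary threshold.

```python
import numpy as np

# Toy energy detector for white-space channel selection. Channel count,
# threshold, and simulated power readings are illustrative only.
rng = np.random.default_rng(1)

n_channels = 20
noise_floor_dbm = -100.0
measured_dbm = noise_floor_dbm + rng.uniform(0, 5, n_channels)  # mostly idle air
measured_dbm[[3, 7, 12]] += 40.0                                # active broadcasters

threshold_dbm = -85.0   # anything louder than this is treated as occupied
free_channels = [ch for ch, power in enumerate(measured_dbm) if power < threshold_dbm]
print("channels considered safe to reuse:", free_channels)
```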

Now, enthusiasm for WhiteFi is picking up across the world. A national demonstration project in the United States conducted in public libraries has since spread to Finland, Malaysia, and the Philippines. Separately, Kenya has also experimented with it in two rural communities while Microsoft and Google recently led trials in South Africa. The Indian Institute of Technology has tested the technology in 13 villages and hopes to eventually serve many more.

2. Adopt a “dig once” mentality.

Whenever a company wants to install a new optical fiber cable to provide better Internet access to a house or community, it must first hire bulldozers and a construction crew to dig a path to the new destination. If multiple companies want to deploy fiber to the same area at different times, they might wind up digging the same route again.  

It’s easy to understand why this process is expensive and disruptive to locals. Experts at this week’s meeting say a much easier and cheaper approach would be for governments to require road construction crews to lay a single conduit alongside each new road as they are building it, through which all future fiber optic cables could be threaded. International development banks could do the same for the projects they fund. Experts stressed the value of these “dig once” policies; the U.S. Federal Highway Administration has said that this way of doing things can reduce the cost of deploying broadband by 90 percent.

This idea is gaining some traction, at least in the United States. The U.S. Departments of Commerce and Agriculture promoted it in a report published last fall. Around the same time, a lawmaker proposed a bill to implement it for all federal highway projects. However, the “dig once” policy is still not fully incorporated into federal, state, or local requirements and has yet to take hold elsewhere in the world.

3. Develop local content.

One of the most consistent ideas to emerge during this week’s meeting was that simply providing technical tools for Internet access isn’t sufficient. To welcome the next billion users, companies and technologists need to engage deeply with local communities to determine if and how they intend to use this access. That way, said the experts, networks can be built out in ways that best suit those purposes. In other words, responding to actual demand for the Internet is as important as devising new schemes to offer it.

One key part of that response is producing local content that is relevant to potential new users in their native languages. Many governments have begun to offer online services for employment, taxes, or licenses, which is one way to generate local content. Developers are also seeing success with local sites and apps that help people share with each other in a particular region. 

“You want to provide Internet access, but what do the end users really need?” said Dilip Krishnaswamy, an IBM researcher based in Bangalore, India. “Maybe they don’t care about the presidential election as much as they want to connect with each other.” India is a good example of the humongous potential demand for local material—it’s home to 1.2 billion people who speak 22 major languages.

All this new content must also be designed to work on devices that are available and popular in that area, rather than the latest smartphones used in Europe or the United States. During the meeting, experts at one table discussed obstacles to Internet use in Africa. They mentioned the ongoing challenge of simply charging devices in many parts of the continent. In response, someone tossed out the idea of hosting a hackathon devoted wholly to developing apps that consume as little power as possible.  

Editor’s note: This story was updated on April 15 to change “IEEE Internet Society” to “IEEE Internet Initiative.”


Software Rules Tax Preparation, But at What Cost?

It’s mid-April, which means it’s the end of tax season in America again, when those who haven’t yet filed their income taxes scramble to beat the impending deadline. This year, like every year, more of those filers will use software to help them prepare their taxes than ever before.

It’s been thirty years since the Internal Revenue Service began embracing technology in a big way: In 1986 the agency piloted a program for electronic filing. The initial project required an IRS employee to manually turn a modem on each time returns were received, and it could only process certain simple returns. From 25,000 returns in that pilot year, the program grew rapidly: to 4.2 million returns the first year the program went nationwide, in 1990; to 68 million in 2005, when electronic filing surpassed mailed returns; and to over 125 million last year, or more than 85% of all individual returns.

Today, computers are ubiquitous throughout the process of taxation. Since 2010, the IRS no longer mails out 1040 forms—even if you still want to fill out paper forms, the agency expects you to download and print them yourself.

The rise of electronic filing has been mirrored by the growing role and influence of tax prep software. In 2015, over 50 million people filed self-prepared electronic returns, accounting for 1 in 3 individual filings. While more taxpayers still rely on tax professionals, the balance continues to slowly shift toward software-assisted self-filing (in 2006, only 15% of returns were done that way).

In some ways, taxes are a natural domain for computer assistance. Tax legislation can mostly be modeled as a set of rules and criteria that apply under certain conditions. But the problem is that most tax codes were not written with automation in mind, so there’s a lot of work required to translate them into a technical specification. (As my colleague Robert Charette has noted, the Standard Federal Tax Reporter, which explains the U.S. tax code to accountants, has grown to over 70,000 pages). Not to mention the dozens of state and local tax regulations.
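To see why tax rules translate so readily into software, consider marginal brackets expressed as plain data plus a small function that walks through them, as in the sketch below. The brackets and rates are invented for illustration and are not actual IRS figures.

```python
from typing import List, Tuple

# Hypothetical marginal brackets: (upper bound of bracket, marginal rate).
BRACKETS: List[Tuple[float, float]] = [
    (10_000, 0.10),
    (40_000, 0.20),
    (float("inf"), 0.30),
]

def tax_owed(taxable_income: float) -> float:
    """Apply each marginal rate to the slice of income falling in its bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return owed

print(tax_owed(55_000))   # 10k*0.10 + 30k*0.20 + 15k*0.30 = 11,500
```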

The upfront investment required to build a comprehensive abstraction layer on top of such a large collection of requirements is a high barrier to entry for new competitors. That partially explains the success of Intuit’s TurboTax, which dominates the consumer market, processing more than twice as many returns as its nearest competitors, H&R Block and TaxAct, combined. Together, the three account for nearly 90% of returns filed electronically by individuals.

There are a number of reasons consumers choose software like TurboTax, with convenience and cost near the top of the list. (Disclosure: I’ve used TurboTax for many years, including this year). But not everything that’s good for TurboTax is good for its customers, and certainly not for the IRS.

For one thing, TurboTax’s maker, Intuit, has a vested interest in making sure the tax code stays complex or becomes even more complex over time. The company has lobbied heavily against initiatives like California’s return-free filing.

There’s also evidence that the sheer scale of TurboTax’s customer base has given Intuit a wealth of valuable data, allowing the company to understand taxes as well as—and sometimes better than—the IRS. That came to light last year when TurboTax was forced to temporarily stop processing state returns after an unprecedented increase in fraudulent returns. A pair of whistleblowers claimed that TurboTax ignored its own internal fraud models, which were more reliable than those at the IRS. Similarly, I suspect that TurboTax has a large enough sample of data to accurately reverse engineer the IRS’s audit-risk models (which allows the company to confidently offer audit protection for an additional fee).

Finally, there’s a danger for filers dependent on tax-preparation software: The more we rely on software like TurboTax, the more we risk falling into the complacency of the automation paradox, where we no longer know enough about how our taxes work to hold the software accountable or do our own sanity checks. Maybe we would be better off with a simpler underlying protocol than with a user-friendly abstraction layer.

In any case, best of luck to those of you who have yet to file!


