Tech Talk

Watch Heat Surge Across Semiconductors at the Speed of Sound

Using ultrafast electron microscopy, researchers at the University of Minnesota in Minneapolis have made the first videos of acoustic phonons—quantized mechanical waves that carry energy through materials—moving heat through semiconductor crystals. The images show how defects in crystals of tungsten diselenide (WSe2) and germanium change the way energy propagates through the material.

In the video, one of several published with their paper in Nature Communications, phonons arise and flow through the “macroscopically ordered but microscopically disordered” crystals much as water flows through a rocky stream. Phonons typically traverse defects in 100 femtoseconds (100 × 10⁻¹⁵ s), making them challenging to catch in the act.
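
A rough sense of scale helps explain why that window is so brief (a back-of-the-envelope estimate of my own, not a figure from the paper): acoustic phonons move at roughly the speed of sound in the crystal, a few kilometers per second, so in 100 femtoseconds a wavefront advances only about half a nanometer, just a few atomic spacings.

# Scale check in Python, using an assumed acoustic speed of ~5 km/s
# (roughly the longitudinal sound speed in germanium; not a value from the paper).
sound_speed_m_per_s = 5.0e3      # assumed representative phonon speed
traversal_time_s = 100e-15       # the 100-femtosecond figure quoted above
distance_m = sound_speed_m_per_s * traversal_time_s
print(f"{distance_m * 1e9:.1f} nm")  # prints "0.5 nm", a few atomic spacings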

All along its path, the wave causes momentary elastic changes in the crystal structure. This, in turn, changes the way the material diffracts the stroboscopic bright-field electron stream, revealing the phonon’s progress. (Bright-field microscopy is the simplest technique, familiar from high-school biology: light shines up from below and passes through the specimen and up to the objective.)

"As soon as we saw the waves, we knew it was an extremely exciting observation," said lead researcher David Flannigan, an assistant professor of chemical engineering and materials science, in a university statement. "Actually watching this process happen at the nanoscale is a dream come true."

"In many applications, scientists and engineers want to understand thermal-energy motion, control it, collect it, and precisely guide it to do useful work or very quickly move it away from sensitive components," Flannigan said. "Because the lengths and times are so small and so fast, it has been very difficult to understand in detail how this occurs in materials that have imperfections, as essentially all materials do. Literally watching this process happen would go a very long way in building our understanding, and now we can do just that."

The researchers found that the phonons don’t start uniformly along the crystal’s edge, but rather begin at a smaller nucleating spot. The appearance of “coherent, propagating wavefronts” is “extremely sensitive to the shape of local strain fields…and vacuum-crystal interfaces”—in short, the behavior of the phonon reflects the crystal structure and directly reveals local thermal and electronic characteristics.

The University of Minnesota research is the latest to show in increasing detail how phonons carry heat and sound through condensed matter, suggesting how developers could induce, fine-tune, and test made-to-order materials for transporting heat and current.

Stretching a self-healing artificial muscle made by Zhenan Bao's team at Stanford.

A Super-Stretchy Self-Healing Artificial Muscle

When you pull a muscle, it may hurt like heck for a while, but the human body can heal. The same is not true of the electrically responsive polymers used to make artificial muscles for haptic systems and experimental robots. When they get cut or punctured, it’s game over.

A new polymer that’s super stretchy and self-healing can act as a more resilient artificial muscle material. Created by a team led by Stanford University materials scientist Zhenan Bao, the polymer has an unusual combination of properties. A 2.5-centimeter sheet of the stuff can be stretched out to a length of 2.5 meters. When it’s punctured it fuses back together, something other self-healing materials don’t do well in ambient conditions.

New Full Duplex Radio Chip Transmits and Receives Wireless Signals at Once

A new wireless chip can perform a feat that could prove quite useful for the next generation of wireless technology: transmitting and receiving signals on the same frequency, at the same time, with the help of a single antenna. This approach instantly doubles the data capacity of existing technology, though it is not yet capable of the power levels necessary to operate on traditional mobile networks.

Last year, Harish Krishnaswamy, an electrical engineer at Columbia University, demonstrated the ability to transmit and receive signals on the same frequency using two antennas in a full duplex radio that he built. Now, Negar Reiskarimian, a PhD student under Krishnaswamy, has embedded this technology on a chip that could eventually be used in smartphones and tablets. This time, the transmitter and receiver share a single antenna.

U.S. Leads Global Effort to Bring 1.5 Billion People Online by 2020

A global push to create more than a billion new Internet users over the next four years is underway, and leaders this week announced dozens of country-specific projects devoted to improving connectivity. India also officially signed on, joining more than 35 nations committed to expanding public Internet access and working with industry to build connections for rural users.

U.S. Secretary of State John Kerry led a meeting of global finance ministers, company executives, and government representatives on Thursday in Washington, D.C., to promote the U.S. State Department’s Global Connect Initiative, first announced last fall. The initiative has a stated goal of bringing 1.5 billion people online by 2020.

Kerry underscored the program’s ambition and mission by calling it “sort of the international equivalent of Franklin Roosevelt’s electrification program 80 years ago.”

SkinHaptics Uses Ultrasound to Generate Haptic Feedback Through Your Body

In the future that I'm planning on living in, nobody will carry around laptops or cell phones anymore. Instead, electronics will be embedded in wearables: in wristbands, in watches, in rings, in clothing, and eventually, in things like electronic temporary tattoos that you apply directly to your skin. The more embedded the technology gets, the trickier interaction with it can be, since you're no longer physically holding objects. At the University of Sussex, in England, researchers have developed a system called SkinHaptics that transmits ultrasound straight through your body to generate focused haptic feedback on the surface of your skin.

3 Ways To Bridge The Digital Divide

What will it take to bring the next billion people online? These days, the answer has as much to do with smart policy as with technical expertise. This week in Washington, D.C., policy experts worked alongside engineers at a meeting (hosted in part by the IEEE Internet Initiative) intended to sketch a picture of what such a transition might look like around the world.

Companies such as Google and Facebook would like to know, and so would government leaders struck by the Internet’s power as an economic engine. More than half the world’s population, or about 4.2 billion people, do not have regular access to the Internet, according to the latest report published last fall by the U.N. Broadband Commission.

Last year, the U.S. State Department announced the Global Connect Initiative, which aims to bring 1.5 billion people online by 2020. As part of that effort, some of the ideas discussed this week will be presented on Thursday to finance ministers during a high-level meeting at the World Bank led by U.S. Secretary of State John Kerry.

Experts emphasized that there is no single technology or network structure that makes sense for every community. However, they offered a few good starting points for any country looking to bolster its number of Internet users:

1. Permit unlicensed use of white space.

White space is a term for TV and radio frequencies that aren’t currently in use for existing channels. Originally, these extra frequencies were packed between channels as a sort of buffer in order to prevent interference. But companies have since found ways to operate on these channels without causing any interruption to neighboring programs.

Furthermore, a global transition to digital television and radio from analog has freed up an even broader swath of spectrum. Digital signals can transmit on adjacent channels without causing a disruption to either. Since rural areas tend to have access to fewer existing channels in the first place, they would have even more leftover spectrum.

New devices including smartphones, tablets, and computers that know how to detect unused spectrum can use it to transmit wireless broadband signals, also known as “WhiteFi” or “Super Wi-Fi.” These frequencies are especially useful because they can carry a lot of data over long distances and reach indoors. Tech companies including Google, Microsoft, Intel, Dell, and HP faced off against broadcasters to support early efforts to reuse white space for this purpose, and launched some of the first tests for new devices capable of doing it.

Now, enthusiasm for WhiteFi is picking up across the world. A national demonstration project in the United States conducted in public libraries has since spread to Finland, Malaysia, and the Philippines. Separately, Kenya has also experimented with it in two rural communities while Microsoft and Google recently led trials in South Africa. The Indian Institute of Technology has tested the technology in 13 villages and hopes to eventually serve many more.

2. Adopt a “dig once” mentality.

Whenever a company wants to install a new optical fiber cable to provide better Internet access to a house or community, it must first hire bulldozers and a construction crew to dig a path to the new destination. If multiple companies want to deploy fiber to the same area at different times, they might wind up digging the same route again.  

It’s easy to understand why this process is expensive and disruptive to locals. Experts at this week’s meeting say a much easier and cheaper approach would be for governments to require road construction crews to lay a single conduit alongside each new road as they are building it, through which all future fiber optic cables could be threaded. International development banks could do the same for the projects they fund. Experts stressed the value of these “dig once” policies; the U.S. Federal Highway Administration has said that this way of doing things can reduce the cost of deploying broadband by 90 percent.

This idea is gaining some traction, at least in the United States. The U.S. Departments of Commerce and Agriculture promoted it in a report published last fall. Around the same time, a lawmaker proposed a bill to implement it for all federal highway projects. However, the “dig once” policy is still not fully incorporated into federal, state, or local requirements and has yet to take hold elsewhere in the world.

3. Develop local content.

One of the most consistent ideas to emerge during this week’s meeting was that simply providing technical tools for Internet access isn’t sufficient. To welcome the next billion users, companies and technologists need to engage deeply with local communities to determine if and how they intend to use this access. That way, said the experts, networks can be built out in ways that best suit those purposes. In other words, responding to actual demand for the Internet is as important as devising new schemes to offer it.

One key part of that response is producing local content that is relevant to potential new users in their native languages. Many governments have begun to offer online services for employment, taxes, or licenses, which is one way to generate local content. Developers are also seeing success with local sites and apps that help people share with each other in a particular region. 

“You want to provide Internet access, but what do the end users really need?” said Dilip Krishnaswamy, an IBM researcher based in Bangalore, India. “Maybe they don’t care about the presidential election as much as they want to connect with each other.” India is a good example of the humongous potential demand for local material—it’s home to 1.2 billion people who speak 22 major languages.

All this new content must also be designed to work on devices that are available and popular in that area, rather than the latest smartphones used in Europe or the United States. During the meeting, experts at one table discussed obstacles to Internet use in Africa. They mentioned the ongoing challenge of simply charging devices in many parts of the continent. In response, someone tossed out the idea of hosting a hackathon devoted wholly to developing apps that consume as little power as possible.  

Editor’s note: This story was updated on April 15 to change “IEEE Internet Society” to “IEEE Internet Initiative.”

Software Rules Tax Preparation, But at What Cost?

It’s mid-April, which means it’s the end of tax season in America again, when those who haven’t yet filed their income taxes scramble to beat the impending deadline. This year, like every year, more of those filers will use software to help them prepare their taxes than ever before.

It’s been thirty years since the Internal Revenue Service began embracing technology in a big way: In 1986 the agency piloted a program for electronic filing. The initial project required an IRS employee to manually turn a modem on each time returns were received, and it could only process certain simple returns. From 25,000 returns in that pilot year, the program grew rapidly: to 4.2 million returns the first year the program went nationwide, in 1990; to 68 million in 2005, when electronic filing surpassed mailed returns; and to over 125 million last year, or more than 85% of all individual returns.

Today, computers are ubiquitous throughout the process of taxation. Since 2010, the IRS no longer mails out 1040 forms—even if you still want to fill out paper forms, the agency expects you to download and print them yourself.

The rise of electronic filing has been mirrored by the growing role and influence of tax prep software. In 2015, over 50 million people filed self-prepared electronic returns, accounting for 1 in 3 individual filings. While more taxpayers still rely on tax professionals, the balance continues to slowly shift toward software-assisted self-filing (in 2006, only 15% of returns were done that way).

In some ways, taxes are a natural domain for computer assistance. Tax legislation can mostly be modeled as a set of rules and criteria that apply under certain conditions. But the problem is that most tax codes were not written with automation in mind, so there’s a lot of work required to translate them into a technical specification. (As my colleague Robert Charette has noted, the Standard Federal Tax Reporter, which explains the U.S. tax code to accountants, has grown to over 70,000 pages). Not to mention the dozens of state and local tax regulations.
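
To make the rules-and-criteria framing concrete, here is a minimal sketch, with made-up brackets rather than the actual U.S. schedule, of how a tiny slice of a tax code might be encoded as data plus a rule-evaluation function:

# Hypothetical progressive brackets as (lower bound, marginal rate) pairs.
# These numbers are illustrative only; they are not real tax law.
BRACKETS = [(0, 0.10), (10_000, 0.20), (50_000, 0.30)]

def tax_owed(income: float) -> float:
    # Apply each marginal rate to the slice of income inside its bracket.
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

print(tax_owed(60_000))  # 1,000 + 8,000 + 3,000 = 12,000 with these made-up rates

The hard part is not arithmetic like this but capturing the thousands of interacting conditions, exceptions, and phase-outs in rules of this form, and keeping them current as the law changes.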

The upfront investment required to build a comprehensive abstraction layer on top of such a large collection of requirements is a steep barrier to entry for new competitors. That partially explains the success of Intuit’s TurboTax, which dominates the consumer market, processing more than twice as many returns as its nearest competitors, H&R Block and TaxAct, combined. Together, the three account for nearly 90% of returns filed electronically by individuals.

There are a number of reasons consumers choose software like TurboTax, with convenience and cost near the top of the list. (Disclosure: I’ve used TurboTax for many years, including this year). But not everything that’s good for TurboTax is good for its customers, and certainly not for the IRS.

For one thing, TurboTax has a vested interest in making sure the tax code stays complex or becomes even more complex over time. The company has lobbied heavily against initiatives like California’s return-free filing.

There’s also evidence that the sheer scale of TurboTax’s customer base has given the company a wealth of valuable data, allowing it to understand taxes as well as—and sometimes better than—the IRS. That came to light last year when TurboTax was forced to temporarily stop processing state returns after an unprecedented increase in fraudulent returns. A pair of whistleblowers claimed that TurboTax ignored its own internal fraud models, which were more reliable than those at the IRS. Similarly, I suspect that TurboTax has a large enough sample size of data to accurately reverse engineer IRS auditing risk models (which allows it to confidently offer audit protection for an additional fee).

Finally, there’s a danger to filers dependent on tax-preparation software: The more we rely on software like TurboTax, the more we risk falling into the complacency of the automation paradox, where we no longer know enough about how taxes work to hold the system accountable or do our own sanity checks. Maybe we would be better off with a simpler underlying protocol than with a user-friendly abstraction layer.

In any case, best of luck to those of you who have yet to file!

Bringing Augmented Reality to Real Eyeglasses

If reality isn’t cutting it for you, just hold on; engineers are working on augmenting it. At least, they hope to show you more than what would normally be before your eyes, by adding systems to ordinary eyeglasses that would display images and data to enhance your experience.

Hbar Technologies concept of an antimatter-catalyzed nuclear pulse propulsion starship.

Antimatter Starship Scheme Coming to Kickstarter

A spaceship departs Earth on a one-way, 42-year trip to Alpha Centauri. It runs on an antimatter engine that blasts the ship out of the solar system at one-tenth the speed of light. This is not the premise for a new Ridley Scott sci-fi drama but rather the endgame of a crowdfunded spaceship project launching this month.
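
The numbers roughly check out (my arithmetic, not Hbar’s): Alpha Centauri lies about 4.37 light-years away, so a ship cruising at one-tenth the speed of light would cover the distance in about 44 years, in the same ballpark as the quoted 42-year trip.

# Back-of-the-envelope travel time at an assumed constant cruise speed
# (ignores acceleration and deceleration phases).
distance_ly = 4.37        # light-years to Alpha Centauri
cruise_speed_c = 0.10     # fraction of the speed of light
print(f"{distance_ly / cruise_speed_c:.0f} years")  # prints "44 years"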

West Chicago, Ill.-based Hbar Technologies plans a Kickstarter effort to raise US $200,000 for the next design phase of an antimatter-propelled spaceship. The two scientists behind this design effort are a veteran Fermilab particle accelerator scientist and a former Los Alamos National Laboratory physicist who was the founding director of the U.S. Center for Space Nuclear Research. They originally developed the concept for NASA at the turn of the millennium.

Because of budget cutbacks, the U.S. space agency dropped the antimatter-driven spaceship project in 2004. But the scientists say the plans they’re developing are technically feasible—if admittedly still quite optimistic in terms of the breakthroughs needed to enable antimatter to be stored in a fuel tank.

A smart phone with Higher Ground's satellite attachment Satpaq on the back.

Could Satellite Messaging Startup Higher Ground Bring Down the 911 System?

Higher Ground might be the only Silicon Valley startup promising not to disrupt its entire industry. This small satellite messaging business is battling claims by telecom companies that its SatPaq device could interfere with their services, interrupt life-saving emergency calls, and even cause major outages across the United States.

IEEE Spectrum can reveal that for the past several years, Higher Ground has been quietly developing SatPaq, a smartphone case with a flip-up antenna that communicates with geostationary satellites. Connecting to a smartphone messaging app via Bluetooth, SatPaq can send and receive text messages and email almost anywhere in the United States, including the wilds of Alaska.

The problem is that SatPaq works in the same C-band microwave frequencies used by CenturyLink and other companies for voice and data communications in rural areas and as part of their national networks. They fear that the widespread use of SatPaqs could result in catastrophic interference.

“[This] is not just potential interference to one or two specific links in a particular location but… potential interference to each and every such link of the network throughout the country,” wrote CenturyLink in a submission to the Federal Communications Commission (FCC). “This seems to be a recipe for disaster.”
