Tech Talk

5 Things You Missed This Week at IEEE Spectrum: Nanorods for Li-Fi Displays, Health Apps Could Make People Sicker, and More

1. Nanorods Emit and Detect Light, Could Lead to Displays That Communicate via Li-Fi

In recent years, the hot application for quantum dots has been as a replacement for light-emitting diodes (LEDs) as a backlight source for liquid crystal displays. But now, an international team of researchers has produced engineered nanorods that each feature a quantum dot capable of emitting and absorbing visible light. With this advance, quantum dots could someday yield mobile phones that can “see” without the need for a camera lens or communicate with each other using Light Fidelity (Li-Fi) technology.

 

2. Could Mobile Health Apps and Wearables Actually Make People Sicker?

A recent opinion piece about wearable tech for infants pulls no punches: “There is no evidence that consumer infant physiologic monitors are life-saving, and there is potential for harm if parents choose to use them.” That wasn’t just any random person’s judgment. The article was published in the Journal of the American Medical Association and was authored by two pediatricians and an expert from the ECRI Institute, a nonprofit organization dedicated to the rigorous evaluation of medical procedures and devices.

 

3. Medtronic's CardioInsight Electrode Vest Maps Heart's Electrical System

The 252-electrode device could help doctors pinpoint the locations of electrical malfunctions in the heart that cause irregular heartbeats.

 

4. New Terahertz Transmitter Shines With Ultrafast Data Speeds

The tiny CMOS-based transmitter can send data packets wirelessly at rates as high as 105 gigabits per second.

 

5. Millimeter-Scale Computers: Now With Deep Learning Neural Networks on Board

University of Michigan micromote computers—tiny, energy-efficient computing sensors that can do analysis on board—aim to make the Internet of Things smarter without consuming more power.

A millimeter-scale computer looks like a stack of chips

Millimeter-Scale Computers: Now With Deep-Learning Neural Networks on Board

Computer scientist David Blaauw pulls a small plastic box from his bag. He carefully uses his fingernail to pick up the tiny black speck inside and place it on the hotel café table. At 1 cubic millimeter, this is one of a line of the world’s smallest computers. I had to be careful not to cough or sneeze lest it blow away and be swept into the trash.

Blaauw and his colleague Dennis Sylvester, both IEEE Fellows and computer scientists at the University of Michigan, were in San Francisco this week to present 10 papers related to these “micromote” computers at the IEEE International Solid-State Circuits Conference (ISSCC). They’ve been presenting different variations on the tiny devices for a few years.

Their broader goal is to make smarter, smaller sensors for medical devices and the Internet of Things—sensors that can do more with less energy. Many of the microphones, cameras, and other sensors that make up the eyes and ears of smart devices are always on alert, and frequently beam personal data into the cloud because they can’t analyze it themselves. Some have predicted that by 2035, there will be 1 trillion such devices. “If you’ve got a trillion devices producing readings constantly, we’re going to drown in data,” says Blaauw. By developing tiny, energy-efficient computing sensors that can do analysis on board, Blaauw and Sylvester hope to make these devices more secure, while also saving energy.
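The energy argument comes down to moving less data off the device. The sketch below is purely illustrative—it is not the Michigan group’s firmware, and the classifier and threshold are invented for the example—but it shows the basic idea: run a cheap analysis on every reading locally and radio out only the rare readings that matter.

```python
# Illustrative sketch (not the micromote's actual firmware): analyze readings
# on the device and transmit only rare events instead of streaming raw data.
import random

THRESHOLD = 0.8           # hypothetical anomaly-score cutoff
SAMPLES_PER_DAY = 86_400  # one reading per second

def tiny_classifier(reading: float) -> float:
    """Stand-in for an on-board model; returns an anomaly score in [0, 1]."""
    return reading  # assume readings are already normalized scores

def sensor_stream():
    """Simulated sensor readings; a real device would sample hardware here."""
    for _ in range(SAMPLES_PER_DAY):
        yield random.random() ** 4  # mostly small values, occasional spikes

sent = 0
for r in sensor_stream():
    if tiny_classifier(r) > THRESHOLD:
        sent += 1  # only these events would be radioed out

print(f"Transmitted {sent} of {SAMPLES_PER_DAY} samples "
      f"({100 * sent / SAMPLES_PER_DAY:.2f}% of the raw data)")
```

Under these assumptions, only a few percent of the samples ever leave the sensor, which is the kind of reduction that lets a millimeter-scale radio and battery keep up.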

A tiny terahertz transmitter is mounted under a microscope in a lab at Hiroshima University.

New Terahertz Transmitter Shines With Ultrafast Data Speeds

This week, researchers at Hiroshima University showed off a new terahertz transmitter that is just as powerful as its predecessors, but should ultimately prove more affordable for commercial applications. In a demo at the International Solid-State Circuits Conference in San Francisco, they presented a device capable of delivering data at breathtaking speeds of more than 100 gigabits per second at a frequency of 300 gigahertz.

At its very best, the transmitter can shuttle data at 105 Gb/s, which is 2,100 times faster than the peak cellular speeds of 50 megabits per second available through LTE. After a successful demo, the transmitter could find its way into future wireless applications that require low latency and high bandwidth.

Though other transmitters have achieved speedy data rates in the terahertz range before, the group says theirs is the first to also be based on a CMOS integrated circuit, which means it’s potentially more viable for commercial base stations or devices.

“This is quite a step for this kind of technology, because it relies on something that is freely available and could be easily implemented, compared to all of the other techniques,” says Riccardo Degl’Innocenti, a researcher at the University of Cambridge who was not involved in the work.

Terahertz waves are shorter in length and are broadcast at much higher frequencies than the microwaves used today for smartphones, household devices, or military radar. For example, Wi-Fi devices emit waves that measure about 12 centimeters in length at a frequency of 2.4 GHz. Waves in the terahertz range span less than 1 millimeter and start at 100 GHz.
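Both comparisons in this story reduce to simple arithmetic: wavelength is the speed of light divided by frequency, and the speedup over LTE is just a ratio of data rates. A quick back-of-the-envelope check:

```python
# Sanity check of the figures quoted above: wavelength = c / frequency,
# and the ratio of 105 Gb/s to LTE's 50 Mb/s peak.
C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

print(f"Wi-Fi at 2.4 GHz:  {wavelength_m(2.4e9) * 100:.1f} cm")   # ~12.5 cm
print(f"300 GHz carrier:   {wavelength_m(300e9) * 1000:.2f} mm")  # ~1.00 mm
print(f"105 Gb/s vs. 50 Mb/s LTE: {105e9 / 50e6:,.0f}x")          # 2,100x
```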

Other teams have demonstrated competing terahertz transmitters that deliver data at speeds even faster than those shown by the Hiroshima group. However, these systems often relied on technology that was bulky or could not easily scale.

In contrast, the new transmitter has a 2-by-3-mm footprint, and was created using a 40-nanometer CMOS process. “There are many ways also to build a terahertz wireless system,” says Degl’Innocenti. “However, this is still progress because the CMOS technology was sort of lagging behind.”

Minoru Fujishima, a professor at Hiroshima University and a member of the team that developed the transmitter, says the primary advantage of fabricating the device with CMOS is that it will allow manufacturers to sell it at a competitive price if it is commercialized. However, the first run was still rather expensive. The tiny transmitter he demonstrated cost US $100,000 to build.

Fujishima’s group hopes their transmitter can be used in satellite communications, or to set up a wireless link between cellular base stations. “I think that is a very promising application because space cannot be linked by fiber optics,” he says.

Elsewhere, companies and researchers have developed extra-sensitive receivers to reliably detect terahertz waves, which are quickly absorbed as they travel through the atmosphere.

Thomas Kürner, who has worked at TU Braunschweig in Germany on projects in which terahertz transmitters have been developed, calls the new research “quite a milestone.” Alongside Iwao Hosako, who is a coauthor with Fujishima, Kürner is leading the IEEE 802.15 Task Group 3d; the group’s mission is to develop a standard for devices that will operate in the 300-GHz band.

Kürner says the task group is considering four primary applications for 300-GHz devices. One is replacing the wires inside devices with high-speed terahertz links that can send data from one part of the device to another. The second is using terahertz waves to create wireless kiosks in retail stores that would let customers instantly download films to their devices instead of taking a DVD home. The third, says Kürner, is to create wireless connections for data centers that can replace fiber optic cables. And the final application is to use terahertz waves for fronthaul or backhaul in cellular networks.

A self-destruct mechanism based on an expanding polymer layer can destroy a silicon chip within 10 seconds

Self-Destructing Gadgets Made Not So Mission Impossible

Self-destruct options from the Mission: Impossible movies could become a reality for even the most common smartphones and laptops used by government officials or corporate employees. A new self-destruct mechanism can destroy electronics within 10 seconds through wireless commands or the triggering of certain sensors.

Can a Bitcoin-enabled browser be the publishing industry's savior?

Can Brave's Bitcoin Payment Platform Save Online Publishing?

Last year, Brendan Eich, former CEO of the Mozilla Corporation and designer of the JavaScript programming language, launched Brave, a Web browser that blocks advertisements by default. Now Eich is rolling out a new Bitcoin payment platform, integrated right into the browser, that he hopes will provide an alternative revenue stream for publishers. He views it as a replacement for the one Brave takes away, which he argues is dysfunctional and on the verge of collapse.

As of September, people using Brave have the option of creating a wallet in the browser, loading it with bitcoins, and sending small payments to publishers based on the anonymized metering of their Web traffic. For now, Brave plays a central role in facilitating the transactions, although it has sought to do so in a way that protects the privacy of Brave users.

When you create a wallet with Brave, you actually share it with a company called BitGo, meaning that you and BitGo each own one key for the wallet, both of which need to be present in order for a payment to go through. After loading bitcoins into this wallet, you specify the total amount of money you would like to spend on your Web browsing. Then, after a month goes by (measured by the days you actually spend using the Brave browser), bitcoin transactions signed by both you and BitGo trigger the disbursement of that money into a Brave settlement wallet.

Before a website operator can collect the funds, it must go through a verification process with Brave to prove that it’s running a legitimate business. In return for providing this service, Brave takes five percent of all the donations that come through.
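The settlement arithmetic described above is straightforward to sketch. The snippet below is purely illustrative—the sites, attention figures, and budget are invented, and Brave’s actual accounting may differ in detail—but it shows the shape of the flow: a monthly budget split across publishers in proportion to metered attention, with 5 percent going to Brave.

```python
# Illustrative sketch (not Brave's code) of the monthly settlement math:
# split a user's budget by metered attention, minus Brave's 5 percent fee.
MONTHLY_BUDGET_BTC = 0.01   # hypothetical amount the user loaded into the wallet
BRAVE_FEE = 0.05            # the 5 percent cut described above

# Hypothetical anonymized attention metering: seconds of active viewing per site.
attention = {
    "examplenews.com": 4_200,
    "techblog.example": 2_700,
    "longreads.example": 1_100,
}

total_attention = sum(attention.values())
payable = MONTHLY_BUDGET_BTC * (1 - BRAVE_FEE)

payouts = {site: payable * seconds / total_attention
           for site, seconds in attention.items()}

for site, amount in payouts.items():
    print(f"{site}: {amount:.6f} BTC")
print(f"Brave's fee: {MONTHLY_BUDGET_BTC * BRAVE_FEE:.6f} BTC")
```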

Job training and MOOCs

Why Your Next Job Training Course May Be a MOOC

This is part of a series on MOOCs and online learning

Over the past two decades, the great Internet wave that swept through industry and revolutionized everything in its wake—including manufacturing, product development, supply-chain management, marketing, financial transactions, and customer service—likewise transformed on-the-job training. Companies eager to cut costs saw the overwhelming economic advantage of online instruction over the conventional classroom, and so they shuttered lavish country-club-style training parks and canceled employee travel to professional development courses in exotic locales. These days, most workers tend to receive their training at their desks, the better to maintain productivity.

Web instruction has also helped companies expand internationally because they can easily circulate self-learning modules to a geographically dispersed labor force at relatively low cost. As Australian scholar Paul Nicholson observed, “E-learning in business and training [is] driven by notions of improved productivity and cost reduction, especially in an increasingly globalized business environment.”

Over the past decade, employee enrollment in online programs has grown 20 times faster than has student enrollment at traditional colleges and universities. By 2020, 60 percent of workers receiving tuition reimbursement will be enrolled in online programs, according to EdAssist, a corporate tuition-assistance consulting firm.

Yet despite the corporate romance with online training for employees, companies have had a more troubled relationship with the virtual education offered by colleges and universities. When digital university programs first became available in the mid-1990s, many companies simply ignored them, refusing to provide tuition assistance to employees who enrolled in digital degree programs. Later, when it became apparent that some of the nation’s most selective schools actually offered high-quality online master’s degrees, especially in fields that paralleled industry needs, businesses grew more accepting. 

To be sure, not every program offered a high-quality education, and a number of companies unwittingly allowed their employees to enroll in for-profit online schools that turned out to be scams. “For a time, companies were not as serious about vetting universities as they are today,” says Allan Weisberg, former chief learning officer at Johnson & Johnson. “When we finally looked into some for-profits, we discovered they were scams, and turned them down.”

A number of Fortune 500 companies responded by setting stricter rules on their tuition-reimbursement programs to prevent unsuspecting employees from throwing away money—the company’s as well as their own—on discredited programs at for-profits and other substandard schools. Other companies sensibly steered their workers toward approved universities, which must be ABET-accredited, perform serious research that parallels the firm’s own research interests, and employ significant numbers of the school’s own alumni. “Today, wise companies invest their tuition dollars in established non-profit and public schools,” says Weisberg. “With stricter policies, companies want to make sure that tuition assistance is valuable to all parties—employees, corporations, and universities.”

Ideally, online training should give personnel the chance to acquire new and valuable skills, perhaps in emerging fields like cyber security or data science. Such training helps the company, of course, and it also gives workers an edge in a tricky economy. Earning a degree online is also a huge convenience for workers, whose days are already filled as it is. A mid-career engineer with job, family, and travel responsibilities can more easily study online at his or her own pace—at 10 at night after the kids are in bed—than commute to campus.

Given that the switch to online job training was largely a cost-cutting move, it’s only natural that when MOOCs—massive open online courses—came on the scene in 2011, companies were curious. Because they’re designed to reach hundreds or thousands of students at once, MOOCs benefit from economies of scale that smaller online programs don’t share.

Google and Instagram are experimenting with MOOC provider Coursera’s “Specializations,” which are groups of related courses in key areas of interest to industry. The fee for a Coursera Specialization runs from $150 to $500 for anywhere from three to ten courses, plus a capstone project. The most popular offerings include data science (from Johns Hopkins University), Python (from the University of Michigan), and machine learning (from the University of Washington). Compared to the thousands of dollars for a more conventional training program, MOOCs are a relative bargain. And if a company’s aim is for workers to quickly acquire in-demand skills, rather than earning an accredited degree that may take a year or more to complete, a set of focused MOOCs may be the way to go. This skills-centered approach, known in education circles as competency-based education, is a growing trend at U.S. schools.

But before companies jump on the MOOC bandwagon, they might consider whether their ideal employee is someone with up-to-date skills in a narrow specialty, or a truly thoughtful professional who is prepared to go beyond his or her defined tasks and can adapt flexibly to new conditions and new markets. Ultimately, industry must decide who will fill the labor pipeline: an army of MOOC-trained workers or deeply talented personnel who’ve earned richly complex degrees from the nation’s best universities.

About the Author:

Robert Ubell is Vice Dean Emeritus of Online Learning at NYU’s Tandon School of Engineering. A collection of his essays on digital education, Going Online: Perspectives on Digital Learning, was recently published by Routledge. He can be reached at bobubell@gmail.com. This is the last in a series on MOOCs and online learning.

Sanyogita Shamsunder, Verizon's director of network planning, is shown standing outside near equipment used to test new base station technology.

Profile: Sanyogita Shamsunder, the “Problem Solver” Behind Verizon’s 5G Network

Ask anyone in telecom and they’ll tell you that Verizon has been the most aggressive of any U.S. company in forging ahead on 5G, the highly anticipated wireless network of the future. Last year, Verizon established a technical forum dedicated to hurrying along its development, and became the first U.S. company to promise a commercial deployment in 2017.

Critics have warned Verizon about upsetting the apple cart of international standards-making for 5G, a formal process that isn’t scheduled to conclude until 2020. But Verizon has insisted that its 5G network will be ready to deliver fixed wireless service (delivered between two stationary points, such as a base station and a rooftop antenna) to customers this year.

The future of that network is largely in the hands of Sanyogita Shamsunder, Verizon’s director of network planning. She leads the team of 15 engineers who are crunching data from early trials, weighing potential business models, and generally laying the groundwork for the company’s ambitious 5G plans.

Shamsunder, who works at Verizon’s operations headquarters in Basking Ridge, N.J., began her engineering career in the mid-1990s, just as the wireless industry was starting to take off. A decade later, she successfully led Verizon’s rollout of LTE, for which she drafted the technical specifications that smartphone manufacturers used to make sure their devices functioned on Verizon’s network.

That experience made her the obvious choice when the company needed someone to steer its massive network to the promised land of 5G. Today, her job consists of managing Verizon’s team of 5G network planners, made up largely of engineers and technologists—a leadership role for which her own technical background hadn’t specifically prepared her. As a fellow engineer, she focuses on assigning her team to high-level problems and helping them find solutions.

In her managerial role, Shamsunder often finds she has to nudge her group to make decisions and remind them to take more risks. “They like to lay out all the cases and say, ‘You decide,’ ” she says. “I think when you're working at that level, you need to be able to make decisions. I think many engineers have a difficult time doing that.”

Shamsunder hasn’t always envisioned herself in an executive role. She grew up in the city of Hyderabad, India, and earned her undergraduate degree in electrical engineering and telecom from nearby Osmania University. With it, she became the lone engineer in a family of doctors. That meant “no one could help me with my math,” she jokes.

She thought about taking a job in the industry right away, but instead landed at the University of Virginia, where she completed her Ph.D. in electrical engineering and wrote a thesis on signal processing. “I loved the mathematics behind communications and signal processing in general,” she says.

After spending a few years teaching courses on signal processing as an assistant professor at Colorado State, Shamsunder found her first job in the telecom industry. She became a senior engineer at Stanford Telecommunications, a company that made components for cable modems and TV set-top boxes.

Today, her time at Stanford still stands out as the pivotal experience that persuaded her to abandon the academic world for good. “It was a place where you could apply some of the things you learned in your Ph.D. to cool, practical problems, and that's what really got me interested,” she says.

After her stint at Stanford, Shamsunder switched to working on base stations for Lucent (a telecom company that has since merged with Alcatel and been acquired by Nokia), and later became a principal engineer at a startup called Sandbridge Technologies. At Sandbridge, she built software-defined radio for mobile phones.

During her five-year tenure at Sandbridge, she found herself increasingly involved in discussions about the customer value proposition of specific products. She gradually became more interested in the broader business, beyond her own projects. “There's a lot of good technology around today, but then the business model makes it very difficult to be successful,” she says. “I think it's very important to understand that.”

After a brief stint developing hardware platforms for mobile devices at LinQuest, a semiconductor company, Shamsunder joined Verizon in 2007 as a director in charge of the company’s wireless and technology strategy.

Her first task was to build a team of people from scratch to work with Nokia, Ericsson, Intel, and Samsung on the launch of LTE. Her team’s job was to make sure the devices that manufacturers built would run on Verizon’s network. She led that project for three years, and Verizon’s launch of LTE in 2010 was her proudest professional moment.

Shortly after joining Verizon, Shamsunder also set out to earn her Executive MBA at the Wharton School at the University of Pennsylvania. She wanted to learn how to position products, manage a team, and conduct consumer research. That meant she woke up at 6 a.m., every other Friday, to drive to Philadelphia for two full days of stacked courses. On Saturday night, she returned home to her husband and two young kids.

One of her most memorable lessons from Wharton came as Shamsunder was sitting in the classroom when the iPhone launched in 2007. At that time, AT&T was the only carrier to support it. “All my classmates were like, Why don’t you have this?” she says.

She’d prefer to avoid such questions with 5G. For the past year and a half, her team has coordinated research, development, and testing of several technologies that could bring faster data speeds and lower latency to both base stations and devices.

So far, high-frequency millimeter waves appear to be the leading candidate, as Verizon plans to use them to deploy fixed wireless 5G service this year. “I think fixed wireless is a great use case for us, and for the industry in general, because you can test all the elements in a more controlled environment where there's very limited mobility,” she says. “What we've seen so far doesn't give us any pause to stop and question this.”

There are two sides to every coin, though. Along with the thrill of 5G and the privilege of shaping Verizon’s future network comes a tremendous amount of pressure. But Shamsunder prefers it that way, and always has, from her first days in the budding wireless industry. “I’m a problem solver; I’m an engineer at heart,” she says. “I like challenges, and it's more fun to go into uncharted territory.”

Three elderly male panelists and one younger female moderator sit on a stage above an audience. Another elderly man can be seen on a large video screen.

Avoiding Future Disasters and NASA's Memory Problem

Fifty years ago, on January 27, 1967, three astronauts climbed into an Apollo capsule perched atop a Saturn 1B, the smaller cousin of the Saturn V that would later be used to send astronauts to the moon. The three astronauts—Gus Grissom, a Mercury program veteran, Ed White, the first American to walk in space, and Roger Chaffee, a spaceflight rookie—were not planning on going anywhere. They were doing a test: the goal was simply to operate the spacecraft while disconnected from ground support equipment, as if it were in orbit rather than sitting on a launch pad at Kennedy Space Center in Florida. The capsule was sealed up, and the astronauts began working through the test procedures. A few hours later, an electrical fire broke out and killed the crew before they could escape the capsule.

Last week, NASA held many commemorations for the anniversary of the Apollo 1 fire. But a forward-looking event at the astronaut base at the Johnson Space Center in Houston stands out as particularly apposite. There, a panel of emeritus experts discussed the lessons of the Apollo 1 fire—and of the subsequent 1986 Challenger and 2003 Columbia space shuttle disasters—that space workers must stop forgetting.

The veteran program workers discussed their insights in front of a packed house, and the emcee—a freshly minted astronaut from the class of 2012—drove the need for such reminders home with a simple request. After asking those in the audience who had worked on Apollo to rise (about 5 percent did, to applause), she asked those who had come to work after 2003 (and so hadn’t been present for any of the disasters) to rise next. Almost half of the gathering did so.

Although the immediate source of disaster was different in each case—a fire in a cabin filled with pure oxygen for Apollo 1, a cracked O-ring in a booster for Challenger, and an insulating foam strike on a heat shield for Columbia—“The commonality of the causes of all three catastrophes is sobering,” said panelist Gary Johnson.

Johnson is a retired safety expert who, as a 27-year-old electrical engineer in 1967, had been thrown into the heart of the Apollo 1 fire investigation. He had been the only electrical specialist at the console in the control center in Houston during the routine test, had noticed a sudden “Main Bus A/B” alarm light, then heard the shouts of ‘Fire!’ Within minutes, Johnson recalled, the control room doors were locked, those present were given one phone call to tell their families they’d not be home that night, and the teams plunged into capturing all of the data that had been flowing to Houston from the test up to the moment of the catastrophe.

Within days Johnson was crawling around inside the burnt-out capsule in Florida, examining the remains of cable trays and other wiring. He also was meticulously poring over the close-out photos of the cabin prior to the test run, identifying frayed or even dangling insulation on cabling. And he helped set up test fires in a simulated capsule with wiring matching what he saw had been inside Apollo 1, in the same high-oxygen environment—and remembers being shocked by the ferocity of the flames that a single spark could trigger.

Johnson described how the fundamental design change to the Apollo spacecraft that was made in the wake of the fire—aside from a quick-opening hatch and the decision never to fill the cabin with pure oxygen at full pressure—was installing secure cable trays and conduits to prevent chafing of the insulation around wires. “Gemini [spacecraft] were constructed with all the wiring outside the crew cabin,” he recalled, “but in Apollo the new contractor ran wiring bundles all over the walls and floor of the spacecraft, wrapped in taped-on insulation bought at a local hardware store.” The wires were supposedly protected by temporary panels installed for maintenance, but the protection was haphazard at best. Grimly, post-fire analysis found too many potential sparking sites to even guess which one had started the fire.

For the Apollo 1 fire, it was clear that the kind of tests that Johnson had performed after the fatal disaster should have been performed by any prudent design team before the astronauts climbed into the capsule. The “assumption of goodness”—the feeling that “it’ll be OK”—had become a rationalization for skipping such tests under the pressure of dominant goals, such as schedules.

Similar testing to challenge any assumption of goodness was also skipped in the lead-up to the two shuttle disasters, which were also commemorated with events last week: the anniversary of the destruction of Challenger and its seven-person crew is January 28, while the anniversary of the loss of Columbia, with seven more astronauts, is February 1. Consequently, awareness of potentially fatal flaws eluded the teams in charge of those missions, too.

Most famously, the loss of Challenger was caused by assuming that flexible O-ring seals in the booster engines would seat properly at ignition even though the ambient temperature was lower than in the pre-flight testing range. Physicist Richard Feynman, a member of the investigation team, performed a simple experiment with a bucket of ice and a sample of the material to show that the assumption—which a shuttle team member had questioned just before launch—was not valid.

The “too late” test that could have prevented the breakup of Columbia was conducted several months after that disaster, under the leadership of investigation team scientist Scott Hubbard. A piece of fuel tank insulation foam had (as on earlier flights) been seen to tear off the tank early in the flight and impact under the left wing’s leading edge. Using a target of a flown thermal protection system panel and a high-velocity airgun, investigators fired the foam onto the panel at the same angle and speed as occurred during the Columbia foam impact, and tore a 50-centimeter hole in the target. Pre-flight impact testing had only used simulated grain-sized space debris, but never the kind of foam that—for years—had been observed tearing free from the tanks.  

Coming up with verification tests is fundamentally a challenge in operational engineering, but another panelist—Glynn Lunney, a flight director in mission control during the near-fatal Apollo 13 lunar mission who later played important roles in the shuttle program—stressed that giving safety teams enough authority to demand such tests, and to object when they weren’t thorough enough, was an organizational challenge. Whenever policy backing the authority of safety teams weakened, it laid the foundations for future imprudent decisions that led to new catastrophes. Though unable to attend due to illness, Frank Borman—the Gemini and Apollo astronaut who had been in charge of the Apollo 1 investigation and the bureaucratic reforms that followed—endorsed Lunney’s insights in a prerecorded set of answers to questions.

Borman demurred when asked whether schedule pressure was a factor in omitting certain tests, affirming his belief that setting schedules was a constructive way to prioritize the problems to be solved. “You really have to manage time as a resource,” Lunney explained. “Big and small things come at you; prioritization of attention is what you have to be tuned into,” he added. Two decades later, after the Challenger was lost, the question of schedule-induced carelessness came up again. This time, investigators found that the pressure to fly stemmed not from prioritizing problems but from the need to impress Congress with the shuttle’s timeliness, in order to convince lawmakers to use the shuttle for all satellite launches rather than fund alternative rockets for military launches.

Walt Cunningham, one of the astronauts on the Apollo 1 backup crew, admitted that the pilots were realistic about the possibilities of disasters. “We figured at some point we’d lose a crew, then learn from it and fix things and go on,” he told the hushed auditorium. NASA certainly did so as a consequence of Apollo 1, but as the symposium stressed, somehow it hadn’t figured out how to maintain the fixes in the organizational charts and in the minds of all of its workers, because periodically it had to relearn the same lessons at the same lamentable cost. Emotionally impactful events such as those held in memory of Apollo 1’s fallen astronauts may represent some of the best chances to avoid forgetting those lessons.

Michael F. Tompsett taking a selfie with Eric R. Fossum and Nobukazu Teranishi

Four IEEE Fellows Share Queen Elizabeth Prize for Digital Cameras

Four engineers and IEEE Fellows credited with major innovations in image sensors have won the £1 million Queen Elizabeth Prize for Engineering. Nobel Prize winner George E. Smith and Michael F. Tompsett won for the charge-coupled device (CCD) imager, sharing the prize with Nobukazu Teranishi, who improved on the CCD by inventing the pinned photodiode, and with Eric R. Fossum, who invented the CMOS imager—the technology that succeeded the CCD in most applications and allowed for the proliferation of cameras in smartphones and other mobile devices.

In a CCD, each pixel is a potential well. Light falling on the pixel is converted into charge via the photoelectric effect. The charge accumulates in the well and is then pushed from well to well until it reaches circuitry that measures the amount of charge and produces a digital representation of it.
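That bucket-brigade readout can be caricatured in a few lines of code. The toy model below is not any real sensor’s firmware—the full-well capacity and bit depth are arbitrary—but it mirrors the process described above: charge packets shift toward the output one well at a time, and each is digitized as it arrives.

```python
# Toy model of CCD readout: shift each row's charge packets toward the output
# well by well, then digitize each packet at the sense node.
def read_out_row(charges, full_well=50_000, bits=12):
    """Return digital codes for a row of accumulated charge packets (electrons)."""
    digital = []
    wells = list(charges)  # charge packets sitting in the row's potential wells
    while wells:
        # Clocking the gate voltages moves every packet one well closer to the
        # output; the packet in the last well lands at the measurement circuit.
        packet = wells.pop(0)
        code = round(min(packet, full_well) / full_well * (2 ** bits - 1))
        digital.append(code)
    return digital

row = [120, 48_000, 3_500, 0, 51_000]  # electrons collected during exposure
print(read_out_row(row))               # [10, 3931, 287, 0, 4095]
```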

The award must be particularly sweet for Tompsett, who missed out on the 2009 Nobel Prize for the CCD imager. At the time of the invention in 1969, Smith was Tompsett’s boss at Bell Laboratories. Smith and the late Willard Boyle came up with the CCD while trying to invent a new kind of memory. Smith has said the invention’s application in imaging was immediately obvious, but it was Tompsett who actually made it happen.

An experimental setup shows a close-up of an infrared laser as it travels through a lens

Data Centers of the Future Could Send Bits Over Infrared Lasers Instead of Fiber Optic Cables

Go ahead and rip out the hundreds of fiber optic cables that snake between server racks at the nearest data center. Replace them with infrared lasers mounted to the tops of each server rack. Use these lasers to beam data to photoreceptors mounted to more racks across the room. Then, add tiny moveable mirrors to redirect the beams and reconfigure the whole system in an instant.

That’s the vision of Mohsen Kavehrad, a professor of electrical engineering at Penn State. So far, he has built a prototype of one such data link in his lab. Infrared wavelengths are already commonly used to transmit data within fiber optic cables, but now Kavehrad wants to use them to send data over the air. With his system, he has shown that infrared lasers can deliver data rates of 10 gigabits per second.

Kavehrad has published multiple studies on this approach and presented his research today at the Photonics West conference in San Francisco.

Someday, he hopes such lasers could replace the heavy bundles of fiber optic cables found in modern data centers. “If you visit one of these data centers, it's like a jungle, with fiber going between servers,” he says.

In the demonstration system, Kavehrad used lasers to generate an infrared signal with a wavelength of 1550 nanometers, a wavelength commonly used in fiber optic cables. That signal underwent wavelength-division multiplexing, a technique that packs multiple signals, each on its own wavelength, onto a single laser beam. Then, he sent the beam through an inexpensive lens.

About 15 meters away, he set up another lens and several photodiode receivers. To make the beam steerable, Kavehrad added tiny mirrors, just 2 millimeters in diameter, actuated by MEMS (microelectromechanical systems). The link is bidirectional, which means both ends can send and receive data.
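The appeal of wavelength-division multiplexing here is that capacity multiplies with the number of wavelengths sharing a beam. The rough illustration below uses hypothetical channel and link counts, not the prototype’s actual configuration; only the 10 Gb/s per-channel rate comes from the demonstration.

```python
# Rough capacity illustration for a free-space WDM link (hypothetical counts).
channels = 8                 # assumed number of WDM wavelengths around 1550 nm
rate_per_channel_gbps = 10   # per-channel rate demonstrated in the lab
links_per_rack = 4           # assumed number of laser links on a rack top

aggregate_gbps = channels * rate_per_channel_gbps * links_per_rack
print(f"Aggregate throughput per rack: {aggregate_gbps} Gb/s "
      f"({aggregate_gbps / 8:.0f} GB/s)")
```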

In addition to the infrared signal, the group also broadcast a TV signal using the same setup. They generated it by feeding the entire 1 gigahertz cable TV band into their multiplexer, so it rode along on the same laser beam as the rest of the data. At the other end, they set up an LED TV to show the working channels.

Depending on how many of these links were installed in a data center, Kavehrad thinks his approach could deliver bandwidth and throughput that is as good or better than the fiber optic cables, routers, and switches used today. He says an infrared system should easily be able to handle terabytes of data, given modern improvements in lasers and photodetectors. Kavehrad also hopes the mirrors will allow operators to more quickly respond to fluctuations in demand, and improve the efficiency of these large operations.

Data centers in the U.S. account for about 2 percent of total electricity consumed in the country. Much of that electricity is spent cooling the 400,000 or so servers they contain. Since data centers are built for maximum demand, roughly 30 percent of these servers are idle at any given time. That means a lot of energy is spent cooling servers that aren’t even running.
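A back-of-the-envelope estimate suggests the scale of that waste. The sketch below combines the article’s percentages with assumed figures; the U.S. electricity total and the share of data-center power that goes to cooling are assumptions, not numbers from the article.

```python
# Back-of-the-envelope estimate of energy spent cooling idle servers.
US_ELECTRICITY_TWH = 3_900  # assumed annual U.S. electricity use, TWh
DATA_CENTER_SHARE = 0.02    # "about 2 percent" (from the article)
COOLING_SHARE = 0.40        # assumed fraction of data-center power for cooling
IDLE_FRACTION = 0.30        # "roughly 30 percent" of servers idle (article)

data_center_twh = US_ELECTRICITY_TWH * DATA_CENTER_SHARE
idle_cooling_twh = data_center_twh * COOLING_SHARE * IDLE_FRACTION
print(f"Data centers: ~{data_center_twh:.0f} TWh/yr; "
      f"cooling idle servers: ~{idle_cooling_twh:.1f} TWh/yr")
```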

Kavehrad thinks infrared lasers could allow operators to more easily reconfigure server racks so that all the servers that need cooling are in one area, rather than spread out all over the data center. It’s not yet clear how much electricity this might save, or whether the cost of installing lasers would outstrip the savings. His prototype cost about $20,000 to build, though he expects equipment costs would quickly drop if major companies showed an interest, and if integrated electronics continues to see advances.

Jonathan Koomey, a consultant who has researched the energy efficiency of data centers, says it’s hard to know if Kavehrad’s idea will catch on with Google or Netflix, but suggests it could find a smaller market somewhere, perhaps with supercomputers. “Even if it's not something that finds broad use, maybe there are some narrow niche applications that could be critically important,” he says.

Before testing infrared, Kavehrad and his collaborators at Stony Brook University and Carnegie Mellon University wanted to see if high-frequency millimeter waves could instead be a replacement for cables. These waves fall between infrared and conventional radio waves on the electromagnetic spectrum. Unfortunately, the millimeter waves in their tests attenuated, or lost strength, when transmitted over just 10 meters, and “the interference was a killer,” he says.

Once they switched to infrared, the team played it safe and purchased an amplifier so they could boost the signal’s strength. But they found they didn’t need it. Instead, they actually had to weaken the infrared signal once it arrived at the receiver because it was too strong for their equipment to handle. “If you have to actually attenuate [the signal] to go into receivers, that means you're in very good shape,” Kavehrad says.

Still, there are other issues that Kavehrad needs to work out. Just as a laptop hums when it’s working hard, server racks vibrate as they process and transfer data. Kavehrad’s team is concerned this vibration might affect the accuracy of their lasers. “If you start vibrating light that's really focused, that’s going to cause a severe loss of data,” he says.

Editor’s note: This post was updated on 13 February. 
