Tech Talk

A NIST employee in a safety vest examines a wireless experiment inside of a steam generation plant.

Factory Owners Are Reluctant to Embrace Wireless

If you think it’s hard to get a reliable Wi-Fi signal in your home, just imagine how tough it must be to grab one atop an oil rig in the Gulf of Mexico, or on the noisy floor of an auto factory in Detroit. Those places are full of heat, vibration, and metallic surfaces that can weaken, reflect, and block signals. As a result, factories and industrial facilities have been slow to adopt new wireless equipment and devices that would otherwise save both time and money.

Many wireless engineers and factory owners know this, but it has been difficult for anyone to improve the situation. The impact of industrial settings on wireless performance hasn’t been studied in any systematic way, so it’s often impossible to predict how a new piece of equipment will perform on, say, a manufacturing line until you actually put it there.

To make it easier for factories to integrate new wireless technologies, U.S. federal government employees took it upon themselves to measure the performance of radiofrequency signals in three factory settings: an auto transmission assembly facility, a steam generation plant, and a small machine shop. They recently published their results as part of an ongoing $5.75 million project aimed at improving industrial wireless led by the National Institute of Standards and Technology (NIST).

For factory owners, there are many potential advantages to switching to wireless. They can avoid the costs and hassle of installing wires, and more easily reconfigure their facilities in the future. Wireless setups may also be safer, because employees won’t trip over bundles of cords. That’s why companies including GM, Ford, Chevron, Boeing, and Phoenix Contact (a company that specializes in industrial technologies) have all expressed interest in incorporating more wireless into their facilities.

“Right now I know that people are interested, but what they're worried about are the impacts to productivity or to the operation,” says Richard Candell, the project lead for the five-year NIST project, which is scheduled to conclude in late 2018. “They want to know that if they're going to use wireless, it's going to work just as well as the wired solution.”

Justin Shade, who focuses on wireless products for Phoenix Contact, says there’s no shortage of ways in which wireless could make factories and their workers more efficient. For example, manufacturers could use it to incorporate robotic arms into assembly lines. Today, robotic arms are often hooked up to control panels by flexible cables. Wind turbines rely on similar cables to maintain contact between the hub of the turbine and each individual blade. But these cables frequently break. In both cases, replacing them with wireless controls could save money and time.

Unfortunately, factories are also full of processes and materials that block or weaken wireless signals. For now, wireless technicians play it safe when installing new equipment by setting up redundancies, keeping wireless devices within close range with clear line of sight to their targets, and performing extensive testing prior to industrial installations.

Given the circumstances, Shade says it’s hard to fault factory owners and their technicians for being cautious. “If you're on the manufacturing line and a car door doesn't get made correctly, you're losing hundreds of thousands of dollars an hour, so the adoption has been a little slower in the industrial world,” he says.

Candell at NIST hopes their latest research can help industry operators predict how new systems will perform before they are installed. To take their measurements, the team visited an auto transmission assembly plant in Detroit, Mich., a steam generation plant at the NIST campus in Boulder, Colo., and a small machine shop that specializes in metalworking for NIST at their facilities in Gaithersburg, Md.

The group tested wireless signal propagation at two frequencies: 2.25 gigahertz and 5.4 GHz. These frequencies are reserved for the U.S. government, but they fall close to the popular unlicensed 2.4-GHz and 5-GHz bands commonly used in wireless devices. Performance at these frequencies should therefore be comparable to what the rest of us can expect from our own wireless gadgets.

From their measurements, the researchers concluded that industrial settings have strong multipath characteristics, which means that signals tend to reflect many more times before they reach the receiver than they would under normal conditions. The practical impact of these reflections can be positive or negative, depending on the technology and how it is configured.

To dig deeper, the group used a metric of wireless performance called the K factor. It compares the power of the direct, line-of-sight signal to the combined power of all the reflected signals. A higher K factor means there is less fading due to reflections. In an open outdoor area, the K factor would typically be between 6 decibels and 30 dB. In the group's industrial measurements, they found lower average K factors of -5 dB to 6 dB.
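To get a feel for those numbers, here is a rough Python sketch (not NIST's code; the 0-dB test value and the variable names are ours) that simulates a Rician fading channel with a chosen K factor and then estimates the K factor back from the samples, the same quantity the NIST team reported in decibels.

```python
import numpy as np

def simulate_rician(k_db, n=100_000, seed=0):
    """Simulate complex channel gains: a fixed line-of-sight (LOS) term
    plus Rayleigh-distributed scatter, for a given K factor in dB."""
    rng = np.random.default_rng(seed)
    k = 10 ** (k_db / 10)                 # K factor on a linear scale
    los = np.sqrt(k / (k + 1))            # LOS amplitude (total power normalized to 1)
    sigma = np.sqrt(1 / (2 * (k + 1)))    # per-dimension std of the scattered part
    scatter = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return los + scatter

def estimate_k_db(h):
    """K = power of the steady (LOS) part / power of the fluctuating (reflected) part."""
    k_est = np.abs(np.mean(h)) ** 2 / np.var(h)
    return 10 * np.log10(k_est)

# Example: a 0-dB K factor, in the middle of the -5 dB to 6 dB range
# NIST reported for its three industrial sites.
samples = simulate_rician(k_db=0.0)
print(f"Estimated K factor: {estimate_k_db(samples):.2f} dB")
```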

Next, the NIST team used their measurements to estimate the average delay spread for the industrial facilities. Delay spread is the time it takes for all of a signal’s reflections to reach the receiver. They found an average delay spread below 500 nanoseconds. The group suggests this delay may not noticeably impact devices operating at 256 kilobits per second but could affect those that run at faster bit rates.
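As a back-of-the-envelope check (our own illustrative arithmetic, not part of the NIST analysis), you can compare that 500-nanosecond delay spread with the symbol period at different bit rates; trouble starts once the echoes stretch across a meaningful fraction of a symbol.

```python
# Compare the ~500-ns average delay spread to the symbol period at a few
# bit rates (assuming, for simplicity, one bit per symbol).
DELAY_SPREAD_S = 500e-9

for bit_rate in (256e3, 1e6, 10e6, 54e6):          # bits per second
    symbol_period = 1.0 / bit_rate                  # seconds per symbol
    fraction = DELAY_SPREAD_S / symbol_period
    print(f"{bit_rate / 1e6:7.3f} Mb/s: symbol lasts {symbol_period * 1e9:8.1f} ns; "
          f"echoes span {fraction:.0%} of a symbol")

# At 256 kb/s the echoes cover only about 13 percent of a symbol; at tens of
# megabits per second they stretch across several symbols and begin to smear bits.
```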

Another part of their analysis examined wireless performance in “metal canyons,” which are common in factories. A metal canyon is an area with metal surfaces (such as walls or large pieces of equipment) on at least two sides and a concrete floor below. In these areas, the group measured path loss, which describes the attenuation of wireless signals, and found that it is 80 dB, at a minimum, in metal canyons. For comparison, the path loss in an open area would be perhaps 40 dB after a signal at these frequencies traveled approximately one meter.
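That roughly 40-dB baseline is what the textbook free-space path-loss formula gives at these frequencies over one meter. The short Python sketch below (our own illustration, not the NIST measurement code) computes it and contrasts it with the 80-dB minimum measured in the metal canyons.

```python
import math

C = 3e8  # speed of light, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: 20 * log10(4 * pi * d * f / c), in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for freq in (2.25e9, 5.4e9):                     # the two frequencies NIST tested
    loss = free_space_path_loss_db(1.0, freq)    # loss over the first meter
    print(f"{freq / 1e9:.2f} GHz over 1 m of open space: {loss:.1f} dB "
          f"(vs. at least 80 dB in a metal canyon)")
```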

Candell says that, in practical terms, this means a wireless signal could reliably travel about 200 or 300 meters outdoors, whereas, in a metal canyon, a user would probably start to notice some issues with the signal at just 30 meters away. 

With the results of their measurement campaigns, the NIST staff also built a software simulation of a chemical reactor and a wireless test bed that can replicate other industrial settings at their campus in Boulder, Colo. Candell wants to use these tools to estimate how performance and cost would change if new wireless schemes were installed in factories or other facilities.

“Ultimately, at the end of our five-year project [which is scheduled to conclude in late 2018], I want to actually produce industry guidelines to help people select and deploy these wireless devices effectively in their factories,” says Candell.

A hiker in a yellow jacket looks at her smartphone.

Controversial Satellite-Messaging Startup Higher Ground Cleared for Takeoff

In the face of concerted industry opposition, the Federal Communications Commission (FCC) has given the go-ahead for a controversial smartphone accessory that uses microwaves to send text messages and email via geostationary satellites.

Startup Higher Ground now has permission to deploy up to 50,000 SatPaq devices across the United States, promising isolated communities, hikers, and farmers a cheap, reliable messaging service far from cellphone towers. However, it is a move that some telecoms companies think could also interfere with their services, interrupt life-saving emergency calls and even cause outages nationwide. The roll-out will be a key test of the FCC’s ability to manage spectrum sharing, an innovation it is counting on to enable future 5G wireless and Internet of Things technologies.

The SatPaq devices, first revealed in Spectrum last year, connect to a smartphone messaging app via Bluetooth. The device uses a flip-up antenna that communicates with Intelsat Galaxy satellites in geostationary orbits. These are nearly 50 times further out than the Iridium satellites used by today’s satphones, so the SatPaq needs a powerful signal to connect.

It’s that strong signal—smack in the middle of the C-band microwave spectrum used for voice and data communications in rural areas and for national networks—that has many telecoms companies worried. In a submission to the FCC, CenturyLink called Higher Ground’s plans “a recipe for disaster” and a “potential interference to each and every…link of the [microwave] network throughout the country.”

Its concerns were echoed by a dozen telecoms industry bodies and cities and states that rely heavily on point-to-point microwave stations for communications and emergency services. The state of Hawaii even wrote, “If this type of application is granted, the FCC itself becomes irrelevant. Commercial entities can simply do whatever pleases themselves.”

For its part, Higher Ground claims a robust system of ‘self-coordination’ that makes the chance of interference almost negligible. The SatPaq app starts by comparing the phone’s GPS coordinates with a database of the locations of all the terrestrial microwave stations in the country. It then selects a non-interfering frequency within its 5925 to 6425 MHz uplink band.

The app then uses the phone’s compass to ensure that the flip-up antenna is pointed directly at the satellite, and not towards a fixed station. If the system cannot find a safe combination of frequency and direction, it will not transmit. When the SatPaq does connect to the satellite, it will download any changes to its station database before transmitting its own data.
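Higher Ground hasn’t published its algorithm, but the self-coordination logic described above can be sketched in Python along these lines. The station-database format, the keep-out angle, the protection radius, and the function names here are all hypothetical placeholders, not the company’s actual parameters.

```python
import math

UPLINK_BAND_MHZ = range(5925, 6426)   # SatPaq uplink band cited in the FCC filings
KEEP_OUT_ANGLE_DEG = 60.0             # hypothetical angular exclusion around a station
PROTECTION_RADIUS_KM = 150.0          # hypothetical distance inside which a station matters

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate initial bearing from point 1 toward point 2, in degrees."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(y, x)) % 360

def pick_safe_frequency(phone_lat, phone_lon, antenna_bearing, stations):
    """Return a frequency (in MHz) that no nearby fixed station uses in the
    direction the antenna is pointing, or None (meaning: do not transmit).
    `stations` is a list of dicts: {"lat", "lon", "freq_mhz", "dist_km"}."""
    blocked = set()
    for s in stations:
        if s["dist_km"] > PROTECTION_RADIUS_KM:
            continue                                   # too far away to matter
        toward_station = bearing_deg(phone_lat, phone_lon, s["lat"], s["lon"])
        offset = abs((toward_station - antenna_bearing + 180) % 360 - 180)
        if offset < KEEP_OUT_ANGLE_DEG:
            blocked.add(s["freq_mhz"])                 # pointing too close to this link
    for freq in UPLINK_BAND_MHZ:
        if freq not in blocked:
            return freq
    return None                                        # no safe combination: stay silent
```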

Last summer, Higher Ground conducted outdoor demonstrations of a live SatPaq embedded in a smartphone case to FCC officials and some of the telecoms companies, showing both the interference mitigation technology and the messaging service in action.

On 18 January, just 48 hours before the start of the new Presidential administration, the FCC ruled in Higher Ground’s favor. “We…find that Higher Ground’s proposed system and operations…would further the Commission’s interest in ensuring the highest public benefit is derived from this finite spectrum resource,” wrote the Commissioners. “We [also] find that Higher Ground has demonstrated that its proposed system should prevent or minimize the risk of harmful interference to [fixed service] operators.”

But the FCC did place conditions on Higher Ground’s operations. The company had to accept that existing microwave stations might interfere with its new messaging service, and is required to maintain remote control of all the SatPaqs in the country. If any interference comes to light, Higher Ground must be able to immediately override or shut down any or all interfering SatPaqs. The company also has to keep logs of every single SatPaq transmission for at least a year, and make that data available to the FCC and fixed service operators on request. Higher Ground also has to update its database of terrestrial microwave stations every day.

Finally, the FCC noted that “a cautious approach is warranted, considering that a self-coordination system like Higher Ground's does not have a track record of wide-scale, generalized deployment.” For the first year following authorization, Higher Ground can deploy only 5,000 SatPaqs, and the FCC reserves the right to shut them down if they cause harmful interference.

“This is a prudent move for a unique technology,” says Steve Crowley, a consulting wireless engineer. “The phased rollout is an additional measure in case of unintended consequences. It’s easier to get a handle on 5,000 radios than 50,000.”

But Higher Ground’s battles may not be over just yet. The Enterprise Wireless Alliance, a national trade association for business wireless users, is considering filing a last-ditch appeal.

“At its core, this is an engineering matter and I think those engineering matters have been resolved to a reasonable level,” says Crowley. “But the Order was issued just before the start of the current FCC and only one of its three signers holds the same position as they did on January 18. A petitioner whose arguments didn’t prevail with the previous FCC might try again with this one.”

Higher Ground declined to comment on the FCC Order or any plans it might have to start selling SatPaqs. Its website, which previously suggested that SatPaqs would sell for US $139, with pay-as-you-go texts and emails, is currently offline.

5 Things You Missed This Week at IEEE Spectrum: Nanorods for Li-Fi Displays, Health Apps Could Make People Sicker, and More

1. Nanorods Emit and Detect Light, Could Lead to Displays That Communicate via Li-Fi

In recent years, the hot application for quantum dots has been as a replacement for light-emitting diodes (LEDs) as a backlight source for liquid crystal displays. But now, an international team of researchers has produced engineered nanorods that each feature a quantum dot capable of emitting and absorbing visible light. With this advance, quantum dots could someday yield mobile phones that can “see” without the need of a camera lens or communicate with each other using Light Fidelity (Li-Fi) technology.

 

2. Could Mobile Health Apps and Wearables Actually Make People Sicker?

A recent opinion piece about wearable tech for infants pulls no punches: “There is no evidence that consumer infant physiologic monitors are life-saving, and there is potential for harm if parents choose to use them.” That wasn’t just any random person’s judgement. The article was published in the Journal of the American Medical Association and was authored by two pediatricians and an expert from the ECRI Institute, a nonprofit organization dedicated to the rigorous evaluation of medical procedures and devices. 

 

3. Medtronic's CardioInsight Electrode Vest Maps Heart's Electrical System

The 252-electrode device could help doctors pinpoint the locations of electrical malfunctions in the heart that cause irregular heartbeats.

 

4. New Terahertz Transmitter Shines With Ultra-Fast Data Speeds

The tiny CMOS-based transmitter can send data packets wirelessly at rates as high as 105 gigabits per second.

 

5. Millimeter-Scale Computers: Now With Deep Learning Neural Networks on Board

University of Michigan micro-mote computers—tiny, energy efficient computing sensors that can do analysis on board—aim to make the Internet of Things smarter without consuming more power.

A millimeter-scale computer looks like a stack of chips

Millimeter-Scale Computers: Now With Deep-Learning Neural Networks on Board

Computer scientist David Blaauw pulls a small plastic box from his bag. He carefully uses his fingernail to pick up the tiny black speck inside and place it on the hotel café table. At 1 cubic millimeter, this is one of a line of the world’s smallest computers. I had to be careful not to cough or sneeze lest it blow away and be swept into the trash.

Blaauw and his colleague Dennis Sylvester, both IEEE Fellows and computer scientists at the University of Michigan, were in San Francisco this week to present 10 papers related to these “micromote” computers at the IEEE International Solid-State Circuits Conference (ISSCC). They’ve been presenting different variations on the tiny devices for a few years.

Their broader goal is to make smarter, smaller sensors for medical devices and the Internet of Things—sensors that can do more with less energy. Many of the microphones, cameras, and other sensors that make up the eyes and ears of smart devices are always on alert, and frequently beam personal data into the cloud because they can’t analyze it themselves. Some have predicted that by 2035, there will be 1 trillion such devices. “If you’ve got a trillion devices producing readings constantly, we’re going to drown in data,” says Blaauw. By developing tiny, energy-efficient computing sensors that can do analysis on board, Blaauw and Sylvester hope to make these devices more secure, while also saving energy.

A tiny terahertz transmitter is mounted under a microscope in a lab at Hiroshima University.

New Terahertz Transmitter Shines With Ultrafast Data Speeds

This week, researchers at Hiroshima University showed off a new terahertz transmitter that is just as powerful as its predecessors, but should ultimately prove more affordable for commercial applications. In a demo at the International Solid-State Circuits Conference in San Francisco, they presented a device capable of delivering data at breathtaking speeds of more than 100 gigabits per second at a frequency of 300 gigahertz.

At its very best, the transmitter can shuttle data at 105 Gb/s, which is 2,100 times faster than the peak cellular speeds of 50 megabits per second available through LTE. After a successful demo, the transmitter could find its way into future wireless applications that require low latency and high bandwidth.

Though other transmitters have achieved speedy data rates in the terahertz range before, the group says theirs is the first to also be based on a CMOS integrated circuit, which means it’s potentially more viable for commercial base stations or devices.

“This is quite a step for this kind of technology, because it relies on something that is freely available and could be easily implemented, compared to all of the other techniques,” says Riccardo Degl’Innocenti, a researcher at the University of Cambridge who was not involved in the work.

Terahertz waves are shorter in length and are broadcast at much higher frequencies than the microwaves used today for smartphones, household devices, or military radar. For example, Wi-Fi devices emit waves that measure about 12 centimeters in length at a frequency of 2.4 GHz. Waves in the terahertz range span less than 1 millimeter and start at 100 GHz.
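Those wavelength figures follow directly from the relation wavelength = speed of light / frequency; here is a quick illustrative calculation.

```python
C = 3e8  # speed of light, m/s

for label, freq_hz in (("Wi-Fi", 2.4e9), ("Hiroshima transmitter", 300e9)):
    wavelength_mm = C / freq_hz * 1000
    print(f"{label}: {freq_hz / 1e9:.0f} GHz -> wavelength of about {wavelength_mm:.0f} mm")

# Prints roughly 125 mm (about 12 cm) for 2.4-GHz Wi-Fi and 1 mm at 300 GHz.
```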

Other teams have demonstrated competing terahertz transmitters that deliver data at speeds even faster than those shown by the Hiroshima group. However, these systems often relied on technology that was bulky or which could not easily scale.

In contrast, the new transmitter has a 2-by-3-mm footprint, and was created using a 40-nanometer CMOS process. “There are many ways also to build a terahertz wireless system,” says Degl’Innocenti. “However, this is still progress because the CMOS technology was sort of lagging behind.”

Minoru Fujishima, a professor at Hiroshima University and a member of the team that developed the transmitter, says the primary advantage of fabricating the device with CMOS is that it will allow manufacturers to sell it at a competitive price if it is commercialized. However, the first run was still rather expensive. The tiny transmitter he demonstrated cost US $100,000 to build.

Fujishima’s group hopes their transmitter can be used in satellite communications, or to set up a wireless link between cellular base stations. “I think that is a very promising application because space cannot be linked by fiber optics,” he says.

Elsewhere, companies and researchers have developed extra-sensitive receivers to reliably detect terahertz waves, which are quickly absorbed as they travel through the atmosphere.

Thomas Kürner, who has worked at TU Braunschweig in Germany on projects in which terahertz transmitters have been developed, calls the new research “quite a milestone.” Alongside Iwao Hosako, who is a coauthor with Fujishima, Kürner is leading the IEEE 802.15 Task Group 3d; the group’s mission is to develop a standard for devices that will operate in the 300-GHz band.

Kürner says the task group is considering four primary applications for 300-GHz devices. One is as a replacement for the wires inside devices with high-speed terahertz links that can send data from one part of the device to another. The second is using terahertz waves to enable the creation of wireless kiosks in retail stores that will let customers instantly download films to their devices instead of having to take a DVD home with them. The third, says Kürner, is to create wireless connections for data centers that can replace fiber optic cables. And the final application is to use terahertz waves for fronthaul or backhaul in cellular networks.

A self-destruct mechanism based on an expanding polymer layer can destroy a silicon chip within 10 seconds

Self-Destructing Gadgets Made Not So Mission Impossible

Self-destruct options from the Mission: Impossible movies could become a reality for even the most common smartphones and laptops used by government officials or corporate employees. A new self-destruct mechanism can destroy electronics within 10 seconds through wireless commands or the triggering of certain sensors.

Can a Bitcoin-enabled browser be the publishing industry's savior?

Can Brave's Bitcoin Payment Platform Save Online Publishing?

Last year, Brendan Eich, former CEO of the Mozilla Corporation and designer of the JavaScript programming language, launched Brave, a Web browser that blocks advertisements by default. Now Eich is rolling out a new Bitcoin payment platform, integrated right into the browser, that he hopes will provide an alternative revenue stream for publishers. He views it as a replacement for the one Brave takes away, which he argues is dysfunctional and on the verge of collapse.

As of September, people using Brave have the option of creating a wallet in the browser, loading it with bitcoins, and sending small payments to publishers based on the anonymized metering of their Web traffic. For now, Brave plays a central role in facilitating the transactions, although it has sought to do so in a way that protects the privacy of Brave users.

When you create a wallet with Brave, you actually share it with a company called BitGo, meaning that you and BitGo each own one key for the wallet, both of which need to be present in order for a payment to go through. After loading bitcoins into this wallet, you specify the total amount of money you would like to spend on your Web browsing. Then, after a month goes by (measured by the days you actually spend using the Brave browser), bitcoin transactions signed by both you and BitGo trigger the disbursement of that money into a Brave settlement wallet.
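That two-key arrangement is, in effect, a 2-of-2 multisignature wallet. The toy Python sketch below illustrates the idea of requiring both approvals before money moves; it uses HMAC from the standard library as a stand-in for real Bitcoin ECDSA signatures and is not BitGo’s or Brave’s actual protocol.

```python
import hashlib
import hmac

def sign(secret_key: bytes, payment: bytes) -> bytes:
    """Toy 'signature': HMAC-SHA256 stands in for a real ECDSA signature."""
    return hmac.new(secret_key, payment, hashlib.sha256).digest()

def approve_payment(payment: bytes, user_sig: bytes, cosigner_sig: bytes,
                    user_key: bytes, cosigner_key: bytes) -> bool:
    """The 2-of-2 rule: the payment goes through only if BOTH signatures check out."""
    return (hmac.compare_digest(user_sig, sign(user_key, payment)) and
            hmac.compare_digest(cosigner_sig, sign(cosigner_key, payment)))

# Example: the browser holds one key, the co-signing service holds the other.
user_key, cosigner_key = b"user-secret", b"cosigner-secret"
payment = b"send 0.0001 BTC to publisher-settlement-wallet"

both_signed = approve_payment(payment, sign(user_key, payment),
                              sign(cosigner_key, payment), user_key, cosigner_key)
one_missing = approve_payment(payment, sign(user_key, payment),
                              b"not a valid co-signature", user_key, cosigner_key)
print(both_signed, one_missing)  # True False
```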

Before a website operator can collect the funds, it must go through a verification process with Brave to prove that it’s running a legitimate business. In return for providing this service, Brave takes five percent of all the donations that come through.

Job training and MOOCs

Why Your Next Job Training Course May Be a MOOC

This is part of a series on MOOCs and online learning.

Over the past two decades, the great Internet wave that swept through industry and revolutionized everything in its wake—including manufacturing, product development, supply-chain management, marketing, financial transactions, and customer service—likewise transformed on-the-job training. Companies eager to cut costs saw the overwhelming economic advantage of online instruction over the conventional classroom, and so they shuttered lavish country-club-style training parks and canceled employee travel to professional development courses in exotic locales. These days, most workers tend to receive their training at their desks, the better to maintain productivity.

Web instruction has also helped companies expand internationally because they can easily circulate self-learning modules to a geographically dispersed labor force at relatively low cost. As Australian scholar Paul Nicholson observed, “E-learning in business and training [is] driven by notions of improved productivity and cost reduction, especially in an increasingly globalized business environment.”

Over the past decade, employee enrollment in online programs has grown 20 times faster than has student enrollment at traditional colleges and universities. By 2020, 60 percent of workers receiving tuition reimbursement will be enrolled in online programs, according to EdAssist, a corporate tuition-assistance consulting firm.

Yet despite the corporate romance with online training for employees, companies have had a more troubled relationship with the virtual education offered by colleges and universities. When digital university programs first became available in the mid-1990s, many companies simply ignored them, refusing to provide tuition assistance to employees who enrolled in digital degree programs. Later, when it became apparent that some of the nation’s most selective schools actually offered high-quality online master’s degrees, especially in fields that paralleled industry needs, businesses grew more accepting. 

To be sure, not every program offered a high-quality education, and a number of companies unwittingly allowed their employees to enroll in for-profit online schools that turned out to be scams. “For a time, companies were not as serious about vetting universities as they are today,” says Allan Weisberg, former chief learning officer at Johnson & Johnson. “When we finally looked into some for-profits, we discovered they were scams, and turned them down.”

A number of Fortune 500 companies responded by setting stricter rules on their tuition-reimbursement programs to prevent unsuspecting employees from throwing away money—the company’s as well as their own—on discredited programs at for-profits and other substandard schools. Other companies sensibly steered their workers toward approved universities, which must be ABET-accredited, perform serious research that parallels the firm’s own research interests, and employ significant numbers of the school’s own alumni. “Today, wise companies invest their tuition dollars in established non-profit and public schools,” says Weisberg. “With stricter policies, companies want to make sure that tuition assistance is valuable to all parties—employees, corporations, and universities.”

Ideally, online training should give personnel the chance to acquire new and valuable skills, perhaps in emerging fields like cyber security or data science. Such training helps the company, of course, and it also gives workers an edge in a tricky economy. Earning a degree online is also a huge convenience for workers, whose days are already filled as it is. A mid-career engineer with job, family, and travel responsibilities can more easily study online at his or her own pace—at 10 at night after the kids are in bed—than commute to campus.

Given that the switch to online job training was largely a cost-cutting move, it’s only natural that when MOOCs—massive open online courses—came on the scene in 2011, companies were curious. Because they’re designed to reach hundreds or thousands of students at once, MOOCs benefit from economies of scale that smaller online programs don’t share.

Google and Instagram are experimenting with MOOC provider Coursera’s “Specializations,” which are groups of related courses in key areas of interest to industry. The fee for a Coursera Specialization runs from $150 to $500 for anywhere from three to ten courses, plus a capstone project. The most popular offerings include data science (from Johns Hopkins University), Python (from the University of Michigan), and machine learning (from the University of Washington). Compared to the thousands of dollars for a more conventional training program, MOOCs are a relative bargain. And if a company’s aim is for workers to quickly acquire in-demand skills, rather than earning an accredited degree that may take a year or more to complete, a set of focused MOOCs may be the way to go. This skills-centered approach, known in education circles as competency-based education, is a growing trend at U.S. schools.

But before companies jump on the MOOC bandwagon, they might consider whether their ideal employee is someone with up-to-date skills in a narrow specialty, or a truly thoughtful professional who is prepared to go beyond his or her defined tasks and can adapt flexibly to new conditions and new markets. Ultimately, industry must decide who will fill the labor pipeline: an army of MOOC-trained workers or deeply talented personnel who’ve earned richly complex degrees from the nation’s best universities.

About the Author:

Robert Ubell is Vice Dean Emeritus of Online Learning at NYU’s Tandon School of Engineering. A collection of his essays on digital education, Going Online: Perspectives on Digital Learning, was recently published by Routledge. He can be reached at bobubell@gmail.com. This is the last in a series on MOOCs and online learning.

Sanyogita Shamsunder, Verizon's director of network planning, is shown standing outside near equipment used to test new base station technology.

Profile: Sanyogita Shamsunder, the “Problem Solver” Behind Verizon’s 5G Network

Ask anyone in telecom and they’ll tell you that Verizon has been the most aggressive of any U.S. company in forging ahead on 5G, the highly anticipated wireless network of the future. Last year, Verizon established a technical forum dedicated to hurrying along its development, and became the first U.S. company to promise a commercial deployment in 2017.

Critics have warned Verizon about upsetting the apple cart of international standards-making for 5G, a formal process that isn’t scheduled to conclude until 2020. But Verizon has insisted that its 5G network will be ready to deliver fixed wireless service (that is, service delivered between two stationary points, such as a base station and a rooftop antenna) to customers this year.

The future of that network is largely in the hands of Sanyogita Shamsunder, Verizon’s director of network planning. She leads the team of 15 engineers who are crunching data from early trials, weighing potential business models, and generally laying the groundwork for the company’s ambitious 5G plans.

Shamsunder, who works at Verizon’s operations headquarters in Basking Ridge, N.J., began her engineering career in the mid-1990s, just as the wireless industry was starting to take off. A decade later, she successfully led Verizon’s rollout of LTE, for which she drafted the technical specifications that smartphone manufacturers used to make sure their devices functioned on Verizon’s network.

That experience made her the obvious choice when the company needed someone to steer its massive network to the promised land of 5G. Today, her job is managing Verizon’s team of 5G network planners, most of them engineers and technologists—a leadership role for which her own technical background hadn’t specifically prepared her. As a fellow engineer, she focuses on assigning her team to high-level problems and helping them find solutions.

In her managerial role, Shamsunder often finds she has to nudge her group to make decisions and remind them to take more risks. “They like to lay out all the cases and say, ‘You decide,’ ” she says. “I think when you're working at that level, you need to be able to make decisions. I think many engineers have a difficult time doing that.”

Shamsunder hasn’t always envisioned herself in an executive role. She grew up in the city of Hyderabad, India, and earned her undergraduate degree in electrical engineering and telecom from nearby Osmania University. With it, she became the lone engineer in a family of doctors. That meant “no one could help me with my math,” she jokes.

She thought about taking a job in the industry right away, but instead landed at the University of Virginia, where she completed her Ph.D. in electrical engineering and wrote a thesis on signal processing. “I loved the mathematics behind communications and signal processing in general,” she says.

After spending a few years teaching courses on signal processing as an assistant professor at Colorado State, Shamsunder found her first job in the telecom industry. She became a senior engineer at Stanford Telecommunications, a company that made components for cable modems and TV set-top boxes.

Today, her time at Stanford still stands out as the pivotal experience that persuaded her to abandon the academic world for good. “It was a place where you could apply some of the things you learned in your Ph.D. to cool, practical problems, and that's what really got me interested,” she says.

After her stint at Stanford, Shamsunder switched to working on base stations for Lucent (a telecom company that has since merged with Alcatel and been acquired by Nokia), and later became a principal engineer at a startup called Sandbridge Technologies. At Sandbridge, she built software-defined radio for mobile phones.

During her five-year tenure at Sandbridge, she found herself increasingly involved in discussions about the customer value proposition of specific products. She gradually became more interested in the broader business, beyond her own projects. “There's a lot of good technology around today, but then the business model makes it very difficult to be successful,” she says. “I think it's very important to understand that.”

After a brief stint developing hardware platforms for mobile devices at LinQuest, a semiconductor company, Shamsunder joined Verizon in 2007 as a director in charge of the company’s wireless and technology strategy.

Her first task was to build a team of people from scratch to work with Nokia, Ericsson, Intel, and Samsung on the launch of LTE. Her team’s job was to make sure the devices that manufacturers built would run on Verizon’s network. She led that project for three years, and Verizon’s launch of LTE in 2010 was her proudest professional moment.

Shortly after joining Verizon, Shamsunder also set out to earn her Executive MBA at the Wharton School at the University of Pennsylvania. She wanted to learn how to position products, manage a team, and conduct consumer research. That meant she woke up at 6 a.m., every other Friday, to drive to Philadelphia for two full days of stacked courses. On Saturday night, she returned home to her husband and two young kids.

One of her most memorable lessons from Wharton came as Shamsunder was sitting in the classroom when the iPhone launched in 2007. At that time, AT&T was the only carrier to support it. “All my classmates were like, Why don’t you have this?” she says.

She’d prefer to avoid such questions with 5G. For the past year and a half, her team has coordinated research, development, and testing of several technologies that could bring faster data speeds and lower latency to both base stations and devices.

So far, high-frequency millimeter waves appear to be the leading candidate, as Verizon plans to use them to deploy fixed wireless 5G service this year. “I think fixed wireless is a great use case for us, and for the industry in general, because you can test all the elements in a more controlled environment where there's very limited mobility,” she says. “What we've seen so far doesn't give us any pause to stop and question this.”

But there are two sides to every coin. Along with the thrill of 5G and the privilege of shaping Verizon’s future network also comes a tremendous amount of pressure. But Shamsunder prefers it that way, and always has, from her first days in the budding wireless industry. “I’m a problem solver; I’m an engineer at heart,” she says. “I like challenges, and it's more fun to go into uncharted territory.”

Three elderly male panelists and one younger female moderator sit on a stage above an audience. Another elderly man can be seen on a large video screen.

Avoiding Future Disasters and NASA's Memory Problem

Fifty years ago, on January 27, 1967, three astronauts climbed into an Apollo capsule perched atop a Saturn 1B, the smaller cousin of the Saturn V that would later be used to send astronauts to the moon. The three astronauts—Gus Grissom, a Mercury program veteran; Ed White, the first American to walk in space; and Roger Chaffee, a spaceflight rookie—were not planning on going anywhere. They were doing a test: the goal was simply to operate the spacecraft while disconnected from ground support equipment, as if it were in orbit rather than just sitting on a launch pad at Kennedy Space Center in Florida. The capsule was sealed up, and the astronauts began working through the test procedures. A few hours later, an electrical fire broke out and killed the crew before they could escape the capsule.

Last week, NASA held many commemorations for the anniversary of the Apollo 1 fire. But a forward-looking event at the astronaut base at the Johnson Space Center in Houston stands out as particularly apposite: a panel of emeritus experts discussed the lessons of the Apollo 1 fire—and of the subsequent 1986 Challenger and 2003 Columbia space shuttle disasters—that space workers must stop forgetting.

The veteran program workers discussed their insights in front of a packed house, and the emcee—a freshly minted astronaut from the class of 2012—drove the need for such reminders home with a simple request. After asking those in the audience who had worked on Apollo to rise (about 5 percent did, to applause), she asked those who had come to work after 2003 (and so hadn’t been present for any of the disasters) to rise next. Almost half of the gathering did so.

Although the immediate source of disaster was different in each case—a fire in a cabin filled with pure oxygen for Apollo 1, a failed O-ring seal in a booster for Challenger, and an insulating foam strike on a heat shield for Columbia—“The commonality of the causes of all three catastrophes is sobering,” said panelist Gary Johnson.

Johnson is a retired safety expert who, as a 27-year-old electrical engineer in 1967, had been thrown into the heart of the Apollo 1 fire investigation. He had been the only electrical specialist at the console in the control center in Houston during the routine test, had noticed a sudden “Main Bus A/B” alarm light, then heard the shouts of ‘Fire!’ Within minutes, Johnson recalled, the control room doors were locked, those present were given one phone call to tell their families they’d not be home that night, and the teams plunged into capturing all of the data that had been flowing to Houston from the test up to the moment of the catastrophe.

Within days Johnson was crawling around inside the burnt-out capsule in Florida, examining the remains of cable trays and other wiring. He also was meticulously poring over the close-out photos of the cabin prior to the test run, identifying frayed or even dangling insulation on cabling. And he helped set up test fires in a simulated capsule with wiring matching what he saw had been inside Apollo 1, in the same high-oxygen environment—and remembers being shocked by the ferocity of the flames that a single spark could trigger.

Johnson described how the fundamental design change to the Apollo spacecraft that was made in the wake of the fire—aside from a quick-opening hatch and the decision never to fill the cabin with pure oxygen at full pressure—was installing secure cable trays and conduits to prevent chafing of the insulation around wires. “Gemini [spacecraft] were constructed with all the wiring outside the crew cabin,” he recalled, “but in Apollo the new contractor ran wiring bundles all over the walls and floor of the spacecraft, wrapped in taped-on insulation bought at a local hardware store.” The wires were supposedly protected by temporary panels installed for maintenance, but it was haphazard at best. Grimly, post-fire analysis found too many potential sparking sites to even guess which one had started the fire.

For the Apollo 1 fire, it was clear that the kind of tests that Johnson had performed after the fatal disaster should have been performed by any prudent design team before the astronauts climbed into the capsule. The “assumption of goodness”—the feeling that “it’ll be OK”—had become a rationalization for skipping such tests under the pressure of dominant goals, such as schedules.

Similar testing to challenge any assumption of goodness was also skipped in the lead-up to the two shuttle disasters, which were also commemorated with events last week: the anniversary of the destruction of Challenger and its seven-person crew is January 28, while the anniversary of the loss of Columbia, with seven more astronauts, is February 1. Consequently, awareness of potentially fatal flaws eluded the teams in charge of those missions, too.

Most famously, the loss of Challenger was caused by assuming that flexible O-ring seals in the booster engines would seat properly at ignition even though the ambient temperature was lower than in the pre-flight testing range. Physicist Richard Feynman, a member of the investigation team, performed a simple experiment with a bucket of ice and a sample of the material to show that the assumption—which a shuttle team member had questioned just before launch—was not valid.

The “too late” test that could have prevented the breakup of Columbia was conducted several months after that disaster, under the leadership of investigation team scientist Scott Hubbard. A piece of fuel tank insulation foam had (as on earlier flights) been seen to tear off the tank early in the flight and impact under the left wing’s leading edge. Using a target of a flown thermal protection system panel and a high-velocity airgun, investigators fired the foam onto the panel at the same angle and speed as occurred during the Columbia foam impact, and tore a 50-centimeter hole in the target. Pre-flight impact testing had only used simulated grain-sized space debris, but never the kind of foam that—for years—had been observed tearing free from the tanks.  

Coming up with verification tests is fundamentally a challenge in operational engineering, but another panelist—Glynn Lunney, a flight director in mission control for the near-fatal Apollo 13 lunar mission who later played important roles during the shuttle program—stressed that giving safety teams enough authority to demand such tests and object when they weren’t thorough enough was an organizational challenge. Whenever policy backing the authority of safety teams weakened, it laid the foundations for future imprudent decisions that led to new catastrophes. Though unable to attend due to illness, Frank Borman—the Gemini and Apollo astronaut who had been in charge of the Apollo 1 investigation and the bureaucratic reforms that followed—endorsed Lunney’s insights in a prerecorded set of answers to questions.

Borman demurred when asked whether schedule pressure was a factor in omitting certain tests, affirming his belief that setting schedules was a constructive motivation for prioritizing the problems to be solved. “You really have to manage time as a resource,” Lunney explained. “Big and small things come at you; prioritization of attention is what you have to be tuned into,” he added. Two decades later, after the Challenger was lost, the question of schedule-induced carelessness came up again. But this time, rather than a matter of prioritizing problems, investigators found that the pressure to fly was rooted in the need to impress Congress with the shuttle’s timeliness, in order to convince lawmakers to use the shuttle for all satellite launches rather than fund alternative rockets for military launches.

Walt Cunningham, one of the astronauts on the Apollo 1 backup crew, admitted that the pilots were realistic about the possibilities of disasters. “We figured at some point we’d lose a crew, then learn from it and fix things and go on,” he told the hushed auditorium. NASA certainly did so as a consequence of Apollo 1, but as the symposium stressed, somehow it hadn’t figured out how to maintain the fixes in the organizational charts and in the minds of all of its workers, because periodically it had to relearn the same lessons at the same lamentable cost. Emotionally impactful events such as those held in memory of Apollo 1’s fallen astronauts may represent some of the best chances to avoid forgetting those lessons.
