Tech Talk

Three elderly male panelists and one younger female moderator sit on a stage above an audience. Another elderly man can be seen on a large video screen.

Avoiding Future Disasters and NASA's Memory Problem

Fifty years ago, on January 27, 1967, three astronauts climbed into an Apollo capsule perched atop a Saturn IB, the smaller cousin of the Saturn V that would later send astronauts to the moon. The three astronauts—Gus Grissom, a Mercury program veteran; Ed White, the first American to walk in space; and Roger Chaffee, a spaceflight rookie—were not planning on going anywhere. They were doing a test: the goal was simply to operate the spacecraft while disconnected from ground support equipment, as if it were in orbit rather than sitting on a launch pad at Kennedy Space Center in Florida. The capsule was sealed up, and the astronauts began working through the test procedures. A few hours later, an electrical fire broke out and killed the crew before they could escape the capsule.

Last week, NASA held many commemorations on the anniversary of the Apollo 1 fire. But a forward-looking event at the astronaut base at the Johnson Space Center in Houston stands out as particularly apposite. There, a panel of emeritus experts discussed the lessons of the Apollo 1 fire—and of the subsequent Challenger (1986) and Columbia (2003) space shuttle disasters—that space workers must stop forgetting.

The veteran program workers discussed their insights in front of a packed house, and the emcee—a freshly minted astronaut from the class of 2012—drove home the need for such reminders with a simple request. After asking those in the audience who had worked on Apollo to rise (about 5 percent did, to applause), she asked those who had come to work after 2003—and so hadn’t been present for any of the disasters—to rise next. Almost half of the gathering did so.

Although the immediate source of disaster was different in each case—a fire in a cabin filled with pure oxygen for Apollo 1, a cracked O-ring in a booster for Challenger, and an insulating foam strike on a heat shield for Columbia—“The commonality of the causes of all three catastrophes is sobering,” said panelist Gary Johnson.

Johnson is a retired safety expert who, as a 27-year-old electrical engineer in 1967, was thrown into the heart of the Apollo 1 fire investigation. He had been the only electrical specialist at a console in the control center in Houston during the test; he noticed a sudden “Main Bus A/B” alarm light, then heard the shouts of “Fire!” Within minutes, Johnson recalled, the control room doors were locked, those present were given one phone call to tell their families they’d not be home that night, and the teams plunged into capturing all of the data that had flowed to Houston from the test up to the moment of the catastrophe.

Within days, Johnson was crawling around inside the burnt-out capsule in Florida, examining the remains of cable trays and other wiring. He also pored meticulously over the close-out photos of the cabin taken prior to the test run, identifying frayed or even dangling insulation on cabling. And he helped set up test fires in a simulated capsule with wiring matching what he had seen inside Apollo 1, in the same high-oxygen environment—and remembers being shocked by the ferocity of the flames that a single spark could trigger.

Johnson described how the fundamental design change made to the Apollo spacecraft in the wake of the fire—aside from a quick-opening hatch and the decision never again to fill the cabin with pure oxygen at full pressure—was installing secure cable trays and conduits to prevent chafing of the insulation around wires. “Gemini [spacecraft] were constructed with all the wiring outside the crew cabin,” he recalled, “but in Apollo the new contractor ran wiring bundles all over the walls and floor of the spacecraft, wrapped in taped-on insulation bought at a local hardware store.” The wires were supposedly protected by temporary panels installed for maintenance, but the protection was haphazard at best. Grimly, post-fire analysis found too many potential sparking sites even to guess which one had started the fire.

For the Apollo 1 fire, it was clear that the kind of tests that Johnson had performed after the fatal disaster should have been performed by any prudent design team before the astronauts climbed into the capsule. The “assumption of goodness”—the feeling that “it’ll be OK”—had become a rationalization for skipping such tests under the pressure of dominant goals, such as schedules.

Similar testing to challenge any assumption of goodness was also skipped in the lead-up to the two shuttle disasters, which were also commemorated with events last week: the anniversary of the destruction of Challenger and its seven-person crew is January 28, while the anniversary of the loss of Columbia, with seven more astronauts, is February 1. Consequently, awareness of potentially fatal flaws eluded the teams in charge of those missions, too.

Most famously, the loss of Challenger was caused by assuming that flexible O-ring seals in the solid rocket booster joints would seat properly at ignition even though the ambient temperature was below the pre-flight testing range. Physicist Richard Feynman, a member of the investigation commission, performed a simple experiment with a glass of ice water and a sample of the O-ring material to show that the assumption—which a shuttle team member had questioned just before launch—was not valid.

The “too late” test that could have prevented the breakup of Columbia was conducted several months after that disaster, under the leadership of investigation team scientist Scott Hubbard. A piece of fuel-tank insulation foam had (as on earlier flights) been seen tearing off the tank early in the flight and striking the underside of the left wing’s leading edge. Using a flown thermal-protection-system panel as a target and a high-velocity air gun, investigators fired foam at the panel at the same angle and speed as the Columbia impact—and tore a 50-centimeter hole in the target. Pre-flight impact testing had used only simulated grain-sized space debris, never the kind of foam that, for years, had been observed tearing free from the tanks.

Coming up with verification tests is fundamentally a challenge in operational engineering. But another panelist—Glynn Lunney, a flight director in mission control during the near-fatal Apollo 13 lunar mission who later played important roles in the shuttle program—stressed that giving safety teams enough authority to demand such tests, and to object when they weren’t thorough enough, was an organizational challenge. Whenever policy backing the authority of safety teams weakened, it laid the foundation for imprudent decisions that led to new catastrophes. Though unable to attend due to illness, Frank Borman—the Gemini and Apollo astronaut who had been in charge of the Apollo 1 investigation and the bureaucratic reforms that followed—endorsed Lunney’s insights in a prerecorded set of answers to questions.

Borman demurred when asked whether schedule pressure was a factor in omitting certain tests, affirming his belief that setting schedules was a constructive way to prioritize the problems to be solved. “You really have to manage time as a resource,” Lunney explained. “Big and small things come at you; prioritization of attention is what you have to be tuned into,” he added. Two decades later, after Challenger was lost, the question of schedule-induced carelessness came up again. But rather than a tool for prioritizing problems, investigators found, the pressure to fly stemmed from the need to impress Congress with the shuttle’s timeliness—to convince legislators to use the shuttle for all satellite launches rather than fund alternative rockets for military payloads.

Walt Cunningham, one of the astronauts on the Apollo 1 backup crew, admitted that the pilots were realistic about the possibility of disasters. “We figured at some point we’d lose a crew, then learn from it and fix things and go on,” he told the hushed auditorium. NASA certainly did so after Apollo 1. But as the symposium stressed, the agency never figured out how to keep those fixes embedded in its organizational charts and in the minds of all of its workers; periodically it has had to relearn the same lessons at the same lamentable cost. Emotionally impactful events such as those held in memory of Apollo 1’s fallen astronauts may represent some of the best chances to avoid forgetting those lessons.

Michael F. Tompsett taking a selfie with Eric R. Fossum and Nobukazu Teranishi

Four IEEE Fellows Share Queen Elizabeth Prize for Digital Cameras

Four engineers and IEEE Fellows credited with major innovations in image sensors have won the £1 million Queen Elizabeth Prize for Engineering. Nobel Prize winner George E. Smith and Michael F. Tompsett won for the charge-coupled device (CCD) imager. They share the prize with Nobukazu Teranishi, who improved on the CCD by inventing the pinned photodiode, and with Eric R. Fossum, who invented the CMOS imager—the technology that succeeded the CCD in most applications and allowed for the proliferation of cameras in smartphones and other mobile devices.

In a CCD, each pixel is a potential well. Light falling on the pixel is converted to charge by the photoelectric effect. The charge accumulates in the well and is then pushed from well to well until it reaches circuitry that measures the amount of charge and produces a digital representation of it.
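For readers who think in code, that bucket-brigade readout can be sketched in a few lines of Python; the pixel charge values and the crude rounding “ADC” here are purely illustrative, not a model of any real sensor.

```python
# Toy model of CCD readout: each pixel's charge packet is shifted
# well-to-well toward a single output stage that digitizes it.

def ccd_readout(wells):
    """Shift charge packets out one at a time and digitize each."""
    digital_values = []
    while wells:
        charge = wells.pop(0)                 # packet nearest the output is measured...
        digital_values.append(round(charge))  # ...and crudely "digitized"
        # every remaining packet has effectively moved one well closer
    return digital_values

# Illustrative charge packets (arbitrary units) from a 4-pixel row.
row = [120.4, 3.2, 87.9, 255.0]
print(ccd_readout(row))  # -> [120, 3, 88, 255]
```

The point of the sketch is the ordering: charge is never read in place, but marched to a single measuring circuit, which is why a defect in one well can corrupt every packet that must pass through it.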

The award must be particularly sweet for Tompsett, who missed out on the 2009 Nobel Prize for the CCD imager. At the time of the invention in 1969, Smith was Tompsett’s boss at Bell Laboratories. Smith and the late Willard Boyle came up with the CCD while trying to invent a new kind of memory. Smith has said the invention’s application in imaging was immediately obvious, but it was Tompsett who actually made it happen.

An experimental setup shows a close-up of an infrared laser as it travels through a lens

Data Centers of the Future Could Send Bits Over Infrared Lasers Instead of Fiber Optic Cables

Go ahead and rip out the hundreds of fiber optic cables that snake between server racks at the nearest data center. Replace them with infrared lasers mounted to the tops of each server rack. Use these lasers to beam data to photoreceptors mounted on racks across the room. Then, add tiny movable mirrors to redirect the beams and reconfigure the whole system in an instant.

That’s the vision of Mohsen Kavehrad, a professor of electrical engineering at Penn State. So far, he has built a prototype of one such data link in his lab. Infrared wavelengths are already commonly used to transmit data within fiber optic cables, but now Kavehrad wants to use them to send data over the air. With his system, he has shown that infrared lasers can deliver data rates of 10 gigabits per second.

Kavehrad has published multiple studies on this approach and presented his research today at the Photonics West conference in San Francisco.

Someday, he hopes such lasers could replace the heavy bundles of fiber optic cables found in modern data centers. “If you visit one of these data centers, it's like a jungle, with fiber going between servers,” he says.

In the demonstration system, Kavehrad used lasers to generate an infrared signal with a wavelength of 1550 nanometers, a wavelength commonly used in fiber optic cables. That signal underwent wavelength-division multiplexing, a technique that combines multiple signals, each carried on its own wavelength, into a single beam. Then, he sent the beam through an inexpensive lens.
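Conceptually, wavelength-division multiplexing assigns each data stream its own carrier wavelength and sends all the carriers down one beam. The channel count, grid spacing, and per-channel rate in this Python sketch are assumptions chosen so the total matches the demo’s reported 10 gigabits per second; they are not details from Kavehrad’s setup.

```python
# Bookkeeping sketch of wavelength-division multiplexing (WDM): N independent
# data streams, each on its own wavelength near 1550 nm, share one beam.

CHANNEL_SPACING_NM = 0.8   # assumed dense-WDM grid spacing
PER_CHANNEL_GBPS = 2.5     # assumed per-channel data rate

def wdm_plan(n_channels, start_nm=1550.0):
    """Return the carrier wavelengths and the aggregate rate in Gb/s."""
    wavelengths = [start_nm + i * CHANNEL_SPACING_NM for i in range(n_channels)]
    return wavelengths, n_channels * PER_CHANNEL_GBPS

carriers, aggregate_gbps = wdm_plan(4)
print(carriers)        # four carriers spaced 0.8 nm apart, starting at 1550 nm
print(aggregate_gbps)  # 10.0 Gb/s aggregate
```

The design point is that capacity scales with channel count: the beam’s total rate is just the sum of what each wavelength carries, which is how a single free-space link can keep pace with a bundle of fibers.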

About 15 meters away, he set up another lens and several photodiode receivers. To make the beam steerable, Kavehrad added tiny mirrors, just 2 millimeters in diameter, driven by microelectromechanical systems (MEMS). The link is bidirectional, meaning both ends can send and receive data.

In addition to the infrared signal, the group also broadcast a TV signal using the same setup. They generated it by feeding the entire 1 gigahertz cable TV band into their multiplexer, so it rode along on the same laser beam as the rest of the data. At the other end, they set up an LED TV to show the working channels.

Depending on how many of these links were installed in a data center, Kavehrad thinks his approach could deliver bandwidth and throughput as good as or better than the fiber optic cables, routers, and switches used today. He says an infrared system should easily be able to handle terabytes of data, given modern improvements in lasers and photodetectors. Kavehrad also hopes the mirrors will allow operators to respond more quickly to fluctuations in demand and improve the efficiency of these large operations.

Data centers in the U.S. account for about 2 percent of total electricity consumed in the country. Much of that electricity is spent cooling the 400,000 or so servers they contain. Since data centers are built for maximum demand, roughly 30 percent of these servers are idle at any given time. That means a lot of energy is spent cooling servers that aren’t even running.
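A back-of-the-envelope calculation shows what those figures imply. The 2 percent and 30 percent shares come from the paragraph above; the assumption that cooling load scales with server count is mine, and since only part of a data center’s power goes to cooling, the result should be read as a rough upper bound.

```python
# Rough upper bound on the share of U.S. electricity spent supporting idle servers.
DATA_CENTER_SHARE = 0.02  # data centers' share of U.S. electricity (from the article)
IDLE_FRACTION = 0.30      # fraction of servers idle at any given time (from the article)

# Assumption: cooling and power overhead scale with server count, idle or not.
idle_share_of_us_total = DATA_CENTER_SHARE * IDLE_FRACTION
print(f"Up to {idle_share_of_us_total:.1%} of U.S. electricity supports idle servers")
```

Even that crude bound, a bit over half a percent of national consumption, is why reconfigurable layouts that concentrate the active servers are worth exploring.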

Kavehrad thinks infrared lasers could allow operators to more easily reconfigure server racks so that all the servers that need cooling are in one area, rather than spread out all over the data center. It’s not yet clear how much electricity this might save, or whether the cost of installing lasers would outstrip the savings. His prototype cost about $20,000 to build, though he expects equipment costs would quickly drop if major companies showed an interest, and if integrated electronics continues to see advances.

Jonathan Koomey, a consultant who has researched the energy efficiency of data centers, says it’s hard to know if Kavehrad’s idea will catch on with Google or Netflix, but suggests it could find a smaller market somewhere, perhaps with supercomputers. “Even if it's not something that finds broad use, maybe there are some narrow niche applications that could be critically important,” he says.

Before testing infrared, Kavehrad and his collaborators at Stony Brook University and Carnegie Mellon University wanted to see if high-frequency millimeter waves could instead be a replacement for cables. These waves fall between infrared and conventional radio waves on the electromagnetic spectrum. Unfortunately, the millimeter waves in their tests attenuated, or lost strength, when transmitted over just 10 meters, and “the interference was a killer,” he says.

Once they switched to infrared, the team played it safe and purchased an amplifier so they could boost the signal’s strength. But they found they didn’t need it. Instead, they actually had to weaken the infrared signal once it arrived at the receiver, because it was too strong for their equipment to handle. “If you have to actually attenuate [the signal] to go into receivers, that means you’re in very good shape,” Kavehrad says.

Still, there are other issues that Kavehrad needs to work out. Just as a laptop hums when it’s working hard, server racks vibrate as they process and transfer data. Kavehrad’s team is concerned this vibration might affect the accuracy of their lasers. “If you start vibrating light that's really focused, that’s going to cause a severe loss of data,” he says.

Editor’s note: This post was updated on 13 February. 

A chart showing employer preferences for degrees from different types of campus-based universities and from online institutions.

What Do Employers Really Think About Online Degrees?

This is part of a series on MOOCs and online learning

In a 2012 poll of U.S. employers, respondents were asked which types of colleges they preferred to hire from. The results were unambiguous: Company executives and hiring managers considered online colleges inferior to every type of on-campus college. They even preferred for-profit colleges to online colleges, despite the shady track record of many for-profit schools.

The curious thing about the survey is not the result, but the way the question was posed. Executives were asked to evaluate a variety of on-campus programs—“flagship public,” “private non-profit,” and so on. In contrast, only one choice was given for digital education: online. The unstated assumption is that online learning comes in just one flavor—plain vanilla—while on-campus offerings are far richer—caramel fudge swirl, mint chocolate chip, rum raisin, take your pick. Instead of teasing out insightful responses, the question encouraged respondents to fall back on bias.

A view of demonstrators marching on Pennsylvania Avenue during the Women's March on Washington on January 21, 2017 in Washington, DC. Photo: Noam Galai/WireImage/Getty Images

AI Could Transform the Science of Counting Crowds

The Trump administration’s controversial attempt to declare its recent presidential inauguration as having “the largest audience to witness an inauguration, period,” has inadvertently highlighted the fact that counting crowds remains a painstaking and inexact science. But the rise of artificial intelligence could soon spare crowd scientists the task of manually counting heads.

An illustration of a small two-camera rover on the moon

5 Teams Move Forward in Google Lunar XPrize Moon Race

Five teams have advanced to the last stage of the Google Lunar XPrize, prize administrators announced today. This year, the groups will race to ready their spacecraft for missions to land on the moon, move at least 500 meters across the surface, and transmit images and high-definition video back to Earth. The first to do so will claim the top prize: US $20 million.

A close-up of a chip-sized terahertz modulator shows the path a wave travels through it.

Tiny Terahertz Modulator Packs a Big Data Punch

Researchers covet terahertz waves for their ability to deliver data wirelessly at rates as high as 100 gigabits per second. That’s an astonishingly fast rate to achieve over the air, especially when you consider that (a) the average U.S. broadband speed is 55 megabits per second, and (b) broadband service is piped into homes on fiber optic cables. For decades, academics have tried to develop terahertz-capable components to bring these exceedingly fast speeds to all of us.

Last week, researchers at Tufts University showed off a new terahertz modulator that is the first such device to fit on a chip. This modulator, like those found in any wireless device, can alter the amplitude of a wave during transmission to encode data. But this new modulator can do so at the extraordinarily fast speeds required for waves in the terahertz band.

The Tufts group, led by Sameer Sonkusale, a computer engineering professor, notes that larger terahertz modulators have been shown to work in open space, using a mechanical chopper that simply blocks a terahertz wave or permits it to pass to indicate an “off” or “on” status. But device manufacturers would need a miniaturized, on-chip version to bring ultra-fast terahertz service to future smartphones and tablets.

On the electromagnetic spectrum, terahertz waves fall between the microwaves used by electronic devices and the optical waves that make up visible light. Their frequencies are far above those of the signals used for cellular phones (which in the U.S. include 800 megahertz and 1.9 gigahertz) and for WiFi (which operates at 2.4 GHz). These lower, everyday frequencies are designated “ultra high frequency” by the International Telecommunication Union. Lately, carriers have also been eyeing “extremely high frequency” millimeter waves, which fall roughly between 30 and 300 GHz, for their potential to bring data rates approaching 1 Gb/s to future 5G networks.

Research into the terahertz band, whose waves border the far-infrared, aims to move us even further along the electromagnetic spectrum. Terahertz waves span frequencies between 300 gigahertz and 3 THz. For their first demo, the Tufts researchers broadcast at frequencies from 0.22 THz (220 GHz) to 0.325 THz (325 GHz), but say the modulator is capable of operating at frequencies up to 1 THz.
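To place these bands physically, the free-space wavelength follows from the frequency as λ = c/f. A quick Python check, using only the example frequencies mentioned above:

```python
# Free-space wavelength (lambda = c / f) for the frequencies discussed above.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz):
    """Convert a frequency in hertz to a free-space wavelength in millimeters."""
    return C / freq_hz * 1e3

for label, freq in [("WiFi, 2.4 GHz", 2.4e9),
                    ("millimeter-wave edge, 300 GHz", 300e9),
                    ("Tufts demo top, 0.325 THz", 0.325e12),
                    ("terahertz band edge, 3 THz", 3e12)]:
    print(f"{label}: {wavelength_mm(freq):.3f} mm")
```

The arithmetic makes the naming concrete: 300 GHz corresponds to a wavelength of about 1 millimeter, which is why the band just below terahertz is called “millimeter wave,” while terahertz wavelengths shrink toward a tenth of a millimeter.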

The new modulator, which is only 100 micrometers long, consists of two gold wires that serve as a waveguide. A bed of two-dimensional electron gas rests beneath the wires to dampen the wave as it travels by. By altering the number of electrons in this bed, the researchers can control the degree to which the terahertz waves (generated by a pair of lasers and a photon mixer) are absorbed by the electron gas. By absorbing more or less, they can modulate the waves as they travel across the wires.  

With this setup, the Tufts team achieved data rates of at least 28 Gb/s in the terahertz band. At that rate, Sonkusale says, a user could download a thousand high-definition videos in a fraction of a second. The team reached this rate using on-off keying, one of the simplest modulation schemes; as the name implies, it communicates data through the presence or absence of a signal. More sophisticated modulation techniques could deliver even higher rates.

The group also reported an intensity modulation of 96 percent, meaning the modulator can create two states (“on” and “off,” for example) whose intensities differ by as much as 96 percent. A higher modulation index typically leads to fewer errors during transmission, because it makes it easier for a detector to tell the difference between the two signals.
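A toy Python model shows how on-off keying and modulation depth fit together: bits map to “laser on” or “laser off,” and the receiver thresholds the detected intensity. The 0.96 depth is the reported figure; the detector noise level and decision threshold are illustrative assumptions.

```python
import random

MOD_DEPTH = 0.96   # reported intensity modulation: "off" leaks 4% of "on" power
NOISE = 0.05       # assumed detector noise amplitude (illustrative)
THRESHOLD = 0.5    # assumed decision level between "on" and "off"

def ook_transmit(bits):
    """Map each bit to an optical intensity; 'off' is imperfect and leaks a little light."""
    return [1.0 if b else 1.0 - MOD_DEPTH for b in bits]

def ook_receive(levels, rng):
    """Add detector noise, then threshold each intensity back into a bit."""
    return [1 if (level + rng.uniform(-NOISE, NOISE)) > THRESHOLD else 0
            for level in levels]

rng = random.Random(0)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
received = ook_receive(ook_transmit(bits), rng)
assert received == bits  # 96% depth leaves a wide margin, so no bit errors
```

With a 96 percent depth, the “off” level sits at 0.04 and “on” at 1.0, so even noisy readings land far from the 0.5 threshold; shrink MOD_DEPTH toward zero in this sketch and bit errors start to appear, which is the intuition behind Sonkusale’s “terahertz switch” remark.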

“The modulation index is so high that it literally behaves as a terahertz switch,” Sonkusale says. “You can turn it on—and you get 96 percent transmission and you turn it off—and it completely blocks transmission.”

Of course, a modulator is only one component in a communications system—to build a new cell network that could handle terahertz waves, researchers would also need to build terahertz-capable antennas and receivers, as well as powerful sources to generate the waves in the first place.

One problem with terahertz waves: They are very easily absorbed by many materials as they propagate through the air, which makes it difficult to rely on them for long-distance communication. “Almost everything absorbs the terahertz wave,” says Sonkusale.

For that reason, Sonkusale thinks terahertz waves will first be used to send messages between devices within fairly short range of each other. Such a setup would be similar to Bluetooth, but with much faster data rates. “Imagine your Blu-Ray is connected to your TV without a wire,” he says.  

Sebastian Thrun, MOOC pioneer

Can MOOCs Cure the Tuition Epidemic?

This is part of a series on MOOCs and online learning

In the United States, the biggest obstacle standing in the way of students going to college is not their high-school grades or their test scores. It’s cash. For low-income applicants especially, soaring tuition is one of the main reasons that as many as 40 percent of those accepted in the spring don’t show up in the fall. According to the College Board, for the 2016-17 school year, average tuition and fees are about US $33,500 at private colleges, $9,650 for state residents at public colleges, and $25,000 for out-of-state residents. To these charges, add about $10,000 a year for room and board. In general, tuition has been rising far faster than the cost of living. Going to college can be like buying a new Mercedes every year.

How did this affliction invade our universities? While there are many causes, two factors account for much of it. For public schools, the answer is simple: State legislatures have savaged higher education over the last decade. Previously, up to two-thirds of a state college’s budget came from the state; now only about half of the budget does. Students and their families pick up much of the difference.

The bloating of academic bureaucracy is another cause. While the number of new students and faculty hasn’t grown much, non-academic staff at colleges and universities has ballooned—more than doubling in a generation. What’s more, compensation for administrators is often far higher than faculty pay.

One painful result of the tuition epidemic is that U.S. student debt is now at an all-time peak of more than a trillion dollars. According to the Project on Student Debt, the average college student graduating in 2015 carried debt of about $30,000 [PDF]; in 2004, the average was a more manageable $18,550.

Under stress, most schools have opted for cost-cutting. Perhaps most troubling, they have replaced full-time tenured faculty with cheaper adjuncts. According to a 2013 report, full-time tenured and tenure-track positions at U.S. universities fell from 78 percent of teaching staff in 1969 to just 33 percent in 2009. Over the same period, the share of non-tenured positions in the teaching staff more than tripled. On average, adjuncts earn just a third of what tenured faculty make and have no job security and few benefits; in a sense, they were the bellwethers of the gig economy.

Despite these and other cuts, tuitions continued to climb. So when the first MOOCs—massive open online courses—appeared on the scene, they seemed like the answer to a prayer. As described in my previous post, MOOCs made it possible to reach many thousands of students at a time, so long as they had a computer and a robust Internet connection capable of video streaming. Sebastian Thrun, a MOOC pioneer, famously suggested that the technology would so disrupt education that in 50 years, only 10 institutions of higher learning would remain.

System captures Mach cone from laser pulse

Superfast Camera Sees Shock Wave From Light

A camera system that captures a snapshot of overlapping light waves in a tiny fraction of a second could lead to new methods for imaging, allowing scientists to watch the brain’s neurons interacting or see neutrinos colliding with matter.

An illustration of a key lock surrounded by electrical cords to represent net neutrality.

Is Net Neutrality Good or Bad for Innovation?

Update: On 18 May 2017, the Federal Communications Commission voted in favor of a proposal by Chairman Ajit Pai to review existing rules around net neutrality. Specifically, the proposal would undo the classification of Internet service providers as common carriers under Title II of the Communications Act. That classification subjects ISPs to rules such as those prohibiting carriers from speeding up or slowing down specific content on their networks. The 2-1 vote begins the process required to change the classification and triggers a 90-day review that includes a public comment period. Later this year, commissioners will cast their final votes. The proposal's ultimate goal is to relax regulations that Pai says have prevented ISPs from charging extra to deliver content that requires more bandwidth. It is based largely on an economic argument that such restrictions have hindered investment across the telecom industry. Opponents say that removing the regulations would harm content producers, such as Netflix and Google, as well as small businesses.

The Trump administration is just days away from taking the reins in Washington, D.C., and many technologists are wondering what its arrival will mean for net neutrality. The new leadership is flanked by advisors who have opposed net neutrality in the past, suggesting that U.S. policy could be challenged or even reversed in the year ahead.

The Obama administration made the strong defense of net neutrality a hallmark of its Internet policy, based on the belief that preserving it benefits consumers and promotes innovation. Now, Trump’s administration is likely to argue the opposite, and say that tossing out net neutrality is really what’s best for consumers, companies, and the U.S. economy.

Major tech companies have also lined up on opposing sides of the debate. Google, Apple, Amazon, and Netflix support net neutrality, while Internet service providers (ISPs) including Comcast, Verizon, and AT&T have long railed against it. Everyone argues that their position will spur innovation and economic growth. So who’s right?

Unfortunately, there is no clear answer. Economists have done plenty of modeling on net neutrality over the past eight years, but there isn’t a strong consensus about whether keeping it or throwing it out would be best for consumers, innovation, or the economy. “Nobody has much data,” admits Gerald Faulhaber, an economist and professor emeritus at the University of Pennsylvania who has done his own review on the matter.

To better understand the most compelling arguments about how changing U.S. policy on net neutrality might boost innovation, or kill it, IEEE Spectrum spoke with Nicholas Economides, an economist at New York University who supports net neutrality, and Faulhaber, who generally opposes it.

Let’s start by defining our terms. As you may recall, net neutrality is a regrettably bland term for an important idea: ISPs such as Verizon and Comcast ought to deliver all online content to consumers in the same way, without granting preferential treatment to any particular content.

While it’s worth noting that this concept can be interpreted in many ways, it often means prohibiting ISPs from asking content producers, including Netflix and Google, to pay a higher rate so that their content is delivered at faster speeds.

From the perspective of ISPs, not all content is created equal—if millions of Comcast customers want to live stream the Super Bowl from NBC, that’s going to require a lot more bandwidth and network resources from Comcast than if they were all trying to load a simple website. ISPs argue that allowing them to charge content creators (such as NBC) based on how much bandwidth they consume, or how fast their content must be delivered, is more fair.

Clearly, this policy would also generate extra revenue for ISPs, which they say they will reinvest back into their own networks. This argument is tempting because right now, many ISPs have a near-monopoly in their service areas with little incentive to improve their service (a situation that has led to notoriously low customer satisfaction). If they could charge clients more to move content, ISPs may be more motivated to develop faster service.   

Skeptics say that ISPs won’t spend very much money on achieving faster speeds when they could just slow down service for any content producer who doesn’t fork over enough cash. “What matters to consumers is relative speed,” says Economides. “Being able to delay things creates artificial scarcity, for which content providers are willing to pay.”

Economides also says investors will want any profits returned to them, rather than ploughed into infrastructure. “As soon as AT&T makes money from violating net neutrality, there will be tremendous pressure to give that money to the shareholders,” he says.

Meanwhile, net neutrality supporters say the real innovation at stake is that which comes from content producers such as Netflix, Google, and Facebook. If content producers have to pay ISPs in order for their content to be delivered quickly to customers, that money is not being invested back into those companies.

There’s also a lot of concern for the little guys—small content producers who may not be able to afford the rates ISPs will charge for faster delivery. Critics fear that booting net neutrality will create a “two-tiered” Internet wherein wealthy companies pay for content to be delivered at high speeds, while content from startups or small web publishers languishes in the slow lane.

“The greatest threat to innovation is if new companies, innovative companies, have to pay a lot to be on the same playing field as everybody else,” Economides says. Net neutrality supporters worry there might be secondary effects from limiting the free flow of ideas and information online.

In a sense, though, we already have a two-tiered Internet. Wealthy companies can pay for web designers and software engineers to program sites that load faster and respond more quickly to commands. And a handful of publishers already own most of the sites that we spend the majority of our online time perusing.

Faulhaber doesn’t think eliminating net neutrality would deal a devastating blow to small content producers. He compares tiers of Internet speeds to the various delivery services offered by a post office. One company can pay a little extra for expedited service while others who don’t need rushed delivery simply pay the standard rate.

“We've heard this about how only large firms will be able to do it, but if you look at the way the economy works—that’s not true at all,” he says. “The only people who will buy it are those that need it, and if you’re a small firm and you need it, you’ll buy it.”

In fact, Faulhaber thinks new services could pop up—such as in robotic surgery or gaming—if content producers were guaranteed access to a high-speed lane. “I don’t see this as removing opportunities, I see this as creating opportunities for innovation,” he says.

Right now, there are still a lot of unknowns about how a post–net neutrality world would actually operate. It’s not clear how much faster or slower content might be delivered, or what fees an ISP would charge for each tier of service. Given those gaps in knowledge, it’s hard to predict how these expenses would affect small and large businesses, ISPs, or content producers.

And if net neutrality disappears, what might that mean for consumers? Well, if ISPs could charge content producers more to cover the expense of maintaining their network, they may charge consumers less for home service. Of course, content producers could also wind up passing along the cost of the extra fees they must now pay to consumers, zeroing out any cost savings from ISPs.

While a drop in prices for consumers is theoretically possible, no one knows if it would actually happen in a post–net neutrality world. “There’s no evidence that if net neutrality were violated today, the price would fall,” Economides says. “There could be some theoretical model in which case it would happen, but there are also other models where it doesn’t happen, so there is no bottom line on that.”

Consumers may also benefit from “zero rating,” a perk that some ISPs have already begun to offer. With zero rating, the data a customer spends on specific content is exempted from that customer’s monthly cap. For example, AT&T said in September that it would not count programs streamed through its own DirecTV Now service against customers’ monthly data limit, much to the FCC’s chagrin.
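The zero-rating mechanic above boils down to simple arithmetic on a data cap. A minimal sketch, with an invented cap, invented usage figures, and a hypothetical zero-rated service name:

```python
# Hypothetical zero-rated bill: data used on exempt services does not
# count against the monthly cap. All names and numbers are invented.

CAP_GB = 10.0
ZERO_RATED = {"directv_now"}  # e.g., an ISP's own streaming service

usage_gb = {"directv_now": 6.0, "netflix": 5.0, "web": 1.5}

# Only non-exempt traffic counts toward the cap.
billable = sum(gb for svc, gb in usage_gb.items() if svc not in ZERO_RATED)
overage = max(0.0, billable - CAP_GB)
print(billable, overage)  # 6.5 GB counted, 0.0 GB over the cap
```

Here 12.5 GB of total traffic yields only 6.5 GB of billable usage, which is the critics’ point: the exemption steers customers toward the ISP’s own content.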

Clearly, net neutrality is a complicated issue with many factors at play that can be tweaked to produce both positive and negative scenarios. To try to sum up all of these variables and their potential outcomes, Economides did an analysis in 2015 to estimate the “total surplus” in the market for Internet service. Total surplus is an economic measure of the combined benefit a market delivers to both producers and consumers.

In Economides’ case, he tried to figure out whether keeping or ditching net neutrality would bring us closer to total surplus, by factoring in the profits of ISPs, the profits of content producers, and the benefits to consumers. In the end, he found that sticking to net neutrality “tends to maximize total surplus” for society.
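To make the total-surplus idea concrete (this is a textbook illustration, not Economides’ actual model), consider a market with linear demand and constant marginal cost; all parameter values below are made up:

```python
# Illustrative total-surplus calculation for a textbook market with
# linear demand P = a - b*Q and constant marginal cost c.
# Parameter values are hypothetical, not from Economides' 2015 study.

def total_surplus(a: float, b: float, c: float, price: float) -> float:
    """Consumer surplus plus producer surplus at a given price."""
    q = max(0.0, (a - price) / b)     # quantity demanded at this price
    consumer = 0.5 * (a - price) * q  # triangle under demand, above price
    producer = (price - c) * q        # rectangle between price and cost
    return consumer + producer

# Pricing at marginal cost (the competitive outcome) maximizes surplus;
# a markup shrinks it, because fewer trades happen.
competitive = total_surplus(a=100, b=1, c=20, price=20)  # 3200.0
markup = total_surplus(a=100, b=1, c=20, price=60)       # 2400.0
print(competitive, markup)
```

The comparison shows why an analysis like this can come out either way: the result depends entirely on how demand, costs, and pricing power are modeled.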

With that in mind, he also thinks preserving net neutrality would promote innovation and economic development across the most businesses. “If you’re a Republican and you believe you should support business, then you should be pro–net neutrality because most businesses benefit from net neutrality,” he says. “The only businesses that benefit from violations of net neutrality are ISPs. If you think about it, 95 percent of businesses in the U.S. are not ISPs.”

Faulhaber, of course, sees any impact from doing away with net neutrality as simply another reasonable cost of doing business. “Just like we see in other businesses that offer different qualities of service, you get to choose what you want, and that’s fine,” he says. “This is standard in the economy.”

So what will actually happen with the new administration? For the past few years, net neutrality was preserved by the U.S. Federal Communications Commission under Obama’s watch. Now, though, Trump could change all that. His transition team includes several opponents of net neutrality, including Vice President–elect Mike Pence and Rep. Marsha Blackburn. And the trio he selected to coordinate the handoff of the FCC to his new administration are all against it.

In the coming weeks, Trump will also have the opportunity to appoint three new commissioners to lead the FCC, including a chair. His new appointees will join two current Republican members—Ajit Pai and Michael O’Rielly—who have both criticized net neutrality. The FCC is an independent federal agency overseen by Congress, which is now also under Republican control.

That said, anyone who tries to reverse U.S. policy on net neutrality will likely have a difficult road ahead. Public sentiment will not be on their side: when the FCC solicited public comments on the issue in 2014, it received a record 3.7 million comments, the vast majority in favor of net neutrality.

Discarding it, Economides says, may not be as simple as net neutrality opponents would like to believe. “I’m not saying it’s not going to happen; I’m just saying it’s not going to be easy,” he says.

