
Sanyogita Shamsunder, Verizon’s director of network planning, standing outside near equipment used to test new base station technology.

Profile: Sanyogita Shamsunder, the “Problem Solver” Behind Verizon’s 5G Network

Ask anyone in telecom and they’ll tell you that Verizon has been the most aggressive of any U.S. company in forging ahead on 5G, the highly anticipated wireless network of the future. Last year, Verizon established a technical forum dedicated to hurrying along its development, and became the first U.S. company to promise a commercial deployment in 2017.

Critics have warned Verizon about upsetting the apple cart of international standards-making for 5G, a formal process that isn’t scheduled to conclude until 2020. But Verizon has insisted that its 5G network will be ready to deliver fixed wireless service (service delivered between two stationary points, such as a base station and a rooftop antenna) to customers this year.

The future of that network is largely in the hands of Sanyogita Shamsunder, Verizon’s director of network planning. She leads the team of 15 engineers who are crunching data from early trials, weighing potential business models, and generally laying the groundwork for the company’s ambitious 5G plans.

Shamsunder, who works at Verizon’s operations headquarters in Basking Ridge, N.J., began her engineering career in the mid-1990s, just as the wireless industry was starting to take off. A decade later, she successfully led Verizon’s rollout of LTE, for which she drafted the technical specifications that smartphone manufacturers used to make sure their devices functioned on Verizon’s network.

That experience made her the obvious choice when the company needed someone to steer its massive network to the promised land of 5G. Today, she manages Verizon’s team of 5G network planners, made up largely of engineers and technologists—a leadership role for which her own technical background hadn’t specifically prepared her. As a fellow engineer, she focuses on assigning her team to high-level problems and helping them find solutions.

In her managerial role, Shamsunder often finds she has to nudge her group to make decisions and remind them to take more risks. “They like to lay out all the cases and say, ‘You decide,’ ” she says. “I think when you're working at that level, you need to be able to make decisions. I think many engineers have a difficult time doing that.”

Shamsunder hasn’t always envisioned herself in an executive role. She grew up in the city of Hyderabad, India, and earned her undergraduate degree in electrical engineering and telecom from nearby Osmania University. With it, she became the lone engineer in a family of doctors. That meant “no one could help me with my math,” she jokes.

She thought about taking a job in the industry right away, but instead landed at the University of Virginia, where she completed her Ph.D. in electrical engineering and wrote a thesis on signal processing. “I loved the mathematics behind communications and signal processing in general,” she says.

After spending a few years teaching courses on signal processing as an assistant professor at Colorado State, Shamsunder found her first job in the telecom industry. She became a senior engineer at Stanford Telecommunications, a company that made components for cable modems and TV set-top boxes.

Today, her time at Stanford still stands out as the pivotal experience that persuaded her to abandon the academic world for good. “It was a place where you could apply some of the things you learned in your Ph.D. to cool, practical problems, and that's what really got me interested,” she says.

After her stint at Stanford, Shamsunder switched to working on base stations for Lucent (a telecom company that has since merged with Alcatel and been acquired by Nokia), and later became a principal engineer at a startup called Sandbridge Technologies. At Sandbridge, she built software-defined radio for mobile phones.

During her five-year tenure at Sandbridge, she found herself increasingly involved in discussions about the customer value proposition of specific products. She gradually became more interested in the broader business, beyond her own projects. “There's a lot of good technology around today, but then the business model makes it very difficult to be successful,” she says. “I think it's very important to understand that.”

After a brief stint developing hardware platforms for mobile devices at LinQuest, a semiconductor company, Shamsunder joined Verizon in 2007 as a director in charge of the company’s wireless and technology strategy.

Her first task was to build a team of people from scratch to work with Nokia, Ericsson, Intel, and Samsung on the launch of LTE. Her team’s job was to make sure the devices that manufacturers built would run on Verizon’s network. She led that project for three years, and Verizon’s launch of LTE in 2010 was her proudest professional moment.

Shortly after joining Verizon, Shamsunder also set out to earn her Executive MBA at the Wharton School of the University of Pennsylvania. She wanted to learn how to position products, manage a team, and conduct consumer research. That meant waking up at 6 a.m. every other Friday to drive to Philadelphia for two full days of back-to-back courses. On Saturday night, she returned home to her husband and two young kids.

One of her most memorable lessons from Wharton came when the iPhone launched in 2007, while she was sitting in the classroom. At the time, AT&T was the only carrier to support it. “All my classmates were like, Why don’t you have this?” she says.

She’d prefer to avoid such questions with 5G. For the past year and a half, her team has coordinated research, development, and testing of several technologies that could bring faster data speeds and lower latency to both base stations and devices.

So far, high-frequency millimeter waves appear to be the leading candidate, as Verizon plans to use them to deploy fixed wireless 5G service this year. “I think fixed wireless is a great use case for us, and for the industry in general, because you can test all the elements in a more controlled environment where there's very limited mobility,” she says. “What we've seen so far doesn't give us any pause to stop and question this.”

But there are two sides to every coin. With the thrill of 5G and the privilege of shaping Verizon’s future network comes a tremendous amount of pressure. Shamsunder prefers it that way, though, and always has, from her first days in the budding wireless industry. “I’m a problem solver; I’m an engineer at heart,” she says. “I like challenges, and it's more fun to go into uncharted territory.”

Three elderly male panelists and one younger female moderator sit on a stage above an audience. Another elderly man can be seen on a large video screen.

Avoiding Future Disasters and NASA's Memory Problem

Fifty years ago, on January 27, 1967, three astronauts climbed into an Apollo capsule perched atop a Saturn IB, the smaller cousin of the Saturn V that would later send astronauts to the moon. The three astronauts—Gus Grissom, a Mercury program veteran; Ed White, the first American to walk in space; and Roger Chaffee, a spaceflight rookie—were not planning on going anywhere. The goal of the test was simply to operate the spacecraft while disconnected from ground support equipment, as if it were in orbit rather than sitting on a launch pad at Kennedy Space Center in Florida. The capsule was sealed up, and the astronauts began working through the test procedures. A few hours later, an electrical fire broke out and killed the crew before they could escape the capsule.

Last week, NASA held many commemorations for the anniversary of the Apollo 1 fire. But a forward-looking event at the astronaut base at the Johnson Space Center in Houston stands out as particularly apposite: a panel of emeritus experts discussed what space workers keep forgetting, and what the Apollo 1 fire—and the subsequent 1986 Challenger and 2003 Columbia space shuttle disasters—still has to teach.

The veteran program workers discussed their insights in front of a packed house, and the emcee—a freshly minted astronaut from the class of 2012—drove home the need for such reminders with a simple request. After asking those in the audience who had worked on Apollo to rise (about 5 percent did, to applause), she asked those who had come to work after 2003 (and so hadn’t been present for any of the disasters) to rise next. Almost half of the gathering did so.

Although the immediate source of disaster was different in each case—a fire in a cabin filled with pure oxygen for Apollo 1, a cracked O-ring in a booster for Challenger, and an insulating foam strike on a heat shield for Columbia—“The commonality of the causes of all three catastrophes is sobering,” said panelist Gary Johnson.

Johnson is a retired safety expert who, as a 27-year-old electrical engineer in 1967, had been thrown into the heart of the Apollo 1 fire investigation. He had been the only electrical specialist at the console in the control center in Houston during the routine test, had noticed a sudden “Main Bus A/B” alarm light, then heard the shouts of ‘Fire!’ Within minutes, Johnson recalled, the control room doors were locked, those present were given one phone call to tell their families they’d not be home that night, and the teams plunged into capturing all of the data that had been flowing to Houston from the test up to the moment of the catastrophe.

Within days Johnson was crawling around inside the burnt-out capsule in Florida, examining the remains of cable trays and other wiring. He also meticulously pored over the close-out photos of the cabin taken prior to the test run, identifying frayed or even dangling insulation on cabling. And he helped set up test fires in a simulated capsule with wiring matching what he had seen inside Apollo 1, in the same high-oxygen environment—and remembers being shocked by the ferocity of the flames that a single spark could trigger.

Johnson described how the fundamental design change made to the Apollo spacecraft in the wake of the fire—aside from a quick-opening hatch and the decision never again to fill the cabin with pure oxygen at full pressure—was the installation of secure cable trays and conduits to prevent chafing of the insulation around wires. “Gemini [spacecraft] were constructed with all the wiring outside the crew cabin,” he recalled, “but in Apollo the new contractor ran wiring bundles all over the walls and floor of the spacecraft, wrapped in taped-on insulation bought at a local hardware store.” The wires were supposedly protected by temporary panels installed for maintenance, but the protection was haphazard at best. Grimly, post-fire analysis found too many potential sparking sites to even guess which one had started the fire.

In the case of Apollo 1, it was clear that the kind of tests Johnson performed after the fatal fire should have been run by any prudent design team before the astronauts ever climbed into the capsule. The “assumption of goodness”—the feeling that “it’ll be OK”—had become a rationalization for skipping such tests under the pressure of dominant goals, such as schedules.

Similar testing to challenge an assumption of goodness was also skipped in the lead-up to the two shuttle disasters, which were likewise commemorated with events last week: the anniversary of the destruction of Challenger and its seven-person crew is January 28, while the anniversary of the loss of Columbia, with seven more astronauts, is February 1. Consequently, awareness of potentially fatal flaws eluded the teams in charge of those missions, too.

Most famously, the loss of Challenger was caused by assuming that flexible O-ring seals in the solid rocket boosters would seat properly at ignition even though the ambient temperature was below the pre-flight testing range. Physicist Richard Feynman, a member of the investigation team, performed a simple experiment with a glass of ice water and a sample of the O-ring material to show that the assumption—which a shuttle team member had questioned just before launch—was not valid.

The “too late” test that could have prevented the breakup of Columbia was conducted several months after that disaster, under the leadership of investigation team scientist Scott Hubbard. A piece of fuel-tank insulation foam had (as on earlier flights) been seen tearing off the tank early in the flight and striking the underside of the left wing’s leading edge. Using a flown thermal-protection-system panel as a target and a high-velocity air gun, investigators fired foam at the panel at the same angle and speed as during the Columbia impact, and tore a 50-centimeter hole in the target. Pre-flight impact testing had used only simulated grain-size space debris, never the kind of foam that—for years—had been observed tearing free from the tanks.

Coming up with verification tests is fundamentally an engineering challenge, but another panelist—Glynn Lunney, a flight director in mission control for the near-fatal Apollo 13 lunar mission who later played important roles during the shuttle program—stressed that giving safety teams enough authority to demand such tests, and to object when they weren’t thorough enough, was an organizational challenge. Whenever the policy backing that authority weakened, it laid the foundation for imprudent decisions that led to new catastrophes. Though unable to attend because of illness, Frank Borman—the Gemini and Apollo astronaut who had led the Apollo 1 investigation and the bureaucratic reforms that followed—endorsed Lunney’s insights in a prerecorded set of answers to questions.

Borman demurred when asked whether schedule pressure was a factor in omitting certain tests, affirming his belief that setting schedules was a constructive motivation for prioritizing the problems to be solved. “You really have to manage time as a resource,” Lunney explained. “Big and small things come at you; prioritization of attention is what you have to be tuned into,” he added. Two decades later, after Challenger was lost, the question of schedule-induced carelessness came up again. This time, though, investigators found that the pressure to fly stemmed not from a need to prioritize problems but from the need to impress Congress with the shuttle’s timeliness, in order to convince legislators to use the shuttle for all satellite launches rather than fund alternative rockets for military launches.

Walt Cunningham, one of the astronauts on the Apollo 1 backup crew, admitted that the pilots were realistic about the possibilities of disasters. “We figured at some point we’d lose a crew, then learn from it and fix things and go on,” he told the hushed auditorium. NASA certainly did so as a consequence of Apollo 1, but as the symposium stressed, somehow it hadn’t figured out how to maintain the fixes in the organizational charts and in the minds of all of its workers, because periodically it had to relearn the same lessons at the same lamentable cost. Emotionally impactful events such as those held in memory of Apollo 1’s fallen astronauts may represent some of the best chances to avoid forgetting those lessons.

Michael F. Tompsett taking a selfie with Eric R. Fossum and Nobukazu Teranishi

Four IEEE Fellows Share Queen Elizabeth Prize for Digital Cameras

Four engineers and IEEE Fellows credited with major innovations in image sensors have won the £1 million Queen Elizabeth Prize for Engineering. Nobel Prize winner George E. Smith and Michael F. Tompsett won for the charge-coupled device (CCD) imager. They share the prize with Nobukazu Teranishi, who improved on the CCD by inventing the pinned photodiode, and with Eric R. Fossum, who invented the CMOS imager—the technology that succeeded the CCD in most applications and allowed cameras to proliferate in smartphones and other mobile devices.

In a CCD, each pixel is a potential well. Light falling on the pixel is converted to charge via the photoelectric effect. The charge accumulates in the well and is then pushed from well to well until it reaches circuitry that measures the amount of charge and produces a digital representation of it.
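To make that readout scheme concrete, here is a minimal sketch of the bucket-brigade process, assuming an idealized one-dimensional row of pixels with perfect charge-transfer efficiency; the quantum-efficiency figure, pixel values, and function names are invented for illustration (real CCDs shift charge across a 2-D array and lose a tiny fraction of each packet per transfer):

```python
# A toy model of CCD readout: photons become charge, and charge packets
# are shifted well-to-well ("bucket brigade") to a single output node.
# Quantum efficiency and pixel values are assumptions for illustration.

def expose(photon_counts, quantum_efficiency=0.7):
    """Photoelectric effect: convert incident photons to stored charge."""
    return [int(p * quantum_efficiency) for p in photon_counts]

def read_out(wells):
    """Shift each charge packet toward the output and digitize it in turn."""
    samples = []
    remaining = list(wells)
    while remaining:
        samples.append(remaining.pop(0))  # packet at the edge reaches the sense node
        # every other packet has now shifted one well closer to the output
    return samples

row = expose([120, 80, 200, 15])  # photons striking a 4-pixel row
print(read_out(row))              # one digital value per pixel: [84, 56, 140, 10]
```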

The award must be particularly sweet for Tompsett, who missed out on the 2009 Nobel Prize for the CCD imager. At the time of the invention in 1969, Smith was Tompsett’s boss at Bell Laboratories. Smith and the late Willard Boyle came up with the CCD while trying to invent a new kind of memory. Smith has said the invention’s application in imaging was immediately obvious, but it was Tompsett who actually made it happen.

An experimental setup shows a close-up of an infrared laser as it travels through a lens

Data Centers of the Future Could Send Bits Over Infrared Lasers Instead of Fiber Optic Cables

Go ahead and rip out the hundreds of fiber optic cables that snake between server racks at the nearest data center. Replace them with infrared lasers mounted to the tops of each server rack. Use these lasers to beam data to photoreceptors mounted to more racks across the room. Then, add tiny moveable mirrors to redirect the beams and reconfigure the whole system in an instant.

That’s the vision of Mohsen Kavehrad, a professor of electrical engineering at Penn State. So far, he has built a prototype of one such data link in his lab. Infrared wavelengths are already commonly used to transmit data within fiber optic cables, but now Kavehrad wants to use them to send data over the air. With his system, he has shown that infrared lasers can deliver data rates of 10 gigabits per second.

Kavehrad has published multiple studies on this approach and presented his research today at the Photonics West conference in San Francisco.

Someday, he hopes such lasers could replace the heavy bundles of fiber optic cables found in modern data centers. “If you visit one of these data centers, it's like a jungle, with fiber going between servers,” he says.

In the demonstration system, Kavehrad used lasers to generate an infrared signal with a wavelength of 1550 nanometers, a wavelength commonly used in fiber optic cables. That signal underwent wavelength-division multiplexing, a technique that packs multiple signals, each on its own wavelength, onto a single laser beam. Then, he sent the beam through an inexpensive lens.

About 15 meters away, he set up another lens and several photodiode receivers. To make the beam steerable, Kavehrad added tiny mirrors, just 2 millimeters in diameter, driven by MEMS (microelectromechanical systems). The link is bidirectional, which means both ends can send and receive data.

In addition to the infrared signal, the group also broadcast a TV signal using the same setup. They generated it by feeding the entire 1 gigahertz cable TV band into their multiplexer, so it rode along on the same laser beam as the rest of the data. At the other end, they set up an LED TV to show the working channels.
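As a rough illustration of how wavelength-division multiplexing lets several streams, including that TV band, share one beam, consider this toy sketch; the wavelengths near 1550 nm match the article, but the channel spacing and bit patterns are invented for illustration:

```python
# Toy model of wavelength-division multiplexing (WDM): each data stream
# rides on its own wavelength, and all share a single laser beam.
# Channel wavelengths (nm) and payloads are assumptions for illustration.

channels = {
    1549.2: [1, 0, 1, 1],   # data stream A
    1550.0: [0, 1, 1, 0],   # data stream B
    1550.8: [1, 1, 0, 0],   # e.g., a digitized cable-TV band riding along
}

# "Multiplex": the beam carries every (wavelength, bits) channel at once.
beam = list(channels.items())

def demux(beam, wavelength, tolerance_nm=0.1):
    """Receiver side: a wavelength-selective filter picks out one channel."""
    for wl, bits in beam:
        if abs(wl - wavelength) < tolerance_nm:
            return bits
    return None

print(demux(beam, 1550.0))  # recovers data stream B: [0, 1, 1, 0]
```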

Depending on how many of these links were installed in a data center, Kavehrad thinks his approach could deliver bandwidth and throughput as good as or better than that of the fiber optic cables, routers, and switches used today. He says an infrared system should easily be able to handle terabytes of data, given modern improvements in lasers and photodetectors. Kavehrad also hopes the mirrors will allow operators to respond more quickly to fluctuations in demand, and improve the efficiency of these large operations.

Data centers in the U.S. account for about 2 percent of total electricity consumed in the country. Much of that electricity is spent cooling the 400,000 or so servers they contain. Since data centers are built for maximum demand, roughly 30 percent of these servers are idle at any given time. That means a lot of energy is spent cooling servers that aren’t even running.
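A back-of-envelope sketch shows the scale of that waste; the server count and idle fraction come from the paragraph above, while the per-server cooling power is purely an assumed figure:

```python
# Rough estimate of cooling spent on idle servers. Only the server count
# and idle fraction come from the article; the cooling power per server
# is an assumption chosen for illustration.
servers = 400_000
idle_fraction = 0.30                 # ~30 percent idle at any given time
cooling_watts_per_server = 100       # assumed cooling overhead per server

idle_servers = int(servers * idle_fraction)
wasted_cooling_kw = idle_servers * cooling_watts_per_server / 1000
print(f"{idle_servers:,} idle servers -> ~{wasted_cooling_kw:,.0f} kW of cooling")
```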

Kavehrad thinks infrared lasers could allow operators to more easily reconfigure server racks so that all the servers that need cooling are in one area, rather than spread out all over the data center. It’s not yet clear how much electricity this might save, or whether the cost of installing lasers would outstrip the savings. His prototype cost about $20,000 to build, though he expects equipment costs would quickly drop if major companies showed an interest, and if integrated electronics continues to see advances.

Jonathan Koomey, a consultant who has researched the energy efficiency of data centers, says it’s hard to know if Kavehrad’s idea will catch on with Google or Netflix, but suggests it could find a smaller market somewhere, perhaps with supercomputers. “Even if it's not something that finds broad use, maybe there are some narrow niche applications that could be critically important,” he says.

Before testing infrared, Kavehrad and his collaborators at Stony Brook University and Carnegie Mellon University wanted to see if high-frequency millimeter waves could instead be a replacement for cables. These waves fall between infrared and conventional radio waves on the electromagnetic spectrum. Unfortunately, the millimeter waves in their tests attenuated, or lost strength, when transmitted over just 10 meters, and “the interference was a killer,” he says.

Once they switched to infrared, the team played it safe and purchased an amplifier so they could boost the signal’s strength, but found they didn’t need it. Instead, they actually had to weaken the infrared signal once it arrived at the receiver, because it was too strong for their equipment to handle. “If you have to actually attenuate [the signal] to go into receivers, that means you're in very good shape,” Kavehrad says.

Still, there are other issues that Kavehrad needs to work out. Just as a laptop hums when it’s working hard, server racks vibrate as they process and transfer data. Kavehrad’s team is concerned this vibration might affect the accuracy of their lasers. “If you start vibrating light that's really focused, that’s going to cause a severe loss of data,” he says.

Editor’s note: This post was updated on 13 February. 

A chart showing employer preferences for degrees from different types of campus-based universities and online.

What Do Employers Really Think About Online Degrees?

This is part of a series on MOOCs and online learning

In a 2012 poll of U.S. employers, respondents were asked which types of colleges they preferred to hire from. The results were unambiguous: Company executives and hiring managers considered online colleges inferior to every type of on-campus college. They even preferred for-profit colleges to online colleges, despite the shady track record of many for-profit schools.

The curious thing about the survey is not the result, but the way the question was posed. Executives were asked to evaluate a variety of on-campus programs—“flagship public,” “private non-profit,” and so on. In contrast, only one choice was given for digital education: online. The unstated assumption is that online learning comes in just one flavor—plain vanilla—while on-campus offerings are far richer—caramel fudge swirl, mint chocolate chip, rum raisin, take your pick. Instead of teasing out insightful responses, the question encouraged respondents to fall back on bias.

A view of demonstrators marching on Pennsylvania Avenue during the Women's March on Washington on January 21, 2017 in Washington, DC. Photo: Noam Galai/WireImage/Getty Images

AI Could Transform the Science of Counting Crowds

The Trump administration’s controversial attempt to declare its recent presidential inauguration as having “the largest audience to witness an inauguration, period,” has inadvertently highlighted the fact that counting crowds remains a painstaking and inexact science. But the rise of artificial intelligence could soon spare crowd scientists the task of manually counting heads.

An illustration of a small two-camera rover on the moon

5 Teams Move Forward in Google Lunar XPrize Moon Race

Five teams have advanced to the last stage of the Google Lunar XPrize, prize administrators announced today. This year, the groups will race to ready their spacecraft for missions to land on the moon, move at least 500 meters across the surface, and transmit images and high-definition video back to Earth. The first to do so will claim the top prize: US $20 million.

A close-up of a chip-sized terahertz modulator shows the path a wave travels through it.

Tiny Terahertz Modulator Packs a Big Data Punch

Researchers covet terahertz waves for their ability to deliver data wirelessly at rates as high as 100 gigabits per second. That’s an unbelievably fast rate to achieve over the air, especially when you consider that (a) the average U.S. broadband speed is 55 megabits per second, and (b) that broadband service is piped into homes on fiber optic cables. For decades, academics have tried to develop terahertz-capable components to bring these exceedingly fast speeds to all of us.

Last week, researchers at Tufts University showed off a new terahertz modulator that is the first such device to fit on a chip. This modulator, like any modulator found in any wireless device, can alter the amplitude of a wave during its transmission to encode data. But this new modulator can do so at the impossibly fast speeds required for waves found in the terahertz band.  

The Tufts group, led by Sameer Sonkusale, a computer engineering professor, notes that larger terahertz modulators have been shown to work in open space, using a mechanical chopper that simply blocks a terahertz wave or permits it to pass in order to indicate an “off” or “on” status. But device manufacturers would need a miniaturized, on-chip version to bring ultra-fast terahertz service to future smartphones and tablets.

On the electromagnetic spectrum, terahertz waves fall between the microwaves used by electronic devices and the optical waves that make up visible light. Their frequencies are far above those of the signals used for cellular phones (which in the U.S. include 800 megahertz and 1.9 gigahertz) and for WiFi (which operates at 2.4 GHz). These lower, everyday frequencies are defined as “ultra high frequency” by the International Telecommunication Union. Lately, carriers have also been eyeing “extremely high frequency” millimeter waves, which fall roughly between 30 and 300 GHz, for their potential to bring data rates approaching 1 Gb/s to future 5G networks.

Research into the terahertz band, whose waves border the far infrared, aims to move us even further along the electromagnetic spectrum. Terahertz waves occupy frequencies between 300 gigahertz and 3 THz. For their first demo, the Tufts researchers broadcast at frequencies from 0.22 THz (220 GHz) to 0.325 THz (325 GHz), but say the modulator is capable of operating at frequencies up to 1 THz.
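For a sense of where these bands sit, the wavelength of each is just the speed of light divided by the frequency; here is a quick sketch using the frequencies named above:

```python
# Wavelengths for the frequencies mentioned in the article,
# via wavelength = c / frequency.
C = 3.0e8  # speed of light, m/s

bands = {
    "cellular (800 MHz)": 800e6,
    "WiFi (2.4 GHz)": 2.4e9,
    "millimeter-wave upper edge (300 GHz)": 300e9,
    "Tufts demo, low end (0.22 THz)": 0.22e12,
    "terahertz band, top (3 THz)": 3e12,
}

for name, freq in bands.items():
    print(f"{name}: {C / freq * 1000:.3g} mm")
# cellular ~375 mm, WiFi ~125 mm, 300 GHz = 1 mm, 3 THz = 0.1 mm
```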

The new modulator, which is only 100 micrometers long, consists of two gold wires that serve as a waveguide. A bed of two-dimensional electron gas rests beneath the wires to dampen the wave as it travels by. By altering the number of electrons in this bed, the researchers can control the degree to which the terahertz waves (generated by a pair of lasers and a photon mixer) are absorbed by the electron gas. By absorbing more or less, they can modulate the waves as they travel across the wires.  

With this setup, the Tufts team achieved data rates of 28 Gb/s in the terahertz band. At that rate, Sonkusale says, a user could download a thousand high-definition videos in a fraction of a second. The team reached this rate using on-off keying, one of the simplest modulation schemes. As the name implies, it communicates data through the presence or absence of a signal. More sophisticated modulation techniques could deliver even higher rates.
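Here is a minimal sketch of on-off keying over an idealized, noiseless link; the amplitude levels, threshold, and function names are assumptions for illustration:

```python
# On-off keying (OOK): a 1 bit transmits the carrier, a 0 bit suppresses it.
def ook_modulate(bits, on_level=1.0, off_level=0.0):
    """Map each bit to a carrier amplitude for one symbol period."""
    return [on_level if b else off_level for b in bits]

def ook_demodulate(samples, threshold=0.5):
    """Recover bits by comparing received intensity to a threshold."""
    return [1 if s > threshold else 0 for s in samples]

bits = [1, 0, 1, 1, 0, 0, 1]
received = ook_modulate(bits)            # idealized channel: no noise or loss
assert ook_demodulate(received) == bits  # round-trips cleanly
```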

The group also reported an intensity modulation of 96 percent. This means the modulator could create two states (“on” and “off,” for example) whose intensities differ by as much as 96 percent, making them easy to tell apart. A higher modulation index typically leads to fewer errors during transmission, because it makes it easier for a detector to tell the difference between signals.
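In code, that figure is just the relative separation of the two intensity levels; the example values below are chosen only to reproduce the reported 96 percent:

```python
# Modulation depth: how far apart the "on" and "off" intensities are,
# relative to "on". The intensities here are assumed example values.
I_on, I_off = 1.00, 0.04
depth = (I_on - I_off) / I_on
print(f"modulation depth: {depth:.0%}")  # -> 96%
```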

“The modulation index is so high that it literally behaves as a terahertz switch,” Sonkusale says. “You can turn it on—and you get 96 percent transmission and you turn it off—and it completely blocks transmission.”

Of course, a modulator is only one component in a communications system—to build a new cell network that could handle terahertz waves, researchers would also need to build terahertz-capable antennas and receivers, as well as powerful sources to generate the waves in the first place.

One problem with terahertz waves: They are very easily absorbed by many materials as they propagate through the air, which makes it difficult to rely on them for long-distance communication. “Almost everything absorbs the terahertz wave,” says Sonkusale.

For that reason, Sonkusale thinks terahertz waves will first be used to send messages between devices within fairly short range of each other. Such a setup would be similar to Bluetooth, but with much faster data rates. “Imagine your Blu-ray player is connected to your TV without a wire,” he says.

Sebastian Thrun, MOOC pioneer

Can MOOCs Cure the Tuition Epidemic?

This is part of a series on MOOCs and online learning

In the United States, the biggest obstacle standing in the way of students going to college is not their high-school grades or their test scores. It’s cash. For low-income applicants especially, soaring tuition is one of the main reasons that as many as 40 percent of those accepted in the spring don’t show up in the fall. According to the College Board, for the 2016-17 school year, average tuition and fees are about US $33,500 at private colleges, $9,650 for state residents at public colleges, and $25,000 for out-of-state residents. To these charges, add about $10,000 a year for room and board. In general, tuition has been rising far faster than the cost of living. Going to college can be like buying a new Mercedes every year.
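Summing those College Board figures gives a rough annual cost of attendance for each type of school; a minimal sketch, using the room-and-board average quoted above:

```python
# Rough annual cost of attendance, using the College Board figures above.
tuition_and_fees = {
    "private": 33_500,
    "public, in-state": 9_650,
    "public, out-of-state": 25_000,
}
room_and_board = 10_000  # approximate average per year

for school, fees in tuition_and_fees.items():
    print(f"{school}: ~${fees + room_and_board:,} per year")
# private: ~$43,500 per year -- roughly the price of a new Mercedes
```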

How did this affliction invade our universities? While there are many causes, two factors account for much of it. For public schools, the answer is simple: State legislatures have savaged higher education over the last decade. Previously, up to two-thirds of a state college’s budget came from the state; now only about half of the budget does. Students and their families pick up much of the difference.

The bloating of academic bureaucracy is another cause. While the number of new students and faculty hasn’t grown much, non-academic staff at colleges and universities has ballooned—more than doubling in a generation. What’s more, compensation for administrators is often far higher than faculty pay.

One painful result of the tuition epidemic is that U.S. student debt is now at an all-time peak of more than a trillion dollars. According to the Project on Student Debt, the average college student graduating in 2015 carried debt of about $30,000; in 2004, the average was a more manageable $18,550.

Under stress, most schools have opted for cost-cutting. Perhaps most troubling, they have replaced full-time tenured faculty with cheaper adjuncts. According to a 2013 report, full-time tenured and tenure-track positions at U.S. universities fell from 78 percent of teaching staff in 1969 to just 33 percent in 2009. Over the same period, the share of non-tenured positions in the teaching staff more than tripled. On average, adjuncts earn just a third of what tenured faculty make and have no job security and few benefits; in a sense, they were the bellwethers of the gig economy.

Despite these and other cuts, tuitions continued to climb. So when the first MOOCs—massive open online courses—appeared on the scene, they seemed like the answer to a prayer. As described in my previous post, MOOCs made it possible to reach many thousands of students at a time, so long as they had a computer and a robust Internet connection capable of video streaming. Sebastian Thrun, a MOOC pioneer, famously suggested that the technology would so disrupt education that in 50 years, only 10 institutions of higher learning would remain.

System captures Mach cone from laser pulse

Superfast Camera Sees Shock Wave From Light

A camera system that captures a snapshot of overlapping light waves in a tiny fraction of a second could lead to new methods for imaging, allowing scientists to watch the brain’s neurons interacting or see neutrinos colliding with matter.
