Tech Talk

Why U.S. Companies Will Win Wearables

Wearables seem to be everywhere. Whether it’s the wearables of the near future, like smart glasses, the even nearer future—smartwatches—or the ones most widely available today—activity monitors—it’s clear that the next great wave of consumer electronics will be worn on the human body. Ubiquitous gadgets studding clothing, worn on the wrist, covering the eyes, tattooed on the skin, or placed in the ear: that’s where we’re heading, and we’re moving there fast.

While other regions of the world may be leading innovation in telecommunications, energy, automobiles, and biomedicine, U.S. companies, big and small, are the main innovators of wearable technology.

That’s because the innovation ecosystem in the U.S. supports entrepreneurs with big ideas. That’s been true for now-massive companies such as Google and Apple, and it’s true for a slew of emerging companies that are developing wearable technologies. One reason is schools such as MIT, Berkeley, Stanford, and Carnegie Mellon, which serve as incubators for spin-off companies that benefit from the schools' resources during the development phase. Budding engineers can create new products with little financial risk.

These academic spinoffs in the U.S., unlike their counterparts in other countries, can often retain the intellectual property rights to technology developed in university labs. The ability to retain IP can make entrepreneurs more willing to make the leap from the lab to commercialization and may also increase the value of a company to potential investors.

Once a U.S. spinoff begins the commercialization process, it typically looks not to government organizations for funding but to VCs, angel investors, and crowdfunding. It also can look to the public stock market; initial public offerings for emerging companies are on the upswing, thanks to the Jumpstart Our Business Startups (JOBS) Act. The goal of the JOBS Act, which was signed into law on 5 April 2012, was to streamline access to capital for companies of all sizes.

“After the dot-com bust, the U.S. government put in place a number of new rules and regulations that were very important, but had the unintended consequence of pricing smaller companies out of the IPO process, which is key to capital formation for emerging technology companies. The passage of the JOBS Act in April 2012 reduced the regulatory burden on smaller IPO candidates, and the IPO market has responded with two straight strong years,” says Livingston Securities Chairman and CEO Scott Livingston.

He pointed out that the number of IPOs in 2012, the year that President Obama signed the JOBS Act into law, was the highest since 2006. “And by September 2013,” said Livingston, “the number of IPOs had already surpassed all of 2012.”

No wearables company has reached an IPO yet, but there is a good chance that we will see one within the next few years.

Meanwhile, crowdfunding seems to have worked particularly well in wearables, where it has brought individual investors and entrepreneurs together, sometimes at breakneck speed. Just look at Pebble. As the most successful Kickstarter project to date, Pebble raised more than $10.2 million for its smartwatches—although its target was only $100K. And consider Oculus VR. Back in 2012, Oculus set a Kickstarter goal of $250K for its Oculus Rift developer kit, a virtual reality headset. The company raised more than $2 million and recently sold itself to Facebook for $2 billion.

European start-ups, for the most part, don't benefit from the diversity of funding sources and the speed of access to money that U.S. companies enjoy. In Europe, innovators instead often grapple for government funding. Jumping through bureaucratic hoops takes time. It can also dampen the entrepreneurial spirit for emerging consumer technologies such as wearables. To be fair, Europe has far surpassed U.S. achievements in areas like sustainable energy, automotive design, and biomedical engineering—fields that often require more infrastructure and can afford longer design-to-delivery windows.

While government funding is also an important part of the Asian innovation engine, wearables start-ups are not the companies that are receiving the funding. So it will be the giants of the consumer-electronics industry in Asia—companies such as Sony, Samsung and LG—that will influence the development of wearable technology there. But China could prove to be an exception. With its booming entrepreneurialism in the consumer electronics industry (as well as the wealth generated by its rising creative class), it’s likely that at least some future wearables will not just be manufactured in China, but designed there, as well.

This is why the prospects for U.S. wearables makers have been looking pretty rosy, especially in the fitness/activity wristband specialty. Jawbone, a San Francisco-based, VC-funded company that makes the UP wristband, recently purchased BodyMedia, a Pittsburgh-based start-up that spun out of Carnegie Mellon. In March 2014, Intel acquired Basis Science, a privately held company located in San Francisco, for its Basis bands. And, completing the power-triad of wristband developers, another San Francisco-based company, Fitbit, stands successfully alone, selling both fitness/activity wristbands—Flex—and a cute little wireless activity tracker, Zip, which fits in a pocket or a bra.

And U.S. innovation goes beyond wristbands to other types of body-worn devices. Lumo BodyTech, a Stanford spinoff, offers two posture-saving applications: Lumo Back and Lumo Lift. And how did the company first get started? Again, Kickstarter, back in 2012. The company now has venture capital investment.

As executive director of a global trade association focused on micro-electromechanical systems (MEMS) and sensors, I care deeply about wearables because they would not exist without technologies like accelerometers, gyros, and magnetometers. And like many other industry types, I am eagerly anticipating “flexible electronics”—MEMS-based technologies with the potential to transform not just wearables but all kinds of electronic products.


Photo: MC10
MC10's Biostamp can measure physiological parameters.

I particularly have my eye on MC10, a Cambridge, Mass., start-up with origins at the University of Illinois, Urbana-Champaign. MC10’s technology platform features a “bendable, stretchable, body-compatible electronic system” called the Biostamp—a soft, sensing sticker that can be placed anywhere on the body to measure a variety of physiological parameters. MC10 is targeting wearable applications in the sports and fitness, consumer health, and regulated medical industries. The company launched its first commercial product, the Reebok Checklight head-impact indicator, last year and will launch the first of its Biostamp applications in 2015.

It is unequivocally true that technology innovation takes place all over the world, but when it comes to wearables and to some of the technology components and platforms that make wearables what they are, U.S.-based companies are ahead and will continue to lead the way.

Karen Lightman is executive director of MEMS Industry Group. She works with companies developing component-level technologies that are used to make wearables and with companies that create wearable products for consumers.

There Would Be No Silicon Valley Without New Jersey

In his recent “Reflections” column in IEEE Spectrum, Robert W. Lucky relates a little-known (in fact, unknown to us) episode from the 1960s, wherein Bell Labs executives brought in Silicon Valley doyen Fred Terman to see if the success of Silicon Valley could be replicated in New Jersey.

That effort went nowhere, and Lucky concludes that New Jersey suffered from too much geographical isolation, a lack of social focus, and, most damning (at least according to Terman), no Stanford, with its “culture of engineering innovation.”

We believe that Lucky skipped over a few key facts in reaching his conclusions. First, despite the media’s fixation on Silicon Valley, a large and thriving technology corridor stretches from Boston to Philadelphia with its fulcrum in New Jersey. This is known as the Northeast Tech Corridor (NETC), which encompasses about 30,000 square kilometers, interconnected by a robust transportation infrastructure. Silicon Valley, on the other hand, covers just 500 square kilometers, centered around Stanford.

Also worth noting is that the historical roots of innovation in the Northeast and especially New Jersey run deep. They took hold in the colonial period, were carried forward by Samuel Morse and Alfred Vail in the early 19th century, and reached a grand scale with Thomas Alva Edison and his industrial research lab in Menlo Park. Edison’s influence in turn attracted other inventors to the New Jersey region, and Bell Labs and a number of other laboratories inherited the tradition of his “invention factory.”

Silicon Valley has no such deep history. The name itself dates only from 1971 and didn’t become widely used until the 1980s. Back in the 1960s, any prominence the region enjoyed came mostly from Hewlett-Packard (known for audio oscillators and voltmeters), not from semiconductors. In New Jersey, on the other hand, Bell Labs had already invented the transistor (among many other developments too numerous to mention) and RCA had pioneered electronic television. Even as Silicon Valley was getting started, the NETC in general, and New Jersey in particular, continued to lead the country, if not the world, in high-tech innovation.

If we accept the argument that Silicon Valley did eventually, by the 1980s, eclipse New Jersey as the center of high tech, how then do we account for New Jersey’s early and continued success? And how do we explain its ultimate failure to keep up with the West Coast?

It is hard to evaluate the geography argument. Easy access to New York City (and therefore the sources of capital) through efficient public transportation would seem to be a plus, but Lucky suggests that it was a two-edged sword, so we will leave it at that.

But what about Terman’s reported main point about Stanford? The engineering institutions in New Jersey were certainly strong enough in the 1960s to promote growth—and their faculties were respected for their entrepreneurship. Stevens Institute of Technology, in Hoboken, was founded in 1870 with the first science-based engineering curriculum in the United States. In the 1960s it served as the main testing facility for NASA’s Apollo Program. Newark College of Engineering (today the New Jersey Institute of Technology) opened in 1885 as the public technical school for the state. By the early 1960s it was granting doctoral degrees across the full range of engineering disciplines. Rutgers, the main state university, also had a strong engineering school. Although Princeton University is perhaps better known for science than engineering, it is one of the oldest and wealthiest universities in the United States, easily rivaling Stanford in academic caliber. And we won’t even mention the many institutions in nearby New York and Philadelphia (where the computer age was heralded by the development of ENIAC at the University of Pennsylvania).

It may be true that fewer technology start-ups emerged in New Jersey than in Silicon Valley—New Jersey’s engineers tended to cluster in the large laboratories such as Bell Labs, RCA, ITT Labs, and Philco. However, a number of these engineers, after learning their trade in the corporate labs, eventually left to start their own companies, though not necessarily in New Jersey. For example, Stevens graduate Eugene McDermott cut his teeth at the Western Electric research laboratory (predecessor to Bell Labs) and later cofounded Texas Instruments.

We will close with perhaps the most famous exit: William Shockley, who left Bell Labs in 1955, seven years after conceiving the junction transistor, and then formed Shockley Semiconductor in Mountain View, Calif., the region’s first silicon electronics laboratory. Thus, the “silicon” of Silicon Valley actually came from New Jersey.

A. Michael Noll, a New Jersey native, is a professor emeritus at the Annenberg School of the University of Southern California. Starting in the 1960s, he did basic research at Bell Labs, in Murray Hill, N.J., on 3-D computer graphics and animation, speech signal processing, and computer-generated art, among other topics. Michael N. Geselowitz, senior director of the IEEE History Center, leads the IEEE’s efforts to preserve and disseminate the proud heritage of IEEE technologies.

Goal Detection Technology for the Other Football

Last Sunday, during the World Cup game between France and Honduras, FIFA's goal-line detection technology passed its first field test. Early in the second half, the French striker Karim Benzema nailed a shot right into the post that sent the ball blasting back across the goal line and bobbling into the hands of the Honduran goalkeeper. The crowd gasped and then went silent. It wasn't immediately apparent whether the ball had crossed the line.

Plays like this are decided by a lone referee on the field, and in the past, whether he made the right call depended entirely on whether he had a good line of sight through a throng of frenzied athletes (this 2010 World Cup goal goes to show that referees don't always get it right). Now, however, the refs are getting backup from a system of high-speed cameras that follow the ball in 3-D and send an alert when it crosses the goal line. Seconds after Benzema made his attack, the ref received a vibrating signal on his smartwatch, a point went up on the board, and the crowd broke into a jubilant chant.

FIFA's goal-line technology has improved the accuracy of scoring decisions to such a degree that one has to wonder whether other sports besides soccer might benefit from similar ball-tracking systems. Now, it seems, American football may provide the next test case. Engineers at North Carolina State University and Carnegie Mellon have teamed up with Disney Research to design a sensor-based system for determining the location and orientation of a football on the field and are beginning to test its accuracy. They describe the technology in IEEE Antennas and Propagation Magazine.

American football presents certain challenges that do not exist in soccer. Sometimes, the most important calls a ref can make happen on plays where the ball is buried beneath a scrum of sweaty, 150-kilogram men. In such cases, a visual detection system, such as the one now being used at the World Cup, would be useless.

Instead, the NC State researchers are opting to send radio signals from transmitters hidden within the layers of the football and track them with receivers positioned around the field. According to the paper, engineers have tried similar approaches in the past but ran into trouble because they used high-frequency waves, which are easily absorbed by the human body; those earlier attempts failed to produce a clear enough signal.

This time around, the engineers are using extremely low frequency radio waves (with wavelengths hundreds of meters long) that pass through the human body unmolested. The design they came up with requires that a transmitter and battery, together weighing less than 30 grams, be embedded in the football. Eight receivers placed around the perimeter of the field then calculate the location and orientation of the ball in 3-D.
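The paper's actual localization math isn't reproduced here, but the core step, recovering a transmitter's position from measurements made at receivers whose positions are known, can be sketched as a nonlinear least-squares fit. Everything in the snippet below (the 1/r^3 field model, the receiver coordinates, the noise level) is an assumption for illustration, not a detail from the study.

# Illustrative sketch only: estimate a transmitter's position from signal
# strengths measured at fixed receivers, assuming a simple 1/r^3 near-field
# falloff. The real system also recovers the ball's orientation and corrects
# for ground eddy currents; none of that is modeled here.
import numpy as np
from scipy.optimize import least_squares

# Eight receivers around a hypothetical 100 m x 50 m field (coordinates in meters).
receivers = np.array([
    [0, 0, 2], [50, 0, 2], [100, 0, 2], [100, 25, 2],
    [100, 50, 2], [50, 50, 2], [0, 50, 2], [0, 25, 2],
], dtype=float)

def predicted_signal(ball_pos):
    """Modeled signal strength at each receiver for a ball at ball_pos."""
    r = np.linalg.norm(receivers - ball_pos, axis=1)
    return 1.0 / r**3  # arbitrary units; assumed near-field falloff

# Simulate one noisy measurement, then solve for the position that best explains it.
true_pos = np.array([62.0, 31.0, 0.3])
measured = predicted_signal(true_pos) * (1 + 0.01 * np.random.randn(len(receivers)))

fit = least_squares(lambda p: predicted_signal(p) - measured, x0=[50.0, 25.0, 1.0])
print("estimated ball position (meters):", fit.x.round(2))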

Unfortunately, using low-frequency waves solves one problem while creating another. This time, the interference comes from the ground, which absorbs the signal and then re-emits it through induced eddy currents. To reduce the noise from this secondary signal, the engineers rigged the field with a set of optical instruments. While the receivers keep tabs on the signal transmitted from inside the football, the optical instruments collect information about the position of the receivers. That data can then be used to scrub out the signals coming from the ground.

According to the researchers, their technique must be accurate to within half the length of a football—about 14 centimeters—in order to be considered reliable. Right now they report having it down to 77 centimeters.

For those of you wondering how the performance of a football might change after it's been stuffed with sensors, David Ricketts, one of the authors of the study, says that footballs are actually rather asymmetrical objects to begin with. "The American football is already unbalanced," he explained by email. "The laces cause a weight mis-distribution, so they counterweight it to compensate. The added weight of the transmitter would be handled in the same way."

If you'd like to see how the tracking system would work in a real game, the group has posted a video of their trials here.

Editor's note: The game referenced in the first sentence was between France and Honduras, not Holland as originally reported.

Algorithms Outperform Diabetics at Blood Sugar Control

Doing math or any other mental activity when you have low blood sugar is a recipe for errors. For sufferers of Type 1 diabetes, who must track and adjust their own blood sugar levels, it can mean more than an inability to focus. It can be fatal. Yet a small real-world study announced Sunday at a meeting of the American Diabetes Association (ADA) in San Francisco offers hope that software could monitor blood sugar levels and adjust insulin levels for them, even outside of controlled settings such as hospitals.

The study's authors conducted parallel 10-day trials with 52 participants. One trial tested adolescents at a summer camp; the other involved adults with only minor restrictions on their activity and diet.


Quantum Cascade Laser at the Heart of Spectrometer on a Chip

Mid-infrared spectrometry can assist chemical sensing, environmental monitoring, and disease diagnosis. Conventional mid-infrared spectrometers are highly developed, but sometimes bulky, assemblies: a broadband thermal emitter, an interferometer, a separate broadband detector, external optics, and a sample cell, feeding into a Fourier-transform analyzer that breaks the signal down and measures absorption at a variety of frequencies.

Sometimes, though, you need quick answers when far from the laboratory. A research team at the Technical University of Vienna (TU Wien) is designing a single-chip device to meet what the leader of the team calls the “challenging task” of making “mid-infrared spectroscopy accessible to remote areas, where conventional power supply and laboratory equipment” are rare or nonexistent.

Benedikt Schwarz, lab chief Gottfried Strasser, and their colleagues at the university’s Institute for Solid State Electronics and Center for Micro- and Nanostructures have built an easy-to-fabricate lab-on-a-chip that integrates a mid-infrared (6.5-micrometer wavelength) laser, a plasmonic waveguide (which also functions as a sample chamber), and a detector into a device that can identify components in gases or liquids by their absorption signatures.

The laser and detector are, in fact, essentially the same device, a bi-functional quantum cascade laser/detector (QCLD). The QCLD can be switched from emitter to receiver by changing the applied bias, so the same fabrication process can produce both structures at the same time.  

A conventional laser emits light when an electron crosses a band gap in a relatively homogeneous semiconductor to annihilate a hole—one hole, one photon. The quantum cascade device instead consists of nanometers-thin alternating layers of two materials (in this case, 26 layers of indium aluminum arsenide alternating with indium gallium arsenide). In the QCLD, electrons can make an energy transition, emit a photon, and then tunnel into the next layer to emit another, creating the cascade and delivering a stronger signal.
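A quick back-of-the-envelope calculation (mine, not the paper's) shows why that layer engineering matters. A 6.5-micrometer photon carries an energy of only

E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV{\cdot}nm}}{6500\ \mathrm{nm}} \approx 0.19\ \mathrm{eV},

which is well below the band gaps of the III-V materials in the stack, so a mid-infrared emitter has to rely on these engineered steps between quantum-well subbands rather than on a band-to-band transition.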

The TU Wien group has published a good deal about the construction and control of quantum cascade lasers (see contributions to Applied Physics Letters and the IEEE Conference on Lasers and Electro-Optics), but the current work, reported in Nature Communications, focuses on the waveguide “sample chamber.”

The waveguide, a stripe of silicon nitride laid down (by molecular beam epitaxy) between laser and detector, captures the laser beam as a plasmon, a wave that is confined in a restricted region stretching from just below the surface to a small distance above it. About 96 percent of the beam’s energy is confined in a layer above the stripe surface, and varying the stripe’s dimensions changes the parameters of the plasmon.

Schwarz and his colleagues evaluated stripes 15 micrometers wide, up to 500 nanometers thick, and 5, 50, and 100 micrometers long. In particular, the thicker the SiN layer, the more tightly the beam is constrained to the area above the stripe’s surface: while the beam might reach up to a height of 50 micrometers on a raw gold surface, it is compressed into a layer about one-tenth that thick over a 200-nanometer-thick bed of silicon nitride.

The waveguide tremendously boosts the coupling efficiency—the ratio of the energy received at the detector to the energy emitted by the laser. Without a waveguide, a laser and detector separated by 100 micrometers of raw gold have a coupling efficiency of almost zero. With the waveguide, the coupling efficiency can climb to about 40 percent.

This compression of the laser beam creates a virtual sample cell—one without walls, open to the environment, and operating at room temperature. A drop of fluid or a breath of gas can be sampled by bursts of 6.5-micrometer radiation, each 100 nanoseconds long, repeated 5000 times a second.

By gauging the amount of energy the sample absorbs, the single-chip spectrometer can measure its composition. The researchers tested their device with single drops of ethanol-water solution. In about a tenth of a second (since resolution increases with the square root of the sampling interval), the device could correctly gauge the ethanol concentration with a resolution of about 0.06 percent over the wide range from 0 to 60 percent (or, as I think of it, spring water to 120 proof).
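That square-root relationship is just the ordinary statistics of averaging repeated measurements: the longer you integrate, the more pulses you average and the smaller the spread of the result. A toy illustration, with an invented single-pulse noise level rather than anything measured by the TU Wien group:

# Toy illustration of why resolution improves with the square root of the
# sampling interval: averaging N noisy pulse readings shrinks the standard
# error by a factor of sqrt(N). The single-pulse noise level is invented.
import numpy as np

rng = np.random.default_rng(seed=0)
true_absorbance = 0.42        # arbitrary "true" reading for one sample
noise_per_pulse = 0.05        # invented single-pulse noise (standard deviation)
pulses_per_second = 5000      # from the article: 100-ns bursts, 5000 per second

for interval in (0.001, 0.01, 0.1, 1.0):   # seconds of averaging
    n = int(pulses_per_second * interval)
    readings = true_absorbance + noise_per_pulse * rng.standard_normal(n)
    print(f"{interval:5.3f} s ({n:4d} pulses): mean = {readings.mean():.4f}, "
          f"expected standard error = {noise_per_pulse / np.sqrt(n):.4f}")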

With its wide dynamic range, high resolution, relatively simple and inexpensive fabrication, and single-chip robustness, the “monolithically integrated mid-infrared lab-on-a-chip” may be able to break analyses out of the lab and get them into the field.

Photo: TU Wien

U.S. Military Wants Laser-Armed Humvees to Shoot Down Drones

Laser weapons mounted aboard U.S. Navy ships and large trucks have already shown the power to shoot down flying drones during test trials. That early success has encouraged the U.S. military to fund a new effort to develop smaller versions of these anti-drone weapons that can fit light ground vehicles such as the military Humvee.


Memory Cells Built on Paper

A team based at the National Taiwan University in Taipei has used a combination of inkjet and screen printing to make small resistive RAM memory cells on paper. These are the first paper-based, nonvolatile memory devices, the team says (nonvolatile means that the device saves its data even when it's powered down). 

As Andrew Steckl outlined in his feature for IEEE Spectrum last year, paper has a lot of potential as a flexible material for printed electronics. The material is less expensive than other flexible materials, such as plastic. It boasts natural wicking properties that can be used to draw fluids into sensors. And it can be easily disposed of by shredding or burning.

Basic circuit components, such as wires, resistors, capacitors, transistors and diodes, have been printed on paper. But memory is one of the last frontiers, says graduate student Der-Hsien Lien, and it will be needed if we expect paper electronics to perform computation and store data.

Lien and his colleagues tackled the problem by setting out to build resistive RAM, or RRAM, memory cells. In this kind of memory, a layer of switching material is sandwiched between two electrodes. An applied voltage pulls ions from one of the electrodes into that layer, which lowers the cell's resistance; a voltage of the opposite polarity restores the high-resistance state.
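For readers who think in code, here is a minimal, purely illustrative model of such a bipolar cell: a sufficiently positive voltage "sets" it into a low-resistance state (read as a 1), a sufficiently negative voltage "resets" it to a high-resistance state (a 0), and small voltages merely read it. The thresholds and resistance values are invented, not taken from the Taiwan team's devices.

# Minimal illustrative model of a bipolar resistive memory (RRAM) cell.
# All numbers (thresholds, resistances) are invented, not measured values.
class RRAMCell:
    SET_V = +1.5                 # volts needed to form the conductive path
    RESET_V = -1.5               # volts needed to disrupt it again
    R_LOW, R_HIGH = 1e3, 1e6     # ohms: low resistance = "1", high = "0"

    def __init__(self):
        self.resistance = self.R_HIGH    # cell starts in the erased state

    def apply(self, voltage):
        """Apply a write or erase pulse; small voltages leave the state alone."""
        if voltage >= self.SET_V:
            self.resistance = self.R_LOW
        elif voltage <= self.RESET_V:
            self.resistance = self.R_HIGH

    def read(self, read_voltage=0.2):
        """Sense the stored bit from the current drawn at a small read voltage."""
        current = read_voltage / self.resistance
        return 1 if current > read_voltage / (10 * self.R_LOW) else 0

cell = RRAMCell()
cell.apply(+2.0)            # set: ions form a conductive path, resistance drops
assert cell.read() == 1
cell.apply(-2.0)            # reset: the path is disrupted, resistance rises
assert cell.read() == 0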

Image: Der-Hsien Lien et al.
An RRAM cell printed on paper. The layers, from bottom to top, consist of paper, carbon, titanium oxide, and silver.

The team constructed its memory cells by first using screen printing to coat paper with a layer of carbon paste that serves as the bottom electrode. An inkjet printer was then used to print lines of titanium oxide nanoparticles. After those lines had dried, the researchers formed the top electrodes by depositing small dots of silver atop the nanoparticles.

The team conducted various tests to confirm the cells could switch between states by applying a positive or negative voltage and performed reliability tests to confirm that the memory cells retained their behavior even after the paper had been bent. The results were presented last Wednesday at the Symposium on VLSI Technology in Honolulu, Hawaii. 

Lien reported memory cells as small as 50 micrometers across. These could potentially be packed together to store about 1000 bits per centimeter, which amounts, Lien says, to about 1 MB on a single side of a sheet of standard A4 paper. But the team reckons better inkjet printers, which are now capable of printing submicrometer features, could increase that memory capacity to 1 GB.

The capacity could go further still by building memory cells at the intersections of crossed lines (an approach that's dubbed crossbar memory). Crossbar structures should prove easier to stack, which means that even more cells could be packed in a given area in three dimensions, says Jr-Hau He, one of the team leaders.

The team is now looking for a partner who can help build the electronics associated with storing and reading information in the memory cells.

World Cup or World's Fair? Technology Takes Center Field at the Games in Brazil

Every four years, 32 countries from around the world send their fiercest, most battle-ready soccer players to compete in the sport's most celebrated international event: the World Cup. This year, the world has sent its finest innovations as well. The shoddy arenas in Brazil may themselves be a source of shame (during the first half of yesterday's opening match between Brazil and Croatia, about half the lights in the São Paulo stadium flickered and fizzled), but the fields and stands are brimming with technology.

Some of it is on full display. The opening kick, which is not traditionally known for being a high-tech moment, took on great significance yesterday afternoon when Juliano Pinto—a Brazilian whose athletic career ended after he was paralyzed from the waist down in a 2006 car crash—stood up from his wheelchair and knocked the ball forward while wearing a robotic exoskeleton. The contraption took its commands from a set of electrodes pasted to Pinto's scalp, which detected and deciphered faint electrical signals from his brain. The demonstration showcased the pioneering work of Miguel Nicolelis, a Brazilian neuroscientist and brain-computer interface researcher.

The other technologies being unveiled at the World Cup are certainly more subtle, but they actually have the potential to influence the outcome of games. During the 11th minute of yesterday's inaugural game, fans in São Paulo cringed and winced as they watched the ball deflect off the left toe of Brazilian defender Marcelo Vieira and land in the back of his own team's net, marking the first time in World Cup history that Brazil has scored an own goal. Even before the spectators could absorb what was happening, a new automated detection system, called GoalControl, was alerting the referees on the field that a goal had been scored. Officials at the 2014 games are wearing smartwatches that vibrate whenever a ball fully crosses the goal line. The system, which uses 14 high-speed cameras (seven pointed at each goal) to capture the ball's movement in 3-D, proved effective in a trial at the 2013 Confederations Cup, where it successfully detected all 68 goals scored.
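Once the cameras have produced a 3-D ball position, the final decision is simple geometry: the whole ball has to cross the plane of the goal line, between the posts and under the crossbar. GoalControl's actual coordinate conventions and filtering aren't public, so the sketch below is an assumed, simplified version of that last check.

# Illustrative goal-line check. Assumes a frame in meters with the goal line in
# the plane x = 0, the goal mouth spanning 0 <= y <= 7.32 and 0 <= z <= 2.44
# (standard goal dimensions), and the field of play on the x > 0 side.
BALL_RADIUS = 0.11   # meters, roughly a regulation size-5 ball

def is_goal(ball_center):
    """True only if the entire ball has crossed the line inside the goal frame."""
    x, y, z = ball_center
    fully_across = x < -BALL_RADIUS               # whole ball past the plane x = 0
    inside_frame = 0.0 <= y <= 7.32 and 0.0 <= z <= 2.44
    return fully_across and inside_frame

# Example: ball center 15 cm behind the line, mid-goal, near the ground -> goal.
print(is_goal((-0.15, 3.66, 0.2)))   # True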

And then there is all the technology that fans bring with them to the stadiums. During the 2006 World Cup, the texting, posting, and tweeting of feverish fans generated 30 gigabytes of data traffic. And that was before Instagram existed. This year, analysts are expecting a cumulative total of 12.6 terabytes.

The IEEE Standards Association has broken down all of these World Cup technologies in this comprehensive infographic:

Goings On at the North Carolina Maker Faire

Maker Faire North Carolina has been maturing. When I visited the first Maker Faire NC in 2010, there were vendors, to be sure, but it was easy enough to find average weekend tinkerers—people not associated with any company or organized group—demonstrating their techno-handiwork. That was much less true of the fifth edition, which took place last weekend at the state fairgrounds in Raleigh.

There were still many interesting things to see and do; indeed, there was a lot more than at the first gathering four years ago. This year's event included lock-picking instruction, a learn-to-solder table, and a giant battlebot arena, to name some prominent attractions. I imagine this and other Maker Faires appeal to many more people now than when they first sprang up.

Still, I couldn’t help feeling a sense of loss. Sure, the gizmos were more numerous and more polished. But they were also more predictable, dominated by things that involved robots or 3-D printing. A group conducting high-altitude balloon launches was a welcome exception. The following video should give you a sense of what I mean:

There’s no question that this was an entertaining event for the whole family. (I brought my two kids, who much enjoyed it.) But somehow it didn’t really spark any wow moments or that "I-just-have-to-build-one-of-those" feeling. I suspect the reason has something to do with the way so much of the offbeat technical tinkering of five years ago has since become almost mainstream.

The Best ROI? A CS Degree from Carnegie Mellon

Engineering and computer science are tough. And tuition at top engineering schools can cost a pretty penny. For those trying to pick a program with good return-on-investment, a recent survey of schools that produce the top-earning engineering/CS graduates could come in handy.

Computer science grads from Carnegie Mellon University make the highest reported starting salaries, averaging $89,832, according to the survey by the higher-education unit of San Francisco-based online personal finance service NerdWallet. Second and third on the list are grads of the California Institute of Technology and Stanford University’s School of Engineering, with average starting salaries of $83,750 and $74,467, respectively.

NerdWallet looked at the top 100 national universities plus the top 30 liberal arts schools from US News & World Report’s list of top colleges. It averaged starting salaries for the classes of 2011, 2012 and 2013 for each school.

As this article on Forbes points out, the survey is by no means comprehensive, since many schools, including Harvard and Yale, don’t release salary data. But it is one of the few available reports on the best-paying engineering schools.

Engineers are, of course, consistently big earners (some might say overpaid) in salary reports, with no shortage of jobs. Engineering and computer science graduates fill all but one spot on the list of the top 10 paid majors for the class of 2014 in the latest Salary Survey report by the National Association of Colleges and Employers. That's despite starting salaries for engineers rising only 0.3 percent between 2013 and 2014, compared with 3.7 percent for health science majors.

Bottom line: when it comes to earning well, you can’t go wrong by choosing engineering.
