What tech skills will earn software engineers and other technology professionals the highest salaries? Recruitment firm Dice set out to answer that question in its 2016 Salary Survey, released in late January. The survey showed that the highest-paid tech professionals, with an average salary of $128,958, had experience with SAP’s HANA platform; next came the MapReduce programming model ($125,009) and the open-source platform Cloud Foundry ($124,038). Both HANA and MapReduce salaries had fallen slightly since 2015.
Three-dimensional printers, long a tool for makers, are aiming for the home and classroom market. Two of the early entrants, XYZ Printing and New Matter, were brave enough to allow me to borrow their kid-friendly models for several months to check them out: For XYZ, that’s the daVinci miniMaker; for New Matter, it’s the Mod-t. These under-$300 gadgets are said to be aimed at tweens and up (though the primary colors of the miniMaker seemed designed to appeal to far younger children). “Up” includes non-tech-savvy teachers and other adults interested in 3D printing who aren’t hard-core makers.
It’s a good time to be a software engineer; so says online recruitment firm Indeed.com. The company released its list of Best Jobs for 2017 today; software engineering and development posts dominated the top 25, and grabbed 7 out of the top 10 slots on the list. Indeed looked at both salary and demand (defined by number of job postings and growth in that number) in creating the rankings. It considered only jobs with a salary of at least $70,000 that had consistent growth in share of postings from 2013 to 2016.
Full stack developer, which boasts an average base salary of $110,770, came in at number one, with 641 postings per million in 2016 (a 122 percent growth in listings since 2013). But the highest salary honor goes to machine learning engineer, with an average base salary of $134,306.
A few non-software engineering jobs and some management positions also made it into the top 25, along with one completely unrelated career that’s always in demand: registered nurse, which came in at number 20. See details on the top ten overall jobs and top ten paying jobs below.
Who’s the next Jon Rubinstein? For millennials, that name might not ring a bell. Rubinstein was the “podfather”: as head of hardware engineering at Apple he launched and ran the project that became the iPod, the product credited with turning Apple from a computer company into a consumer electronics company.
Facebook, it seems, would very much like to make a similar leap into consumer hardware. So it needs lots of good engineers—including an EE “podfather”. Right now, according to Business Insider, the company has a number of hardware projects in development in its secretive Building 8, including augmented reality glasses and, potentially, a consumer drone.
I’m at a Whole Foods in Palo Alto with Dror Sharon, cofounder and CEO of Consumer Physics, based in San Francisco and Israel. Sharon is holding his smartphone and a tiny handheld device he calls SCiO, which is about the size of a Tic Tac box. We are browsing around the produce department, checking out the Brix level of various items. The Brix number represents the sugar content of a solution and, for fruits, is an indicator of whether or not a particular fruit has much flavor. The tomatoes, according to the SCiO’s accompanying smartphone app, are horrible; not a big surprise in March. The apples are mixed; there is only one variety Sharon would buy right now. The mangos, he proclaims, are just perfect, and he contemplates filling a bag before we go.
We move on to the dairy case, where the labels of cellophane-wrapped cheeses provide only price and name. Sharon’s smartphone app pops up all sorts of additional information as he points the SCiO gadget at different chunks (still in their wrapping), including fat content, calories per gram, and protein content.
On the way to Whole Foods, we stopped outside a restaurant where two women were having brunch, and asked them if we could scan their food before they ate it. Sharon told them the strawberries would be excellent (the women agreed they were), but that the whipped cream was abnormally sweet: it had so much sugar in it that it wasn’t recognizable as dairy (it was).
It was all pretty magical, pointing a gadget at food and getting an instant analysis. To be fair, I can’t verify the accuracy of what I was seeing on the screen; I didn’t take the fruits and cheeses back to a laboratory to confirm the analysis using more traditional technology. But it certainly seemed real—real enough that I would be pretty excited to have this kind of technology built into my smartphone, given that I already have my phone out when I’m grocery shopping, scanning shelf tags to download coupons. And Sharon promises it is indeed coming into phones—as soon as the third quarter of this year in China, the fourth quarter in the United States.
Here’s how SCiO works—and why it exists.
The gadget uses standard infrared spectroscopy; it measures the absorption of infrared light. It may not be as accurate as a benchtop spectrometer used in a laboratory environment, but Sharon says it makes up for this with its algorithms. The user starts out by simplifying the problem a bit, identifying the category of the item to be examined—it’s not “What fruit is this?” but “This is an apple; is it any good?” Consumer Physics’ cloud-based software then taps into its knowledge base: for an apple, it defines “good” as “sweet” (hence the Brix measurement), and considers an apple’s typical range of sweetness based on thousands of scans. A graphic on the phone then places the apple on a quality range.
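In code, that category-based scoring step might look something like the sketch below, assuming a knowledge base of typical Brix ranges per category. The ranges and function names here are invented for illustration; they are not Consumer Physics’ actual data or software.

```python
# A minimal sketch of category-based quality scoring: place a Brix
# reading on the category's typical sweetness range.
# The Brix ranges below are illustrative assumptions only.

TYPICAL_BRIX = {
    # (low, high) in degrees Brix, hypothetically learned from many scans
    "apple": (10.0, 16.0),
    "tomato": (4.0, 8.0),
    "mango": (12.0, 20.0),
}

def quality_score(category: str, brix: float) -> float:
    """Return 0.0 (worst) to 1.0 (best) for this reading in its category."""
    low, high = TYPICAL_BRIX[category]
    # Clamp to the known range, then normalize to a 0-1 quality scale.
    clamped = max(low, min(high, brix))
    return (clamped - low) / (high - low)

print(quality_score("apple", 14.5))   # → 0.75, a sweet apple
print(quality_score("tomato", 4.2))   # near zero: a March tomato
```

The key design point is that knowing the category up front turns an open-ended identification problem into a much easier one-dimensional ranking.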
Besides having data on most fruits and vegetables, the system also knows about dairy products; for those, it provides information on calories and fat content. And it knows about the cocoa content of chocolate, the amount of alcohol in drinks, and the protein, fat, and calories in raw fish, poultry, beef, and pork. And while, to date, the focus has been on food, Sharon stresses that the technology works with all sorts of materials. The company has started holding workshops for people who want to develop their own databases.
Sharon had been wanting this kind of gadget for a long time before he finally set out to build one. He grew up on a farm in Israel; he was used to eating produce that hadn’t been shipped further than across the property. So, when he moved to Massachusetts for business school at MIT (his bachelor’s degree is in electrical engineering), he was surprised by just how tasteless he found the produce at local groceries. “The food just didn’t taste the same. And when I saw that I was buying grapes from Chile, I was sure something was not right about them.”
He decided that he should get himself something to determine whether or not the food in the stores was any good before he bought it, so he logged onto Amazon and searched for such a gadget. He didn’t find one. Disappointed, he resigned himself to occasionally buying tasteless produce or traveling 30 miles to a grocer he discovered that he could trust.
But about five years later, in 2010, after a few years working in the U.S. and then moving back to Israel, he came back to the idea. There ought to be a scanner that could give you useful information about the food you are about to buy, he insisted. He teamed up with Damian Goldring, a friend from his undergraduate days with a PhD in silicon photonics, and the two started investigating sensing technologies that, potentially, could be built into a phone. They landed on infrared spectrometry, and, in 2011, started Consumer Physics. In mid-2012, they rented one of those expensive, luggable, commercial spectrometers for a day and demonstrated to a large cellular service provider that the technology could be used to analyze food, doing a demo on chocolate mixtures that looked the same, but had different substances mixed in, like regular butter and peanut butter. “We’re going to put this into a phone,” Sharon said. (The company didn’t fund them.)
Sharon and Goldring may not have convinced that company, but they had convinced themselves, and began working on the technology, first on their own dime, and then with a little money from angel investors and crowd-sourced funding from OurCrowd. In early 2014, they were convinced enough that they could deliver the technology as a small Bluetooth peripheral—not inside a phone quite yet, but pretty close—to launch a Kickstarter campaign, pitching a $200 portable infrared spectrometer. Some 13,000 people signed up, ponying up about $2.7 million.
The road from Kickstarter funding to shipped product was not exactly smooth sailing. In September 2016, we reported that only 5000 of the Kickstarter backers had received products, far later than originally estimated, and that many of the remaining backers were angry. To make things worse, the backers could no longer communicate with the company via Kickstarter: the campaign page had been taken down in a trademark dispute over the name “SCiO”.
What happened? Sharon says the delays were due to manufacturing challenges, as well as a redesign to improve sensitivity, resistance to ambient light, and penetration depth. And the company has now fulfilled almost all of its Kickstarter orders, with the exception of customers who haven’t yet provided shipping addresses, have unique shipping requirements, or are choosing to wait for a Special Edition version of the gadget—that’s fewer than 10 percent of the backers, Sharon says.
But while the Kickstarter rollout was more than normally bumpy, the company’s efforts to get venture funding have borne, well, fruit. After picking up some funding from angel investors and people using crowdfunding platform OurCrowd, Consumer Physics closed a round of venture investment led by Khosla Ventures. To date, Sharon said, funding totals over $25 million.
The company also lined up some critical partnerships: with Analog Devices, which worked with the company to shrink the sensor package into something that will easily fit into smartphones and is manufacturing this version of the device; and with Chinese phone manufacturer Changhong, which will be incorporating the technology in the Changhong H2 smartphone starting in China in the third quarter of this year and in the U.S. towards the end of 2017. Consumers in China, Sharon points out, are particularly interested in checking food safety, given the history of problems with the food supply. Sharon hopes other smartphone manufacturers will follow, making scanning food with a phone as common a practice as photographing it.
Consumer Physics now has about 100 employees, with corporate offices in San Francisco, a sales team based in the Midwestern United States, and a development team in Israel. Dozens of people are scanning food 24/7, Sharon said, to expand the range of foods that can be analyzed and to improve the accuracy of the analysis.
While the initial applications surround food, Sharon says that the technology is not just for checking out food freshness and nutritional information; it’s good at analyzing body fat, and distinguishing real pharmaceuticals from their fake counterparts. “We’ve done a demo that distinguishes real Viagra from fake Viagra,” says Sharon. “That’s the most commonly counterfeited drug.”
Consumer Physics has, to date, shipped more than 3000 developer kits, and is hoping some interesting consumer applications will emerge. One such application in the works, by French company Terraillon, Sharon said, is a kitchen scale intended for diabetics that uses SCiO’s analysis to give users accurate information about the protein and carbohydrate content of the food they are about to eat. The company is also working directly with industrial partners, in particular with those developing tools for digital agriculture.
About a year ago I met two Silicon Valley entrepreneurs named Andrew Radin, whose matching names generated competition over an Internet domain, a connection, and, eventually, a company. The startup is called twoXAR, an abbreviation of “two times Andrew Radin.” The pair presented an ambitious agenda: they aimed to revolutionize drug discovery with an algorithm that could mine data sets to identify the most promising candidates for an effective drug, allowing pharmaceutical companies to fast-track only the most likely prospects into testing.
Our platform does not use molecular modeling techniques. Instead, it uses twoXAR-developed AI-based algorithms trained on large and diverse sets of real world biomedical data about diseases and drugs to predict which molecules might be most effective. These biomedical data include gene expression measurements, protein interaction networks, and clinical records.
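The company hasn’t published its algorithms, but one long-standing idea in this space—ranking drugs by how strongly they reverse a disease’s gene-expression signature—gives a flavor of how such data can be mined. Below is a toy sketch of that general technique, with entirely fabricated genes, drugs, and effects; it is not twoXAR’s actual method.

```python
# Toy signature-reversal ranking: a drug scores higher the more it
# pushes disease-perturbed genes in the opposite direction.
# All genes, drugs, and values below are fabricated for illustration.

# Disease signature: genes up-regulated (+1) or down-regulated (-1).
disease = {"GENE_A": +1, "GENE_B": -1, "GENE_C": +1}

# Each candidate drug's measured effect on the same genes.
drugs = {
    "drug_X": {"GENE_A": -1, "GENE_B": +1, "GENE_C": -1},  # reverses the disease
    "drug_Y": {"GENE_A": +1, "GENE_B": -1, "GENE_C": +1},  # mimics the disease
    "drug_Z": {"GENE_A": -1, "GENE_B": -1, "GENE_C": 0},   # mixed effect
}

def reversal_score(disease_sig, drug_sig):
    """Positive when the drug opposes the disease signature gene by gene."""
    return -sum(disease_sig[g] * drug_sig.get(g, 0) for g in disease_sig)

ranked = sorted(drugs, key=lambda d: reversal_score(disease, drugs[d]),
                reverse=True)
print(ranked)  # drug_X ranks first: it most strongly reverses the signature
```

A real system would combine many such signals (protein networks, clinical records) and handle noisy, high-dimensional data, but the ranking-by-evidence structure is the same.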
When I first met the two Andrews, they indicated that they’d had interest from several academic and pharmaceutical organizations, but had only one partnership to announce: with the Department of Dermatology at Stanford’s School of Medicine, to identify drug candidates targeting lymphatic malformation, epidermolysis bullosa simplex (EBS), and other rare disorders.
What a difference a year makes. Last month, twoXAR announced a partnership with Santen Inc., the U.S. subsidiary of Japanese ophthalmology company Santen Pharmaceutical, to collaborate on identifying new drug candidates for the treatment of glaucoma. Santen will have the exclusive right to commercialize drugs resulting from the work; twoXAR will share in any profits. (The financial details have not been released.)
People are just now getting comfortable with the idea that data from many electronic gadgets they use flies up to the cloud. But going forward, much of that data will stick closer to Earth, processed in hardware that lives at the so-called edge—for example, inside security cameras or drones.
That’s why Nvidia, the processor company whose graphics processing units (GPUs) are powering much of the boom in deep learning, is now focused on the edge. Deepu Talla, vice president and general manager of the company’s Tegra business unit, says bringing AI technology to the edge will make a new class of intelligent machines possible. “These devices will enable intelligent video analytics that keep our cities smarter and safer, new kinds of robots that optimize manufacturing, and new collaboration that makes long-distance work more efficient,” he said in a statement.
Why the move to the edge? At a press event held Tuesday in San Francisco, Talla gave four main reasons: bandwidth, latency, privacy, and availability. Bandwidth is becoming an issue for cloud processing, he indicated, particularly for video, because cameras in video applications such as public safety are moving to 4K resolution and increasing in numbers. “By 2020, there will be 1 billion cameras in the world doing public safety and streaming data,” he said. “There’s not enough upstream bandwidth available to send all this to the cloud.” So, processing at the edge will be an absolute necessity.
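Talla’s bandwidth claim is easy to sanity-check with back-of-the-envelope arithmetic. The per-camera bitrate below is my assumption (a 4K H.264 stream commonly runs in the tens of megabits per second); the camera count is his projection.

```python
# Rough aggregate upstream bandwidth if every camera streamed to the cloud.
# The 20 Mbit/s per-camera figure is an assumed 4K stream rate.

cameras = 1_000_000_000        # 1 billion public-safety cameras by 2020
bitrate_mbps = 20              # assumed bitrate per 4K camera, Mbit/s

total_tbps = cameras * bitrate_mbps / 1_000_000  # convert Mbit/s to Tbit/s
print(f"{total_tbps:,.0f} Tbit/s")  # → 20,000 Tbit/s (20 petabits per second)
```

Even at a more conservative bitrate, the aggregate lands in the petabit-per-second range, which supports the argument that most of that video has to be analyzed where it is captured.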
Latency, he said, becomes an issue in robotics and self-driving cars, applications in which decisions have to be made with lightning speed. Privacy, of course, is easier to protect when data isn’t moving around. And availability of the cloud, Talla pointed out, is an issue in many parts of the world where communications are limited.
“We will see AI transferring to the edge,” he said, with future intelligent applications using a combination of edge and cloud processing.
Nvidia, of course, wasn’t painting this glowing picture of edge computing without some self-interest. At the event, the company announced its new edge-processing platform, the Nvidia Jetson TX2. This credit card–size module is a plug-in replacement for the company’s Jetson TX1, designed for embedded computing. Depending on how it is applied, it can either run at twice the speed of its predecessor or use half the power. Detailed specs are here. The developer kit costs $600, $300 for educators; the production version will sell for $400.
Developers, showing off their work at the launch event, were happy to point out how they are using or intend to use internal AI processing at the edge. A few examples:
Cisco demonstrated a new collaboration device that will work with its Spark Board system and use AI to recognize people in the room, automatically select a field of view that emphasizes participants instead of empty chairs and adjust in response to people coming and going, and zoom in on people speaking.
Artec, a company with impressive 3D scanning technology that I’ve previously covered, showed a new scanner in development that extracts geometry and color information and stitches together a 3D file in real time; its previous scanners needed to be connected to a computer.
Teal Drones showed off a $1300 smart drone, shipping in three months, that can understand and react to what its cameras are seeing. Bob Miles, product and project manager, says putting AI on board will distinguish his drone from competitors. “My father, a farmer in Australia, spends about a third of his time counting cattle,” Miles told me. He thinks a smart drone could do that for him. He also imagines some fun apps—like playing hide and seek with your drone—as well as some more serious ones, like distinguishing aggressors from non-aggressors for law enforcement purposes.
EnRoute, another drone company, uses AI onboard to navigate and avoid objects. Moving to the Jetson TX2 from the current TX1, said Nvidia’s Barrett Williams, will enable the Zion drone to fly faster and still avoid objects.
Live Planet introduced a $10,000 4K 360-degree 3-D camera for live streaming of video; it uses AI to encode the 3D video in real time. “The camera produces a stream of 65 gigabytes,” chief strategy officer Khayyam Wakil said, far too much data to transmit to a cloud server. “We couldn’t do this product before Jetson,” he said.
It’s pretty clear by now that wastewater injection, a way of disposing of the brackish water used in fracking and other oil and gas drilling processes, can cause earthquakes. But, to date, the response to these injection-caused earthquakes has been reactive. After a recent earthquake in Oklahoma, the state ordered a shutdown of 37 disposal wells in the area.
This week, researchers at Stanford released a free software tool to enable energy companies and regulatory agencies to be more proactive—to calculate, before drilling a well in a particular spot, the probability that an injection there will trigger an earthquake. The Fault Slip Potential tool uses information about known faults in an area, the way stresses act in the earth, and estimates of how much wastewater injection will increase the pore pressure (that is, the pressure of groundwater trapped within tiny spaces inside the rocks below the surface).
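The article doesn’t detail the tool’s internals, but the general approach (propagating uncertainty in stresses, friction, and pore pressure through a Coulomb failure criterion) can be sketched in a few lines. Every number below is an illustrative assumption, not the Stanford tool’s actual model.

```python
# Monte Carlo sketch of probabilistic fault-slip assessment:
# sample uncertain inputs, apply the Coulomb slip criterion, and
# report the fraction of trials in which the fault slips.
# All distributions and values are assumed for illustration.
import random

random.seed(0)  # reproducible runs

def slip_probability(trials=100_000):
    slipped = 0
    for _ in range(trials):
        tau = random.gauss(20.0, 3.0)      # shear stress on fault, MPa (assumed)
        sigma_n = random.gauss(50.0, 5.0)  # normal stress on fault, MPa (assumed)
        mu = random.uniform(0.5, 0.7)      # friction coefficient (assumed range)
        p0 = 20.0                          # ambient pore pressure, MPa (assumed)
        dp = random.uniform(1.0, 5.0)      # injection-induced pressure rise, MPa
        # Coulomb criterion: slip when shear stress exceeds frictional
        # resistance on the effective normal stress.
        if tau >= mu * (sigma_n - p0 - dp):
            slipped += 1
    return slipped / trials

print(f"P(slip) ≈ {slip_probability():.2f}")
```

Raising the injection-induced pressure rise `dp` lowers the effective normal stress and pushes the slip probability up, which is exactly the effect the tool is meant to quantify before a well is drilled.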
“Our tool provides a quantitative probabilistic approach for identifying at-risk faults so they can be avoided,” said graduate student Rall Walsh in a statement to the media. “Our aim is to make using this tool the first thing that’s done before an injection well is drilled.” For the project—funded by the Stanford Center for Induced and Triggered Seismicity (SCITS) and developed in collaboration with ExxonMobil—Walsh worked with Stanford professor Mark Zoback.
Editor’s Note: The following is a letter from the past, present, and future presidents of IEEE-USA in response to a story in View from the Valley that also appeared in the February 2017 issue of IEEE Spectrum as “H-1B Visas by the Numbers.”
As the current, past and future Presidents of IEEE-USA, we read with interest your article on H-1B visa pay [“H-1B Visas by the Numbers,” IEEE Spectrum, February 2017]. IEEE-USA has been actively working with Congress to fix the H-1B visa program for well over a decade. Our experience with the visa suggests that, while accurate, your article missed some essential truths about the H-1B program.
For example, you point out that, according to H1BPay.com, Facebook pays its software engineers in Menlo Park, on average, US $138,294, which is a pretty good salary. However, Smartorg pays software engineers on H-1B visas in Menlo Park only $80,000 annually, which is a ridiculously low salary for the San Jose region.
This difference illustrates an important point about H-1B visas. While some companies pay their H-1B employees salaries equivalent to what American workers get paid, many companies do not. In fact, most H-1B visas are used not by Facebook and other big tech companies, but by outsourcing and consulting companies.
And the salaries paid by those companies tell a different story.
For example, Wipro, a large outsourcing company, paid its 104 program analysts in San Jose exactly $60,000 each in 2016. Brocade, in contrast, paid its programmer analysts $130,000 in the same city.
Similarly, Infosys, the largest user of H-1B visas, paid its 158 technology analysts in New York City, one of the most expensive cities in the world, $67,832 on average last year—not enough to rent a closet in that city.
A close look at H1BPay.com’s data shows that, as you move past the Googles and Microsofts of the IT world, H-1B salaries tend to cluster around the $65,000 to $75,000 level. There is a reason for this. If outsourcing companies pay their H-1B workers at least $60,000, they are exempted from a number of regulations designed to prevent visa abuse.
But $60,000 is far below 2016 market rates for most tech jobs.
In 2014 (the last year we have good data), Infosys, Cognizant, Wipro, and Tata Consultancy used 21,695 visas, or more than 25 percent of all private-sector H-1B visas used that year. Microsoft, Google, Facebook, and Uber, for comparison, used only 1,763 visas, or 2 percent.
What’s the difference? Infosys, Cognizant, Wipro, and Tata are all outsourcing companies. Their business model involves using H-1B visas to bring low-cost workers into the United States and then renting those workers to other companies. Their competitive advantage is price. That is, they make their money by renting their workers for less than companies would have to pay American workers.
This is the real story of the H-1B visa. It is a tool used by companies to avoid hiring American workers, and avoid paying American wages. For every visa used by Google to hire a talented non-American for $126,000, ten Americans are replaced by outsourcing companies paying their H-1B workers $65,000.
This is why IEEE-USA opposes efforts to expand the H-1B visa program.
In contrast, IEEE-USA supports expanding green card programs to make it easier for skilled non-Americans to become American citizens. Unlike H-1B workers, green card holders are paid the same wages as Americans. If not, the green card holders simply quit and find a better paying job—something H-1B workers typically cannot do.
Green cards and immigration, therefore, build our nation’s skilled labor pool and strengthen our economy, while H-1Bs undermine both.
America was built by green card holders, not guest workers.
How much should employers pay U.S. electrical engineers in 2017? That’s a question recruitment firm Randstad tries to answer in its 2017 engineering salary guide, released in February. The answer? It depends on the region.
For example, salaries for engineers in the Southeast with 3 to 10 years of experience range from approximately US $78,000 to $94,000, while Pacific-region salaries range from $88,000 to $106,000.
The 2017 Randstad salary guide covers a range of engineering specialties, including industrial, civil, and mechanical engineers, in nine U.S. regions. Detailed results are available here.