View From the Valley


Why Hardware Engineers Have to Think Like Cybercriminals, and Why Engineers Are Easy to Fool

The future of cybersecurity is in the hands of hardware engineers. That’s what Scott Borg, director of the U.S. Cyber Consequences Unit, told 130 chief technical officers, engineering directors, and key researchers from MEMS and sensors companies and laboratories Thursday morning.

Borg, speaking at the MEMS and Sensors Technical Congress, held on the campus of Stanford University, warned that “the people in this room are now moving into the crosshairs of cyberhackers in a way that has never happened before.”

And Borg should know. He and his colleagues at the Cyber Consequences Unit (a nonprofit research institute) predicted the Stuxnet attack and some major developments in cybercrime over the last 15 years.   

Increasingly, hackers are focusing on hardware rather than software, particularly industrial equipment, he indicated.

“Initially,” he said, “they focused on operations control, monitoring different locations from a central site. Then they moved to process control, including programmable logic controllers and local networks. Then they migrated to embedded devices and the ability to control individual pieces of equipment. Now they are migrating to the actual sensors, the MEMS devices.”

“You can imagine countless attacks manipulating physical things,” Borg said. And imagining those things definitely keeps him up at night—it’s not easy being a cybersecurity guru.

“Yesterday,” he said, while on a tour of a nanofab facility, “I saw tanks full of dangerous chemicals, controlled by computers moving things in and out. I immediately thought about which would be the prevailing direction of wind and how you could rupture the tanks with cyberattack. Whenever I look at an appliance, I think what could be done to it that causes maximum damage and embarrassment.”

The move to attacking hardware, just like any cyberattack, comes because hackers are thinking about the economics, Borg says. Hackers always profit in some way from their attacks, though the gain is not always monetary.

One way hardware hackers can profit from hurting a company is by taking advantage of the resulting drop in its stock price; stock manipulation is a growth area for cybercrime in general, says Borg.

“There is a limit to how much you can steal from credit card fraud; there is no limit to how much you can make in taking a position in a market and making something happen,” Borg says. “You can short a company’s stock in a highly leveraged way, then attack the company in a way that makes stock fall, reinvest on the way down, and multiply your investment hundreds of times. This is a big growth area for cybercrime; it has been done multiple times already, but it is really just starting to get under way. This is going to be a huge area for cybercriminals.”
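To make the economics concrete, here is a rough, purely hypothetical sketch of the scheme Borg describes; the starting capital, leverage, and stock prices below are invented for illustration, not drawn from any real attack.

```python
# Hypothetical illustration of the leveraged-short scheme Borg describes.
# All numbers are invented for the example.

capital = 100_000          # attacker's starting capital, in dollars
leverage = 10              # broker allows 10x leveraged short positions

# Short the stock at $50; the attack drives it down in stages.
prices = [50, 40, 30, 20]  # price at each stage of the decline

for entry, cover in zip(prices, prices[1:]):
    # Profit fraction on a short = (entry - cover) / entry, amplified by leverage.
    gain = leverage * (entry - cover) / entry
    capital *= 1 + gain
    print(f"shorted at ${entry}, covered at ${cover}: capital now ${capital:,.0f}")

# Re-shorting on the way down compounds the gains, which is how a
# six-figure stake can multiply many times over in a single decline.
```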

It is going to be up to engineers to stop this coming hardware cybercrime wave. And it’s not going to be easy because “engineers aren’t as easy to fool as scientists, but they are still really easy to fool.

“Engineers believe in data, in gauges, in measurements. They are a little less easy to fool than scientists in that they build physical systems that operate, and when they fail, they do have to try to figure out why and what real world effects are. But engineers aren’t used to dealing with unkind adversaries. They believe in statistics, where statistical distributions are normal, where probabilities can deal with independent variables. And statistics doesn’t work in a cyberworld. If you are up against a cunning adversary, who will behave in ways outside of normal, it is hard to use any of the techniques we use in the natural world. A cyberadversary will take advantage of unlikely circumstances.”
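A toy example may help make Borg’s point about statistics concrete (the construction is mine, not his): an anomaly detector built on the usual Gaussian assumptions catches random faults, but not an adversary who knows the threshold.

```python
# Toy illustration (not from Borg): why Gaussian assumptions fail
# against a cunning adversary.
import random
import statistics

random.seed(1)
normal_ops = [random.gauss(100.0, 5.0) for _ in range(1000)]  # benign sensor data
mu = statistics.mean(normal_ops)
sigma = statistics.stdev(normal_ops)

def flags(reading, mu=mu, sigma=sigma):
    """Classic 3-sigma anomaly detector."""
    return abs(reading - mu) > 3 * sigma

# A random fault is an outlier and gets caught...
print(flags(160.0))   # True

# ...but an attacker who knows the detector nudges the process
# just inside the threshold, repeatedly, and is never flagged.
attack = mu + 2.9 * sigma
print(flags(attack))  # False -- damage accumulates, detector stays silent
```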

But, he said, if engineers, particularly design engineers, learn to understand the cybercriminal and think proactively about cyberattacks, they can often improve cybersecurity and do it for free.

“Increasing security isn’t always about layering on security [to a completed system], but about how you implement a certain function in the first place, and that choice often doesn’t cost more,” Borg says. “Decisions that are made in engineering at really fine-grained levels affect the costs of carrying out a cyberattack. Even a small sensor will have consequences for cybersecurity, not always in the immediate device, but as it develops into a product line.”

Engineers, therefore, need to look at their products from the standpoint of the attacker, and consider how an attacker would benefit from a cyberattack and how to make undertaking that attack more expensive. It’s all about working to increase an attacker’s costs, he says.

“As we move into embedded controllers and microdevices, we move into a realm that cybersecurity specialists like me haven’t explored that much yet,” he says. “The hackers haven’t explored it yet either,” but, Borg warns, they will.

“You people are now in the crosshairs; [design] decisions you are making will have powerful security implications. They will in some cases wipe out your competitive advantage, or give you a huge one. Nobody can tell you what to do beyond what I’ve told you—that it’s all about the economics,” he says. “All I can do is make you aware of the world we have moved into, to make you aware that you are now in the crosshairs.”


Who’s Hiring? (Software Companies) Who’s Firing? (Hardware)

A sampling of 2017’s hiring and firing announcements shows that the year, so far, is generally a good one for software engineers and not so good for their hardware counterparts. And while headline news is incomplete and anecdotal at best, more comprehensive statistics suggest that it represents a trend: software up, hardware down.

Here are the layoff and workforce expansion plans announced by tech companies so far this year that made headlines.

In hiring news:

  • Uber is hiring like crazy in Pittsburgh; according to Quartz.com, it’s looking for 48 engineers, mostly people to work in artificial intelligence and robotics in its advanced technologies group. Recode reported, however, that some of those are replacements for 20 engineers who recently quit as part of a “mini civil war” in the division.
  • Uber competitor Grab has announced plans to hire 800 R&D engineers over the next two years to staff R&D centers in the United States and Asia, mostly working on machine learning, predictive data analytics, user interfaces, and digital payments.
  • Didi is hiring self-driving car engineers for its new Mountain View lab, expecting to have “dozens” of researchers working at that facility, focused on AI security and intelligent driving techniques. First, however, the company is looking to hire a technical recruiter.
  • Ford picked up 400 engineers—300 in Canada and 100 in the United States—to work on connected and autonomous cars. It doesn’t need to find these engineers in the wild, however; it’s picking them up from BlackBerry.
  • Baidu announced in March that it plans to double its Sunnyvale R&D workforce, adding 150 engineers to its AI efforts.
  • Amazon in March announced that it will add 1300 jobs in East Palo Alto. A report by city staff indicated that the Amazon employees will be “primarily engaged in software programming and development.” That followed an announcement that the company will hire 5000 people for a wide range of technology development positions in the United Kingdom.
  • GE Healthcare in March announced that the company will hire 5000 software engineers by 2018, working mainly on analytics, cloud-based imaging, population health, and machine learning.
  • Bosch announced that it will hire 3000 engineers in India, mostly new graduates in software, IT, analytics, and other technologies.
  • And Indian outsourcing company Infosys announced that it will be hiring up to 10,000 technology professionals in the United States over the next two years, starting with recruiting 100 for a new office in Indianapolis.

In layoff news:

  • Oracle early this year launched a layoff of perhaps as many as 1800 engineers, mostly from operations with roots in its acquisition of Sun Microsystems.
  • Boeing has cut 1332 engineering and technical jobs from its workforce so far this year, according to the union representing its engineers, with 300 more employees taking buyouts.
  • Western Digital in April announced that it will lay off a total of 182 employees at two California facilities.
  • Juniper Networks announced a layoff of about 6 percent of its workforce—about 570 people—in March, denying reports of a 9 percent cut.
  • Pandora announced early this year that it is cutting 7 percent of its workforce—or 155 employees.
  • GoPro in March announced plans to cut 270 jobs, following an announcement of 200 job cuts late last year.
  • Little flying camera maker Lily crashed this year, leaving about 40 without jobs and pre-order customers clamoring for refunds.

Companies both hiring and firing:

  • According to its 2016 annual report, IBM ended the year with a workforce of 380,000. That’s up slightly from 377,757 at the end of 2015. And the company announced late last year that it will hire 7000 employees in the United States this year. But IBM has also been steadily laying off tech workers around the world throughout last year and into this year, continuing an effort the company calls workforce rebalancing.
  • Microsoft has also been churning employees, cutting 700 early this year as part of 2850 cuts announced in 2016; but the company is also hiring as it shifts to selling software by subscription instead of for installation on computers.

What does this all mean? According to Daniel Culbertson, an economist with job search firm Indeed, it reflects a broad trend toward software engineering and away from hardware engineering.

Culbertson pointed me to U.S. Bureau of Labor Statistics (BLS) data, where software engineering jobs show up in the “computer/mathematical” category and more traditional electrical engineering and other hardware engineering jobs show up in the “architect/engineer” category. In 2014, he noted, there were far more people employed in the computer category—4.06 million, compared with 2.5 million in the engineer category. And by 2024 the gap is projected to widen, with the computer category growing by 13 percent and the engineer category by just 2.7 percent. “The only occupations expected to grow slower [than architect/engineer] are farming, production, and office and administrative support,” he notes, “all occupations likely to be impacted by automation.”
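To see how those growth rates widen the gap, here is the simple arithmetic, using the BLS figures Culbertson cites (the projection calculation itself is mine, not his):

```python
# Projecting the BLS figures Culbertson cites out to 2024.
computer_2014 = 4.06e6   # "computer/mathematical" jobs in 2014
engineer_2014 = 2.5e6    # "architect/engineer" jobs in 2014

computer_2024 = computer_2014 * 1.13    # projected 13% growth
engineer_2024 = engineer_2014 * 1.027   # projected 2.7% growth

print(f"2014 gap: {computer_2014 - engineer_2014:,.0f} jobs")
print(f"2024 gap: {computer_2024 - engineer_2024:,.0f} jobs")
# 2014 gap: 1,560,000 jobs; 2024 gap: 2,020,300 jobs -- about 30% wider
```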

Drilling down a little further into the BLS data to look at software developers and programmers versus computer hardware engineers, he notes that there are today about 1.6 million software developer and programmer jobs in the United States, a number projected to grow by 12.5 percent through 2024. Meanwhile, he says, there are already fewer computer hardware engineer jobs, and the BLS projects that category to grow by only 3.1 percent through 2024.

“This is a trend we see in our data as well,” Culbertson said, noting that in the first quarter of 2017 Indeed posted 3.1 times as many jobs for software developers as for other kinds of engineers.

Culbertson also noted that the announcements of companies planning hiring pushes to find people with expertise in artificial intelligence are part of a trend as well; job postings on Indeed.com for AI positions have nearly doubled since April of last year.

HBO Silicon Valley character Richard Hendricks

HBO’s "Silicon Valley" Joins the Push for a Decentralized Web

In the first episode of the new season (Season 4) of HBO’s “Silicon Valley,” beleaguered entrepreneur Richard Hendricks is asked by eccentric venture capitalist Russ Hanneman what, given unlimited time and resources, he would want to build.

“A new Internet,” says Hendricks.

“Why?” asks Hanneman.

Hendricks babbles about telescopes and the moon landing and calculators and the massive computing power in phones today, and says: “What if we used all those phones to build a massive network?... We use my compression algorithm to make everything small and efficient, to move things around…. If we could do it, we could build a completely decentralized version of our current Internet with no firewalls, no tolls, no government regulation, no spying. Information would be totally free in every sense of the word.”

Hel-lo! Decentralized Internet? That’s a concept I’ve heard bubbling around the tech world for a while now, but not so much in the consciousness of the general public. Is HBO’s “Silicon Valley” about to take the push for a Decentralized Web mainstream? And is what Hendricks talks about on the show really what the Decentralized Web is all about?

I contacted Brewster Kahle, founder of the Internet Archive and pioneer of the Decentralized Web movement. Kahle first pitched the idea of a Decentralized Web, initially describing it as “Locking the Web Open,” in February 2015 at the first meeting hosted by NetGain, a partnership of some of the largest U.S. foundations aimed at strengthening digital society. In August of that year he published a manifesto (he calls it a white paper) making a detailed case for the Decentralized Web, and in June 2016 he hosted a conference to bring key potential players together to move the project forward.

The Decentralized Web, he told me, “would be everywhere and nowhere. There would be no web servers, it would be a peer-to-peer backend, so if any piece of hardware went down, you wouldn’t lose websites. It would be more like the Internet itself is today—if a piece goes down, you can route around the problem. The current Web isn’t like that.

“Today, if you stand in front of a website, you can tell all the traffic going to it. We know that GCHQ, the NSA of the United Kingdom, recorded all the IP addresses going into WikiLeaks.”

This kind of thing, he says, “would be far more difficult in a decentralized world.”
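One building block that comes up in many decentralized-web designs is content addressing: fetching a page by the hash of its content rather than from a single server, so any peer holding a copy can serve it and the reader can verify it. A minimal sketch of the idea (my illustration; it isn’t a description of any specific system Kahle has proposed):

```python
# Minimal sketch of content addressing, an idea underlying many
# decentralized-web designs (illustrative only).
import hashlib

class PeerStore:
    """A peer that stores content keyed by its own hash."""
    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blobs[key] = data
        return key  # the "address" is derived from the content itself

    def get(self, key):
        return self.blobs.get(key)

# Replicate a page across several peers; any surviving peer can serve it,
# and the hash lets the reader verify it wasn't tampered with in transit.
peers = [PeerStore() for _ in range(3)]
page = b"<html>my website</html>"
address = None
for p in peers:
    address = p.put(page)

peers[0].blobs.clear()  # one "server" goes down...
copy = next(p.get(address) for p in peers if p.get(address) is not None)
assert hashlib.sha256(copy).hexdigest() == address  # ...the content survives
```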

Is that what the fictional Hendricks was talking about? Kahle, who watched the episode, says yes, mostly.

Verdigris CEO Mark Chung has developed a system to track down energy wasters in homes and businesses by analyzing magnetic signals at the circuit breaker box.

Want to Know What’s Happening in a Building? Listen in at the Breaker Box, Says Startup Verdigris

Mark Chung was a chip guy. Not a software developer, certainly not a power systems engineer. He spent nearly 15 years in the semiconductor industry after getting his master’s and bachelor’s degrees in electrical engineering at Stanford in 1999. He’d been an engineer at AMD for nearly six years, designing chips like the Athlon and Opteron, then at startup PA Semi, working on microprocessors that, he anticipated, would go into Apple computers. (Apple later purchased the company, and the designs ended up in iPhones.)

In 2008 Chung was a principal engineer at RMI, a company that later merged with NetLogic and was acquired by Broadcom. One month that year, a month he and his family had mostly spent out of town, he received a surprisingly large electric bill: $560, when his typical bill was around $100. He called the local electric company, and a representative assured him that his smart meter was working just fine and the bill was correct. At work that week, he got into what he called “an engineer debate” with colleague Jonathan Chu over what could have caused the large bill and how to trace the source of the problem. That weekend, he and Chu purchased two Kill-A-Watt meters—inexpensive plug-in devices designed to monitor the electricity use of appliances—and went from room to room around Chung’s house, spot-checking various appliances and gadgets.

“We didn’t see anything wrong,” he says.

Perhaps, the two reasoned, the problem was intermittent, and could only be identified with longer-term monitoring. So they hacked the meters to add WiFi chipsets and send data onto the home’s wireless network, writing software on an old Dell Inspiron to gather and display the data.

No anomalies emerged. And the next month the electric bill was just as high.

“That’s when we had an epiphany,” Chung says. “We realized we were doing this a stupid way; the smart way would be to just look at one point at all the electricity and unpack it to figure out where it is going.”
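The “smart way” Chung describes, measuring total power at a single point and unpacking it appliance by appliance, is generically known as load disaggregation. Here is a toy sketch of the simplest version of the idea (my illustration; Verdigris’s actual system analyzes high-resolution magnetic signals at the breaker box and is far more sophisticated):

```python
# Toy load-disaggregation sketch: spot appliances by their step changes
# in aggregate power. Illustrative only -- the real system works on
# high-resolution current waveforms, not coarse wattage samples.

# Known appliance signatures: name -> wattage step when switching on/off.
SIGNATURES = {"fridge compressor": 150, "space heater": 1500, "dryer": 3000}

def disaggregate(samples, tolerance=50):
    """Match each jump in total power to the closest known appliance."""
    events = []
    for t in range(1, len(samples)):
        step = samples[t] - samples[t - 1]
        if abs(step) < tolerance:
            continue  # noise, not a switching event
        name = min(SIGNATURES, key=lambda n: abs(SIGNATURES[n] - abs(step)))
        state = "on" if step > 0 else "off"
        events.append((t, name, state))
    return events

# One point of measurement, whole-house wattage over time:
total_power = [300, 300, 1800, 1800, 4800, 4800, 3300, 300]
for t, name, state in disaggregate(total_power):
    print(f"t={t}: {name} switched {state}")
```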

The formula for the Weissman Score

Weissman Score Calculators Migrate Online, But Metric Needs Some Improvements

The first season of HBO’s “Silicon Valley” was all about the Weissman Score. Vying teams of compression experts used this made-for-TV metric designed to judge the success of compression algorithms.

During the second season, the TV show’s spotlight turned away from compression scores to business drama, as funding and power moved from player to player. But, off screen, the Weissman Score migrated to the real world, where a few professors tested it in papers and in classes. What’s happened since?

“People are indeed using it, but are trying to obfuscate its origins,” says Tsachy Weissman, the Stanford professor who, along with electrical engineering graduate student Vinith Misra, came up with the algorithm at the request of a technical advisor to the show. “They do that because they want to be taken seriously.”

It has certainly become easier to use since it first appeared on the show in 2014; GitHub and other open source code repositories offer downloadable scripts to allow developers to quickly generate Weissman scores for their code.

The first known early adopters of the Weissman score aren’t currently using the metric, however. Marcelo Weinberger, a distinguished scientist at the University of California at Berkeley, is no longer teaching information theory.

Jerry Gibson, a professor at the University of California at Santa Barbara, has offered it up to students to use in a final course project without telling the students where the formula came from; the one student who has taken him up on it used it to compare two different data compression schemes, in the same way it was used in the show.

The score has some issues, Gibson says. “It just includes the log of the time to compress as a factor in comparing compression algorithms in addition to the compression ratio, so more complicated schemes will be penalized.” Also, he says, “it is necessary to choose a common platform for the compression methods to use, and that choice can impact the final results.” Finally, he points out, the score, as used in the TV show, is misleading, “since the Weissman score implies lossless compression, which is never considered seriously for any video applications since a much lower rate can be achieved by allowing losses that the human cannot perceive.”

As for why Gibson didn’t mention that the score came from the show, he says that he wanted the students to take the measure seriously, but it turned out that the student who did use it was indeed familiar with it from the show.

Dror Baron, an assistant professor of electrical and computer engineering at North Carolina State University, was planning to use it to replace scattergraphs in a paper he had written, but the paper was too far along in the review process to do so. He also has some concerns about what he sees as a minor weakness in the algorithm: its failure to define units. He's encouraging Weissman to fix this issue.

Baron explains by way of example:

“The score contains a term log(T̄)/log(T), where T and T̄ are the runtimes of the two algorithms. Consider the same pair of runtimes expressed three ways:

1. If the times are 60 seconds and 30 seconds, respectively, then log(T̄)/log(T) = log(60)/log(30) = 1.204.

2. If the units are minutes, then T = 0.5 and T̄ = 1, so log(T̄)/log(T) = log(1)/log(0.5) = 0.

3. If the units are hours, then log(T̄)/log(T) = log(1/60)/log(1/120) = 0.8552.

You can see that varying the units gives pretty different results, which could be messy.”
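Baron’s three cases are easy to reproduce. Below is a quick sketch using the score’s standard form, W = α(r/r̄)(log T̄/log T), where r and T are the compression ratio and runtime of the algorithm under test and the barred quantities belong to a reference compressor; α and the compression ratios are invented here, since only the time term matters for the demonstration.

```python
# Reproducing Baron's unit-dependence example. The score's standard form:
#   W = alpha * (r / r_bar) * (log(T_bar) / log(T)),
# where r, T describe the algorithm under test and r_bar, T_bar a
# reference compressor. alpha and the ratios below are invented.
import math

def weissman(r, T, r_bar, T_bar, alpha=1.0):
    return alpha * (r / r_bar) * (math.log(T_bar) / math.log(T))

r, r_bar = 2.5, 2.0  # invented compression ratios

# The same two runtimes (30 s under test, 60 s for the reference),
# expressed in three different units:
for unit, scale in [("seconds", 1.0), ("minutes", 1 / 60), ("hours", 1 / 3600)]:
    T, T_bar = 30 * scale, 60 * scale
    print(f"{unit:>7}: W = {weissman(r, T, r_bar, T_bar):.4f}")

# seconds: W = 1.5047, minutes: W = 0.0000, hours: W = 1.0690
# Same algorithms, three different scores -- the units change the answer.
```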

Baron still believes, though, that “the score gets to the heart of the matter” of compression algorithms.

Correction made to units on 26 April 2017.

Facebook's Yael Maguire explains the company's Tether-tenna, a helicopter and cable system to replace traditional cell phone antennas

Facebook Aims to Remake Telecom With Millimeter Waves and Tether-tennas

A world of millimeter-wave networks, laid out by computer, crisscrossing cities and into the stratosphere, where cell phone towers can be easily replaced by tethered autonomous copters—that’s the telecommunications infrastructure of the future. So says Facebook’s Yael Maguire, head of the company’s Connectivity Lab.

Speaking at Facebook’s F8 developer conference in San Jose today, Maguire said that Facebook is aiming to bring down the cost of connecting by an order of magnitude, working to develop the building blocks of more flexible and extensible networks.

Dmitri Pavlichin poses with whiteboards he created for the real world; he also does the math for HBO's Silicon Valley

The New Brain Behind the Whiteboards—and More—for HBO’s "Silicon Valley"

When the HBO show that became “Silicon Valley” was still in development, and its creators decided its fictional startup would be in the compression business, they turned to Stanford professor Tsachy Weissman to come up with some novel and at least somewhat plausible compression technology. Weissman brought in electrical engineering graduate student Vinith Misra to help; Misra went on to field many technical questions for the show in its first two years, as a student and then as researcher working for IBM on the Watson team.

IBM was just fine with that relationship. But last year Misra changed jobs—he is now a senior data scientist at Netflix—and with HBO a Netflix competitor, Netflix was not so fine with the consulting arrangement. It was time to pass the baton. And who else to give it to but another student in Weissman’s Stanford lab—the one now seated at Misra’s former desk?

That student, Dmitri Pavlichin, is having a great time with the job.

“The gig is pretty irregular,” he says, “a month or two of nothing, then an intense couple of days, in which I have to put together something that is going to be included in the show, like a paper, or a whiteboard. They’ll give me a snippet of dialog to look at, or tell me that someone finds a document and I have to make the document be kind of interesting.”

The whiteboards themselves are redrawn, based on Pavlichin’s text or sketches (the notes on the whiteboard in the photo above, however, are in Pavlichin’s own writing).

Pavlichin isn’t the only compression expert consulting for the show; the number of consultants, he says, has expanded since Season 1.

In real life, Pavlichin, who has a Ph.D. in physics and wrote a thesis on quantum optics, is now a postdoc working on research in genomic compression, that is, the most efficient ways to compress the explosion of genomic data created by DNA sequencers. Will any of that technology make it onto the show? Pavlichin can’t say anything specific about upcoming episodes, but promises this season, which starts Sunday, will have more technical content than Season 2, which focused more on the business issues involved in creating products based on Pied Piper’s compression algorithm than the algorithm itself.

Facebook's Mark Zuckerberg focuses on augmented reality and camera apps at Facebook's F8 conference

Facebook Is Going All In on Augmented Reality

Have you noticed that most Facebook apps these days have a camera button built in? Well, says Facebook CEO Mark Zuckerberg, now it’s time to use those buttons to turn on augmented reality for just about everything you’re doing in Facebook’s world.

“We are making the camera the first augmented reality platform,” Zuckerberg said, kicking off Facebook’s F8 developer conference in San Jose this morning. “I used to think glasses would be the first mainstream augmented reality platform,” he said. But he’s changed his mind.

By “camera,” Zuckerberg really means the camera button (which allows users to directly access a mobile device’s actual camera) and related photo processing tools in Facebook and related apps. Now, Zuckerberg said, Facebook is going to roll out tools to allow developers to create augmented reality experiences that can be reached through that photo feature. These tools will include precise location mapping, creation of 3D objects from 2D images, and object recognition.

Developers, he expects, will be able to apply these tools to generate virtual images that appear to interact directly with the real environment. For example, fish will swim around on your kitchen table and appear to go behind your real cereal bowl, virtual flowers will blossom on a real plant, virtual steam will come out of a real coffee mug, or a virtual companion’s mug will appear next to yours on your table in order to make your breakfast routine feel a little less lonely. Augmented reality will also allow users to leave notes for friends in specific locations—say, at a table in a particular restaurant—or let them view pop-up labels tagged to real world objects.

“Augmented reality will let us mix the digital and the physical,” Zuckerberg said in his keynote address to 4000 developers, “and that will make our physical reality better.”

Zuckerberg also predicted the advent of augmented reality street art, and suggested that as technology makes people working in traditional jobs more productive, more and more people will contribute to society through the arts.

Zuckerberg said that it will take a while to roll some of these experiences out into the world, but developers can get started now, with a closed beta version of its AR Studio software now launching. Also available to users beginning today: a limited library of augmented effects.

IEEE Spectrum Senior Editor Tekla Perry, wearing a prototype light field display, is enthralled by a sea turtle swimming on the palm of her hand, observed using a prototype of Avegant's mixed reality technology

A Revealing Leap Into Avegant’s Magical Mixed-Reality World

I’m generally not the person you want testing your virtual, augmented, or otherwise “enhanced” reality technology. I am horribly susceptible to motion sickness, my presbyopia makes focusing on Google Glass–like displays pretty much impossible, and even 3D movies do not make my eyes happy. Using a good virtual reality system, I can go maybe 30 seconds before I have to escape to the real world; with a phone-based system, even a couple of seconds is too much.

But last week I spent at least 15 minutes (though it felt like less than five) completely engaged in a sampling of virtual worlds seen through Avegant’s mixed reality viewer. The experience was magical, enthralling, amazing, wonderful—pick your superlative. I didn’t get nauseous, or headachy, or feel any eyestrain at all. Indeed, my eyes felt rested (probably because that was 15 minutes not spent in front of a computer or phone screen). Also a wonderful part of the experience: the fact that the company didn’t bother with extreme security measures or nondisclosure agreements (though executives are not talking specific technical details until patent filings are complete).

Avegant is a four-year-old startup based in Belmont, Calif. (Its name is a mashup of the names of founders Edward Tang and Allan Evans.) Its first product, the Glyph, a head-mounted display typically used for personal entertainment viewing, has been shipping since February of last year.

The company announced its transparent Light Field Display technology last month. It hasn’t said when this will be ready for manufacture, though Tang points out that the Glyph’s success shows that the company knows how to design products for manufacture and bring them to market.

Avegant’s prototype mixed reality system uses a headband to position the Avegant display. It is driven by an IBM Windows PC with an Intel i7 processor and an Nvidia graphics card running the Unity game engine.

The images, explained cofounder Tang, now chief technology officer, are projected onto the retina by an array of MEMS micromirrors, each of which controls one pixel.

That, so far, is the same as the company’s Glyph system. But unlike a standard micromirror display, which reflects light straight at the person viewing it, these light field images are projected at different angles, mimicking the way light in the real world reflects off objects to hit a person’s eyes. The difference in these angles is particularly dramatic the closer someone is to the object, creating distinct and separate focal planes; the eye naturally refocuses when it moves from one plane to another.

To avoid having the eyes deal with these multiple focal planes, explained Tang, mixed reality systems like Microsoft’s HoloLens tend to keep viewers a meter or two away from objects. Light field technology, however, can use different focal planes for different objects simultaneously, so the user perceives even very close-up objects to be realistic. (Tang makes the case for light field technology in the video below.)

To date, Tang says, most attempts to bring light field technology into head-mounted displays have involved tricky-to-manufacture technology like deformable mirrors or liquid lenses, or approaches that take huge amounts of computing power to operate, like stacked LCDs.

“We created a new method,” he said, “that has no mechanical parts and uses existing manufacturing capabilities, with a level of computation that isn’t particularly high; it can run on standard PCs with graphics cards or mobile chipsets.”

The effect is designed to be natural—that is, you see virtual objects in the same way you normally see real objects, with no eyestrain caused by struggling to focus. And, in the demo I was shown, it absolutely was.

I went through two mixed reality experiences in a slightly dim but not dark room with some basic furniture. The room was rigged with off-the-shelf motion tracking cameras to help map my position; the headset I wore was tethered to a PC. After a short calibration effort that allowed me to adjust the display to match the distance between my pupils, I entered a solar system visualization, walking among planets, peering up close at particular features (Earth seemed to be a little smaller than my head in this demo), and leaning even closer to trigger the playing of related audio.

Clear labels hovered near each planet, which brings up an interesting side note: I wasn’t wearing my reading glasses, but the labels, even close at hand, were quite clear. Tang mentioned that the developers have been discussing whether, for those of us who do need reading glasses, it would be more realistic to make the virtual objects as blurry as the real ones. I vote no. I didn’t find it jarring that my hand, as I reached for planets, was a little fuzzy, particularly, perhaps, because the virtual objects appeared brighter than the real-world ones. And it was quite lovely having so much of what I was seeing be clear.

At one point in the demo, while I was checking out asteroids near Saturn, Tang suggested that I step into the asteroid belt. I was a bit apprehensive; with my VR sickness history, it seemed that watching a flow of asteroids whizzing by me on both sides would be a uniquely bad idea, but it went just fine, and I could observe quite a bit of detail in the asteroids as they flowed past me.

The second demo involved a virtual fish tank. Tang asked me to walk over to a coffee table and look down at the surface; the fish tank then appeared, sitting on top of the table. I squatted next to the tank and put my hand into it. I reached out for a sea turtle; it was just the right size to fit in my palm. I followed it with my cupped hand for a while, and started feeling a whoosh of air across my palm whenever it swept its flippers back. I wondered for a moment if there was some virtual touch gear around, but it turned out to just be my mind filling in a few blanks in the very real scene. Tang then expanded the fish tank to fill the room; now that sea turtle was too big to hold, but I couldn’t resist trying to pet it. Then, he told me, “Check out that chair,” and in a moment, a school of tiny fish swept out from under the chair legs and swooped around the nearby furniture.

After convincing me to leave the fish demo (I was enjoying the experience of snorkeling without getting wet), Tang directed me to walk towards a female avatar. She was a computer-generated human that didn’t quite leave the uncanny valley—just a standard videogame avatar downloaded from a library, Tang said. But he pointed out that I could move up and invade her personal space and watch her expression change. And it certainly did seem that this avatar was in the room with me.

Throughout all the demos, I didn’t encounter any vision issues, focus struggles, or other discomfort as I looked back and forth between near and far and real and virtual objects.

I have not been one of the anointed few who have tested Magic Leap’s much-ballyhooed light-field-based mixed reality technology (and given the company’s extreme nondisclosure agreements, I likely couldn’t say much about it if I had). So, I don’t know how Avegant’s approach compares, though I’d be willing to put Avegant’s turtle up against Magic Leap’s elephant any day.

 What I do know is that it absolutely blew me away. I’m eager to see what developers eventually do with it, and I’m thrilled that I no longer have to struggle physically to visit virtual worlds.


Where Are the U.S. Firms That Pay the Best Salaries? Silicon Valley (Mostly)

Tech companies in general, and Silicon Valley companies in particular, dominate a just-released list of the 25 highest-paying employers in the United States compiled by online recruitment firm Glassdoor. Providers of software and tech services claimed the most spots, but a few hardware providers—like graphics processing chip company Nvidia and network hardware maker Juniper—made the cut as well. Companies involved in networking, cloud computing, and data analytics have a big presence on the list. The only non-tech companies on the list, according to this analysis, are in the management consulting business.

As Glassdoor chief economist Andrew Chamberlain explained in a blog post, consulting firms pay high salaries because they want their employees to have established personal contacts and relationships in specialized fields.

“In technology,” he wrote, “we continue to see unprecedented salaries as the war for talent is still very active, largely due to the ongoing shortage of highly skilled workers needed.”

The Silicon Valley companies in the top 25 are:

  • VMware, based in Palo Alto, at number 3 with a median total compensation of $167,050
  • Splunk, based in San Francisco, at number 4 with a median total compensation of $161,010
  • Cadence Design Systems, based in San Jose, at number 5 with a median total compensation of $156,702
  • Google, based in Mountain View, at number 6 with a median total compensation of $155,250
  • Facebook, based in Menlo Park, at number 7 with a median total compensation of $155,000
  • Nvidia, based in Santa Clara, at number 8 with a median total compensation of $154,000
  • Amazon Lab126, based in Sunnyvale, at number 10 with a median total compensation of $152,800
  • Juniper Networks, based in Sunnyvale, at number 11 with a median total compensation of $150,000
  • LinkedIn, based in Mountain View, at number 12 with a median total compensation of $150,000
  • Salesforce, based in San Francisco, at number 13 with a median total compensation of $150,000
  • Synopsys, based in Mountain View, at number 15 with a median total compensation of $148,000
  • Informatica, based in Redwood City, at number 16 with a median total compensation of $147,400
  • Walmart eCommerce, based in San Bruno, at number 20 with a median total compensation of $143,500
  • Visa Inc. (a firm categorized as finance, but one that in many ways behaves like a tech company), based in Foster City, at number 21 with a median total compensation of $142,000
  • Twitter, based in San Francisco, at number 22 with a median total compensation of $142,000
  • Palo Alto Networks, based in Santa Clara, at number 24 with a median total compensation of $140,020.

Tech companies outside the Bay Area that made the high-salaries list included Dimension Data, Broadcom, Microsoft, F5 Networks, and Akamai.

The full list is here.


View From the Valley

IEEE Spectrum’s blog featuring the people, places, and passions of the world of technologists in Silicon Valley and its environs.
Contact us:  t.perry@ieee.org

Editor
Tekla Perry
Palo Alto, Calif.