View From the Valley

Checking in with Andrew Ng at Baidu’s Blooming Silicon Valley Research Lab

The road winds through a construction zone just off the intersection of the 237 and 101 freeways in Sunnyvale, Calif. Scatterings of completed buildings, sporting new plantings of drought-tolerant grasses, are already occupied; other buildings are going up quickly, including a new fire station. There’s Nissan’s new Silicon Valley research center, a well-financed medical device startup called Spiracur, a digital cash startup called Quisk, and a biotech startup incubator. And there is Baidu’s Silicon Valley AI Lab—my destination along this dusty road crowded with construction vehicles.

It’s good to spend time in a new research lab; there’s not only fresh paint and hip decor—like living walls of plants—but also fresh, excited faces and empty desks waiting to be filled.

There’s also a bit of symmetry to this visit. In mid-2014, I spent a morning on just the other side of nearby Moffett Field watching a far more somber group of researchers moving out of a suddenly closed division of Microsoft Research. Baidu’s Silicon Valley AI Lab had quietly started a few months earlier; now it has some 60 researchers—just about the same number that moved out of Microsoft Research Silicon Valley. One of the first technologies to emerge from the Baidu lab, its Deep Speech system, which uses a single algorithm to learn both English and Mandarin, has just started rolling out to some users in Beijing. And in January, the lab released open source AI software.

The AI researchers aren’t alone in Baidu’s building. They share the space with about 100 other engineers and computer scientists working on other projects for the giant Chinese search company. Baidu is generally tapping into Silicon Valley talent to work on a variety of projects; the fastest-growing one besides AI is autonomous vehicle research.

The AI Lab and the autonomous vehicle group are both led by Andrew Ng, a professor at Stanford since 2002 (he holds the title “associate professor by courtesy” and continues to teach one class). He started the Google Brain project in 2011, and, while at Stanford, launched the first successful massive open online courses (MOOCs). In 2012, he spun them off to start the educational technology company Coursera. Today Ng, as chief scientist for Baidu, oversees the company’s research around the world, including the Silicon Valley AI Lab, as well as the company’s Institute of Deep Learning and Big Data Lab, both in Beijing.

Here’s what Ng has to say about how research works at Baidu, Baidu’s autonomous vehicle strategy, and plans for expansion.

On moving from Stanford to corporate research:

The pace here is much different from a university. At universities, the gap between publishing a paper about research and using the research to help normal people is huge. You would publish a paper, then, two years later, someone might read it who decides to work on it further, and years later it becomes something. Here the cycles of innovation are much faster.

We also think about research differently. Our goal is to help people, but the world has an infinite supply of interesting and important problems. We might have 50 projects we’d like to tackle—and we rely on feedback from users to help us prioritize them. That enables us to do better research.

We do indeed do basic research here; like a university, we value basic research. But we are an end-to-end research organization: we don’t want to throw a project over a wall; we want to follow it through to the end.

On Baidu’s approach to autonomous driving technology: 

We believe the approach of creating a car that can autonomously drive everywhere and be safe everywhere is beyond today’s technology. Instead, we are looking initially at shuttle routes and bus routes, routes that are, perhaps, a modest 20 miles, driven in a big circle, or back and forth.

We think if all you are doing is driving a 20-mile route, the technology is indeed within striking distance of making that safe. We plan to commercialize this in three years and will be moving aggressively to get this to market.


On the government’s role in advancing autonomous vehicle technology: 

With a modest change in regulations, you can bring autonomous vehicles into use much more quickly. For example, a computer today cannot reliably tell whether a construction worker is waving at you to “stop” or to “go.” By giving these workers a different way to communicate with the car—perhaps an app, a wireless beacon, or special signage—we can make the interaction clearer. The technology and regulations should be developed simultaneously. I hope to see the U.S. government and tech community working together on this.

On hiring plans:

We’ll be growing the Silicon Valley AI Lab and our autonomous driving presence significantly. I don’t have specific numbers to announce yet, but we will be hiring a significant number of people.

Updated 12 February 2016

A Former Nest Engineer Sees a Gap Between Indiegogo and Best Buy—and Fills It With B8ta

Vibhu Norby isn’t like most of the consumers who buy high-tech hardware; he doesn’t wait until it shows up at Best Buy or Costco. He’s a classic early adopter who does most of his shopping on Kickstarter or Indiegogo, sites from which he’s willing to preorder the latest gadgets and wait months for delivery. He often buys this gear sight unseen, and realizes only after delivery that sometimes the product shipped doesn’t quite fulfill the promise of the product pitched. But that was okay with him.

He didn’t think much about how other people bought gadgets, or brick-and-mortar retail in general. He was a software developer doing software startups, and software was easy to sell online.

Then he landed at Nest (now part of Google), as an engineer for the smart thermostat company. There he discovered how tricky it is to sell hardware to the average consumer, who, unlike him, wants to see it, touch it, and ideally, try it at a store before spending a couple of hundred dollars to take it home.

“Nest,” Norby said, “did a good job of getting the product onto retail shelves, but inventory wasn’t being kept up, store associates weren’t being trained, and boxes were ending up scattered all over the floor. It was frustrating.”

Norby talked to his peers at other young consumer electronics hardware companies, and found that they were even more frustrated—they couldn’t even get their products into normal retail stores. “One guy recently told me Apple told him it’d be six months before they could schedule a meeting; Fry’s told him it would take them a year to decide.”

In an industry in which you can build a prototype out of an Arduino kit in hours, and get a new product on a manufacturing line in China in months, nothing should take a year, Norby thought. “Retail,” he says, “needs to be fixed. It’s in a lot of trouble, and if it’s not fixed, it will take hardware manufacturers down with it.”

Last February, Norby left Nest on a mission to fix retail. And in December he opened his first store, B8ta, in downtown Palo Alto.


Coding Without a Net at Yahoo, Part Two

In December, I reported on a frank discussion I’d had with Yahoo’s chief architect, Amotz Maimon, and the company’s senior vice president of science and technology, Jay Rossiter, on their decision to eliminate the quality assurance team. The idea, they said, was to force engineers to develop tools to better check their own code, and to think about their jobs differently, as part of a larger effort to shift the company’s software development approach from batch releases to continuous delivery. Maimon told me the approach was “100 percent working,” and Rossiter said it had reduced, rather than increased, the number of problems that went live.

That post triggered a lengthy and sometimes heated discussion—in the comments on the post itself, as well as on Slashdot and on Hacker News—about the role of quality assurance in software development today. The commenters had much to say about their own experiences, about quality assurance pro and con, and about Yahoo’s products. A few examples:

“They didn't STOP testing, they just automated it. Our company did the same years ago. Literally a one-button, no-monitoring process to: build across multiple architectures, test for several hours across each of them, package up the releases and (with a second button press) release to the web site. This is not hard, it just requires commitment to keep it maintained and to acknowledge it does not come for free (you can't just fire your QA team and expect the engineers to develop it in their free time).”

“The point is not to ‘remove QA’, but quite on the contrary, to remove the BARRIER between engineering and QA, to shorten the feedback and accountability loop. More, better QA, with less overhead.”

“This is the most stupid thing ever.... Of course there will be fewer bugs found if there are no testers!!! Doesn't mean to say that they aren't in the software!!!!”

“I have been using Yahoo and wondered how come I started facing issues in using the emails. Now I got the answer.”

The commenters also had some key questions. I went back to Yahoo’s Maimon for answers.

Q: Given that developers are now doing their own testing, were their project loads changed to allow time for this?

Maimon: We asked developers to invest in test automation, not manual testing. There was an initial effort, which we executed without any major schedule changes; it built on the work of our (former) QA team, who had developed the test automation process. Compared with our manual testing efforts, the automated testing process increased the overall speed and quality of results, and eliminating slow manual testing from the pipeline raised our overall speed and productivity, so project loads saw no significant impact. Moving to continuous delivery also lowered the “unit of change,” or size of the changes pushed to production. We pushed multiple changes a day, but each change was smaller and simpler, which reduced complexity and risk in the release process while improving quality.
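Maimon didn’t detail Yahoo’s tooling, but the idea is easy to sketch: checks a QA engineer once ran by hand become scripts the pipeline runs automatically on every push. Here’s a minimal, hypothetical illustration in Python (the function and tests are mine, not Yahoo’s):

    # Hypothetical example, not Yahoo code: a tiny production function
    # plus the automated regression tests that gate its release.
    def normalize_email(address: str) -> str:
        """Canonicalize an email address before storing it."""
        return address.strip().lower()

    # A test runner such as pytest executes these on every push; any
    # failure blocks the change from reaching production, replacing a
    # manual QA pass.
    def test_lowercases_address():
        assert normalize_email("User@Yahoo.com") == "user@yahoo.com"

    def test_strips_whitespace():
        assert normalize_email("  user@yahoo.com ") == "user@yahoo.com"

Because each “unit of change” is small, a failing check points to a correspondingly small diff.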

Q: Did Yahoo need to add developers?

Maimon: A certain portion of the QA people converted to developers, but we did not need to grow the organization further as a result of the change. Since productivity went up, we were able to get more done with the same number of people.

Q: Do you have any data/numbers to back up claims that the change made for fewer errors and a faster development cycle?

Maimon: We measure all of these, but cannot release the actual numbers. The number of software updates pushed to production systems went up by four to five times; the overall number of incidents went down, as did the number of change-related incidents—that is, incidents in which a software change pushed to production causes a failure. Overall, the relative number of software change-related failures went down by about 80 percent.

Q: Finally, do you have any evidence that the change made the development job “more fun?”

Maimon: Developers like speed, fast exposure of new development, and fast real-user feedback. As such, they liked the change once the initial effort was done.

Did Stephen Curry Inspire ESPN’s Virtual 3-Point Line?

Nearly 20 years ago, ESPN, the sports broadcasting network, began displaying a yellow virtual first down line when broadcasting football games on television. Developed by Sportvision, a small Silicon Valley company, that yellow line initially mystified fans: “Is it on the field, or not?” millions of viewers wondered.

These days, we can’t imagine watching football on TV without knowing exactly where that first-down line is. And that technology spawned a host of virtual graphics that augment sports action for onscreen viewers—most recently, the America’s Cup races. (Sportvision founder Stan Honey detailed that technology in “The Augmented America’s Cup.”)

Tonight, a new virtual line hits the TV screen: a virtual 3-point line for basketball, debuting with the tipoff of ABC’s prime-time broadcast of a National Basketball Association game pitting the San Antonio Spurs against the Cleveland Cavaliers. Unlike football’s first-down line, which constantly moves as teams advance the ball, the 3-point line is painted on the basketball court and never moves. But after the Golden State Warriors won last year’s NBA championship and rattled off 24 consecutive wins this season before suffering their first loss—feats due in no small measure to the long-distance shooting wizardry of Warriors point guard Stephen Curry—ESPN decided to put 3-point shots in a virtual spotlight and make it immediately clear whether any attempt is successful.

The network’s “Virtual 3” technology lights up the line for every 3-point shot attempt. The illumination is turned off immediately if the player misses; if the ball goes in the basket, the line remains lit up until the ball is handed over to the other team. ESPN developed the technology in house, at the company’s Princeton Visual Technology Lab.
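ESPN hasn’t described the software itself, but the on-air behavior amounts to a simple state machine. A minimal sketch of that logic, in Python (the names are my invention, not ESPN’s):

    # Toy sketch of the Virtual 3 on/off behavior described above;
    # ESPN has not published its implementation.
    class VirtualThreeLine:
        def __init__(self):
            self.lit = False

        def shot_attempted(self):
            # Every 3-point attempt lights up the line.
            self.lit = True

        def shot_missed(self):
            # A miss turns the illumination off immediately.
            self.lit = False

        def possession_changed(self):
            # After a make, the line stays lit until the ball is
            # handed over to the other team.
            self.lit = False

    line = VirtualThreeLine()
    line.shot_attempted()      # shot goes up: line lights
    line.possession_changed()  # shot was good; light stays on until the turnover
    print(line.lit)            # False: the other team now has the ball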


A High School Physics Teacher Turns Telescope Repairwoman—at the South Pole

Val Monticue’s undergraduate research project when she was an engineering student at Harvey Mudd College, in Claremont, Calif., looked at ways to build a telescope that would work despite being mounted on a 30-meter tower and buffeted by high winds in Antarctica. Just as important to the project was getting the telescope there in one piece. She was proud of the work she did, designing a control system that could keep the telescope pointed precisely in spite of the challenging environment.

After she graduated in 2006, though, Monticue’s career path took her away from engineering: first to the Brattle Group as a research analyst, and then, in 2008, into education. Monticue is now a physics teacher at Pinewood High School in Los Altos, Calif., in the heart of Silicon Valley.

Meanwhile, the Antarctic telescope project rolled forward, with the first generation, Bicep1, deployed in January 2006, and the third in the series installed in January 2015. (Bicep stands for Background Imaging of Cosmic Extragalactic Polarization; the systems are being used to investigate the inflation theory of the expansion of the universe after the Big Bang.)

Monticue informally kept track of Bicep’s progress, but says she had found her true calling in teaching—which, she says, isn’t that much different from engineering. “The iterative design process used in engineering is the heart and soul of teaching; you are always looking at what you’ve done before, what worked, and how to make it better. My engineering degree taught me more about teaching than my credentialing classes.”

Still, that didn’t mean she didn’t sometimes miss more traditional engineering. And so she applied for a summer fellowship through the Industry Initiatives for Science and Math Education program. Because of the several papers on the Bicep telescope that she had contributed to back in her college days, the program was able to place her at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford University. She spent the summers of 2014 and 2015 there, working on Bicep3.

Over the first summer, she helped dismantle and package the telescope assembly to prepare it for the cross-country shipment from California to Massachusetts for testing at Harvard. A year later, Monticue joined a group trying to figure out how to improve the telescope’s refrigeration systems. The telescope had already been deployed in Antarctica, but it was not working as well as expected due to an inability to consistently keep the microwave detectors at cryogenic temperatures.

Monticue was wrapping up her work at Kavli in 2015 when she got a surprising invitation: Would she like to go to the South Pole and work directly on the telescope repairs? She took a leave of absence from her teaching position, and on 7 November, left for a six-week trip to Antarctica.

There, she helped disassemble the telescope, check for leaks in its vacuum jacket, and run tests of different types of refrigeration systems at the various tilt angles of the telescope. “I never got near the real innards,” she said. “People who had worked much longer with the telescope did that. But we were all improving the structure, adding sensors, deciding how to adjust things. It was a matter of fixing lots of little fiddly bits to improve the whole system.” She’s still waiting to hear whether or not the repair effort worked.

On her last day at the Pole, Monticue helped put the whole thing back together.

“The engineering was done, and it was ready for the science to happen,” she says. “Turns out throughout my career I’m always there for the engineering, but not the science.”

She did attempt to conduct a few scientific experiments—at the high school physics level—while at the South Pole, using gadgets three companies (some can’t be named due to nondisclosure agreements) hope to eventually sell to physics educators. Among these are UV sensors, a pyrometer, and the PocketLab package of sensors.

“I tried to run a gravity experiment, but the data was noisy and hard to analyze. I was able to play with the magnetic field; in most places on Earth it is strong side to side, but at the South Pole it is strong up and down, so that was interesting.” Unfortunately, she says, she couldn’t save any data, so she wasn’t able to bring it back for use in the classroom. The problem was the lack of significant onboard storage: the equipment was intended to connect to the Internet wirelessly, via apps on a smart mobile device, and the South Pole base is a wireless-free zone, because of the potential interference of wireless signals with telescope data.

Still, Monticue said, “I did end up stress testing the equipment in the harsh Antarctic environment, and it did keep working.”

Would Monticue go back for another stint as a telescope repairwoman? In a heartbeat, she says, as long as her school doesn’t mind. “I want to go back, but my career isn’t being an engineer at the South Pole, my career is being a physics teacher at Pinewood.”

Lily’s Flying Camera Drone Is Flying Off of Virtual Shelves

Last May, Lily Robotics was a five-person startup tucked into a garage behind a crowded and run-down hacker hostel in Atherton, Calif. Its young cofounders, Antoine Balaresque and Henry Bradlow, were fresh out of the University of California, Berkeley. Their vision was to build something that looked a lot like a drone but functioned as a flying camera, no piloting skills required. At the time, I profiled them as the quintessential Silicon Valley startup—long on enthusiasm and short on cash—and described their technical approach: multiple on-board cameras with independent microcontrollers and a video processor to guide and stabilize the craft as well as to shoot videos.

This month, I checked in on Lily. In spite of a few technical bumps in the road, the company is flying high.

In December, Lily announced that it had closed an investment round of $15 million, mostly from Spark Capital, but also from individual angels including musician Steve Aoki and former quarterback Joe Montana. This month, the company reported $34 million in pre-sales (at $499 to $799, the company has been steadily ramping up its preorder price to move towards its planned $999 list price). Lily now has nearly 40 employees, and has moved to a large office in San Francisco’s South of Market district, a couple of blocks from Pinterest. Among the new hires, said Lily’s current head of communications, Kelly Coyne, are Doug Chan, former head of camera operations at Nest Labs and vice president of operations at Dropcam, and a manufacturing team that worked together on the Flip Video cameras and the Dropcam.

The company has already built 500 flying cameras and put most into the hands of beta testers. Early feedback shows that the cameras are being used far less for action sports than for taking videos of family and pets, Coyne said. (That is good news for Lily, because the pet video market is a lot bigger than the action sports market.)

The company, she said, has also been surveying pre-order customers. “Almost all of them,” Coyne said, “are first-time drone users who would never buy a drone. They are seeing this as a flying camera, something they are much more willing to incorporate in their lives.”

Those technical glitches? They’ve involved the software, mostly the flight controls. “Making something that is following you around taking video that is perfect and seamless when you move left and right, or stop or jump, takes a lot of work to get perfect,” Coyne said. In a letter to its pre-order customers explaining that the products would ship this summer instead of in February, as originally planned, the company said, “We hit roadblocks with our flight software. Component optimizations required us to redesign core parts of our flight software to achieve smoother and more stable flight.” The statement also indicated that the company discovered it needed to add a sonar sensor for flight stability. (The original sensor list included an accelerometer, a gyroscope, a barometer, a magnetometer, and a GPS device, but stabilization was mostly done using downward-looking cameras and image processing.) And Lily discovered it needed to upgrade the hardware it had been using for image processing to better track the subject being photographed.
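Lily hasn’t released its flight code, but the reason a sonar sensor helps is easy to illustrate: a barometer reports altitude everywhere but drifts and wanders, while sonar gives a crisp reading only near the ground. A toy blend of the two, in Python (all names and numbers are mine, not Lily’s):

    # Toy illustration, not Lily's flight software: blend a drifty
    # barometric altitude with short-range sonar for stable low flight.
    SONAR_MAX_RANGE_M = 5.0  # assumed usable sonar range
    SONAR_WEIGHT = 0.8       # trust sonar heavily when it has an echo

    def fused_altitude(baro_m, sonar_m):
        """Return a single altitude estimate, in meters."""
        if sonar_m is not None and sonar_m <= SONAR_MAX_RANGE_M:
            return SONAR_WEIGHT * sonar_m + (1 - SONAR_WEIGHT) * baro_m
        return baro_m  # no sonar echo: fall back to the barometer

    print(fused_altitude(2.3, 2.0))    # near the ground: 2.06, mostly sonar
    print(fused_altitude(12.7, None))  # too high for sonar: 12.7, barometer only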

With all this going on, the founders have been busy—but not too busy to move out of that hacker hostel to more comfortable housing closer to Lily’s new office (though Coyne admits that Bradlow just moved into a nicer hacker hostel).

Is Velo3D Poised to Revolutionize 3-D Printing—and Robotics?

Velo3D, based in Santa Clara, Calif., has $22.1 million in venture investment to do something in 3-D printing: That makes it fourth among 2015’s best-funded stealth-mode tech companies in the United States, according to CB Insights. That dollar figure is about all the hard news that has come out of the startup, founded in 2014 by Benyamin Butler and Erel Milshtein. But job postings, talks at conferences, and other breadcrumbs left along Velo3D’s development trail have created a sketchy outline of the company’s plans.

Consider which 3-D printing technology is ready for disruption: metal. 3-D printing of plastics took off after 2009, when a key patent that covered the deposition technology expired; we now have desktop printers for 3-D plastic objects as cheap as $350. Printing of metal objects—done regularly in industry, particularly aerospace—uses a different and, to date, far more expensive technology: selective laser sintering. This technology melts metal powders into solid shapes; it requires high temperatures and far more complicated equipment than what’s found in the deposition-style printers used for plastic. The patent for this technology expired in early 2014—just before the formation of Velo3D. At the time, industry experts indicated that cheap metal printers wouldn’t be coming anytime soon, but rather would arrive only after “a significant breakthrough on the materials side,” OpenSLS’s Andreas Bastian told GigaOm in 2014. Could Velo3D’s founders have that breakthrough figured out?

They may have. The two founders, Butler and Milshtein, have developed technology together before. Both worked at First Solar, Solyndra, and Applied Materials, and along the way they picked up joint patents on various mechanisms for rotating semiconductor materials, cutting them, and directing beams of radiation. Clearly they know something about state-of-the-art manufacturing processes.

And if they haven’t yet found the answer, they certainly think they can get there, as their latest want-ads show: The company has listings posted for a lead mechanical engineer experienced in the implementation of laser scanning systems who is interested in “revolutionizing metal manufacturing.” It’s also looking for multiple recent graduates in physics or materials science with lab experience to implement its “disruptive vision of metal 3D printing.” And the company last year petitioned for an H-1B visa for a senior metallurgist.

Now what is Velo3D going to do with a revolutionary new metal 3-D printing technology, once it has come up with one? Based on the name, my first guess was custom bicycle parts, but the signs point instead toward robot parts. At least, 3-D printed metal robots are what Golem Robotics founder Ofer Shochet thinks is the next big thing. Shochet, who just happens to be one of Velo3D’s directors, led a panel on the subject at the RoboUniverse conference this past November. The world, Shochet said, as reported by 3DPrint.com, is going to see a greater convergence of robotics and 3-D printing. Panelist Ryan Sybrant from Stratasys went on to explain that 3-D printing is key for robotics because it allows structures that couldn’t be made otherwise, lighter parts for more dynamic robots, and robots that can be easily customized.

One other breadcrumb along the trail: Velo3D shares an address with an established 3-D printing company—Octave Systems, a distributor of multiple brands of home and hobbyist 3-D printers and supplies. That’s a handy roommate to have if you’re trying to understand and disrupt an industry.

Tech Salaries Jump 7.7%

It’s a good time to be a techie in the U.S. That’s the takeaway of a report from job search firm Dice, which released its 2015 Salary Survey this morning. According to the Dice data, the average salary of a technology professional in the United States climbed 7.7 percent from 2014, to $96,370 a year. Average salary increases were highest for entry-level jobs; more experienced tech professionals, however, were more likely to receive bonuses, with the average bonus hitting $10,194, up 7 percent from 2014.
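For context, that 7.7 percent jump implies a 2014 average of roughly $89,500; a quick back-of-envelope check:

    # Back-of-envelope check of the Dice figures.
    salary_2015 = 96_370               # reported 2015 average, in dollars
    salary_2014 = salary_2015 / 1.077  # undo the 7.7 percent rise
    print(round(salary_2014))          # about 89,480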


Sensors Slip into the Brain, Then Dissolve When the Job Is Done

Five days. That’s how long intracranial pressure and temperature typically need to be monitored in the case of traumatic brain injury. And that’s at least how long flexible, dissolvable sensors created by a research team at the University of Illinois led by professor John Rogers will operate accurately.


Move Over, Wearables. Make Way for Implantables

Wearable monitors for health and fitness seemed to be everywhere in the exhibit halls and on the conference stages at CES 2016. But while this generation of biometric monitoring devices goes mainstream, a little Silicon Valley company is working on what could be the next generation of body sensing technology: the injectable.

In a small suite high above the CES convention floor, South San Francisco-based Profusa last week demonstrated the Lumee Oxygen Sensing System, the first of what it expects to be a line of biocompatible sensors. This tiny, flexible sensor is about the thickness of a few human hairs and the length of a piece of long-grain rice. It’s made of hydrogel, a substance similar to the material in contact lenses, but is permeated with fluorescent dye. It’s designed to sit under the skin to monitor the levels of oxygen in the surrounding tissue. The company expects to market the device to help people monitor peripheral artery disease, wound healing, and, eventually, for athletes, muscle performance. Profusa has been in stealth mode since 2009, supporting its research with approximately US $10 million in grants and $15 million in venture financing, CEO Ben Hwang told me.


View From the Valley

IEEE Spectrum’s blog featuring the people, places, and passions of the world of technologists in Silicon Valley and its environs.
Contact us: t.perry@ieee.org

Senior Editor
Tekla Perry
Palo Alto