The New Brain Behind the Whiteboards—and More—for HBO’s "Silicon Valley"

This year many of the formulas, documents, and snippets of engineer-speak on HBO’s "Silicon Valley" will be coming from Stanford postdoc Dmitri Pavlichin

Dmitri Pavlichin poses with whiteboards he created for the real world; he also does the math for HBO's Silicon Valley
Photo: Tekla Perry

When the HBO show that became “Silicon Valley” was still in development, and its creators decided its fictional startup would be in the compression business, they turned to Stanford professor Tsachy Weissman to come up with some novel and at least somewhat plausible compression technology. Weissman brought in electrical engineering graduate student Vinith Misra to help; Misra went on to field many technical questions for the show in its first two years, first as a student and then as a researcher working for IBM on the Watson team.

IBM was just fine with that relationship. But last year Misra changed jobs—he is now a senior data scientist at Netflix—and with HBO a Netflix competitor, Netflix was not so fine with the consulting arrangement. It was time to pass the baton. And who better to pass it to than another student in Weissman’s Stanford lab—the one now seated at Misra’s former desk?

That student, Dmitri Pavlichin, is having a great time with the job.

“The gig is pretty irregular,” he says, “a month or two of nothing, then an intense couple of days, in which I have to put together something that is going to be included in the show, like a paper, or a whiteboard. They’ll give me a snippet of dialog to look at, or tell me that someone finds a document and I have to make the document be kind of interesting.”

The whiteboards themselves are redrawn, based on Pavlichin’s text or sketches (the notes on the whiteboard in the photo above, however, are in Pavlichin’s own writing).

Pavlichin isn’t the only compression expert consulting for the show; the number of consultants, he says, has expanded since Season 1.

In real life, Pavlichin, who has a Ph.D. in physics and wrote a thesis on quantum optics, is now a postdoc researching genomic compression—that is, the most efficient ways to compress the explosion of genomic data created by DNA sequencers. Will any of that technology make it onto the show? Pavlichin can’t say anything specific about upcoming episodes, but promises this season, which starts Sunday, will have more technical content than Season 2, which focused more on the business issues involved in creating products based on Pied Piper’s compression algorithm than on the algorithm itself.
