WebAssembly Will Finally Let You Run High-Performance Applications in Your Browser

Online applications could work as smoothly as the programs you install on your machine


What if you could share a computer-aided design (CAD) model and even allow a colleague to manipulate it from afar? “Click on this link, check out my design, and feel free to add more holes or fill some in,” you might say. You wouldn’t have to instruct your distant coworker to install special software or worry about whether her operating system could run it. Imagine that all your programs and data were stored in the cloud and that even computationally intensive applications like multimedia editing ran just as well in your browser as they would if they had been installed locally.

Since the early days of the World Wide Web, a lot of smart, passionate people have wanted to make it into a vehicle for running almost any kind of program. What makes that dream so tantalizing is that the Web is different from other software platforms. It’s defined by open standards, so anyone can build on it. It’s not owned by any company, so developers are beholden only to their users. And it’s constructed largely around open-source technologies, so it has the potential to be very democratic.

Over the past three decades, a whole generation of developers has worked to make this vision a reality. We’ve added new capabilities to the Web, like audio and video streaming, 2D and 3D graphics, typography, peer-to-peer communication, data storage, offline browsing, and multitouch, location, and camera inputs. But we continue to struggle with performance, specifically the ability to run Web applications as quickly as non-Web applications.

Seven years ago, the team I work with at Mozilla chose to focus on one of the oldest and hardest obstacles to progress: the performance of the JavaScript programming language. JavaScript is one of three pillars of modern Web pages, which use Hypertext Markup Language (HTML) to describe their layout, cascading style sheets (CSS) to specify how their components are styled, and JavaScript code to provide a variety of rich interactive experiences.

We overcame this hurdle in two ways. First, my colleagues and I were able to make giant, complex programs run almost as fast inside a browser as they do when installed directly on computers. Then, we worked with other browser companies to create a new Web standard called WebAssembly, which makes safe, efficient-by-design code an official part of the Web.

Before we began this journey, Web developers had two choices. They could use existing Web technologies—HTML, CSS, and JavaScript—to create an application that ran on existing Web browsers. Or they could create a browser plug-in that users would have to download and install. Neither approach was ideal.

The first option was convenient, simple, and safe for users. But it delivered a janky experience for compute-heavy applications: Visuals would freeze, audio would drop, and apps would take seconds or longer to respond to user input. Plug-ins let browsers run programs—even demanding video games—smoothly and quickly, but they required users to download and install software, and such downloads could inject malicious code into the user’s system. What’s more, browser plug-ins ran only on the specific browsers they were developed for. So if you were using another type of browser, you were out of luck.

WebAssembly now provides a new option. It takes advantage of the technologies already present on the Web—JavaScript, the engines that run JavaScript inside browsers, and established browser security properties—and optimizes things so that code can be read and executed faster.

This approach made it easier to gain support from other browser companies because it meant less work and less risk for them. Today, Mozilla Firefox, Google Chrome, Apple Safari, and Microsoft Edge all provide support for WebAssembly in their most recent versions. So you can expect the Web to be immensely more powerful from here on out. Here’s how that change came about.

Like a lot of stories about tech innovation, this one started with video games. The thinking from team leader Martin Best of Mozilla was this: If we could make games run well on the Web, other computationally intensive applications would soon follow.

When Mozilla began its games program, a lot of its engineers focused their efforts, naturally enough, on JavaScript performance. It was an intense and exciting time, with people using cutting-edge academic research and an array of optimization techniques from industry. With these, our team was able to speed up JavaScript so it could deliver something close to a native-software experience, the gold standard.

The problem was that these performance gains weren’t consistent. Sometimes the code ran really fast, but sometimes it didn’t. The design of JavaScript made it difficult to determine when and where the slowdowns were occurring, and it was almost impossible to predict when they’d crop up.

My Mozilla colleague Alon Zakai had a wacky idea that could address such problems: He wanted to take a game he had helped write in C++ and convert it to JavaScript code that would run well on the Web. This was in 2010, and back then, converting C++ to JavaScript was unthinkable.

Games written in C++ can be huge programs, with millions of lines of code, far too many for JavaScript to handle well in those days. Worse, games stress the underlying platform, because they have several different components—audio, video, physics, artificial intelligence—that need to run in a coordinated way. It was a complete mystery to everyone how all these computationally intensive functions could make the leap from a fast and powerful language like C++ to JavaScript, a Web language originally designed to view lightweight hypertext documents.

At the time, even Zakai thought it was a strange idea. But he didn’t want to rewrite his game line by line in JavaScript, and he was curious: Could it be done? So he started working to adapt an open-source tool that could translate C++ code into JavaScript automatically. He called his project Emscripten, which is a mash-up of “script” from JavaScript and “embiggen” from the TV show “The Simpsons.” (Springfield’s motto: “A noble spirit embiggens the smallest man.”)

Zakai worked on Emscripten for two years, first as a side project, and then as part of Mozilla’s games program. And team leader Best discovered that large game companies elsewhere were experiencing the same pain as Zakai: They didn’t want to spend time and money rewriting their massive games in JavaScript so that they would run on the Web.

So Zakai kept going with Emscripten until it worked. The first time he demonstrated his C++ game running in the browser, it shocked everyone in the room. When it popped up on the screen, all the components were there: graphics, audio, a changing 3D perspective, multiplayer interactions, and a choice of weapons. If there had been any doubts before, his demo put them to rest. In seconds, the other team members went online, clicked into the game, and started shooting rockets at one another in happy amazement.

What was even more surprising was that this JavaScript code ran incredibly fast—faster even than handwritten JavaScript. You would have expected that code automatically translated from one language to another would be messier and slower than code written by experienced programmers, who can analyze and optimize as they go. But that wasn’t the case.

What was going on? To understand the answer, you need to know that writing C++ requires programmers to create objects with precise characteristics. By comparison, JavaScript is permissive and highly flexible. It turns out that during the translation step, Emscripten preserved the strictness of the C++, allowing the resulting JavaScript to run more efficiently. It was a monumental result.

With Emscripten basically working, Best wanted to see what it could do for commercial software. Could it translate a large, industrial-strength game to run on the Web? Developers of a popular game engine called Unity invited us to work alongside them in an experimental sprint to see how far we could get.

To everyone’s surprise, our combined team got the Unity game engine online inside a week. But the demo took half a minute to load, and when it ran, it stuttered mightily. We soon diagnosed the problem: The browser engine was getting bogged down because it needed to parse, analyze, and optimize the Emscripten-generated JavaScript as it was running.

Here’s where my part comes in. I work on the performance of the JavaScript engine in Firefox. As I studied the problem, I discovered that it is possible to form a contract of sorts between Emscripten and the engine that runs the code Emscripten produces: If you feed the engine only certain specific patterns of input, the engine can reliably run it really, really fast.

I shared this discovery with Best and suggested that Zakai modify Emscripten to output only those patterns that run fast. Meanwhile, I would optimize the JavaScript engine in Firefox to run the resulting code even faster. Best was almost as excited as I was. With a “Make it so!” from him and Brendan Eich, the creator of JavaScript and then chief technical officer of Mozilla, I started pursuing this new avenue.

Working with Zakai and enlisting the help of Mozilla research director Dave Herman, we were able to formalize the permitted JavaScript patterns, to make the contract between Emscripten and the browser completely clear. We named the resulting subset of JavaScript asm.js, which became a distinct language in itself (“asm” being the programmer abbreviation for Assembly, a type of code that is very close to the CPU’s machine language, and “.js” being short for JavaScript).

To understand why asm.js made things run faster, you need to know that JavaScript is what’s called a dynamically typed language. That means that the kind of data denoted by a variable isn’t necessarily known until the program is already running, which makes it harder for a computer to optimize code ahead of time.

To make JavaScript faster, we needed to create a static type system for the language, meaning that the code has to declare up front what kind of data it acts on. Is this variable a number, a string of characters, or a more complicated object? Static typing asks the programmer to do some preliminary work to answer that question.
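In asm.js, those up-front type declarations are written with ordinary JavaScript coercion operators, such as `|0` for 32-bit integers. Here is a minimal, illustrative sketch—the module itself is a toy, but the `"use asm"` directive and the coercions follow the real asm.js conventions:

```javascript
// A minimal asm.js-style module. The coercions double as type annotations
// that an optimizing engine can check before the code ever runs.
function AsmAdd(stdlib, foreign, heap) {
  "use asm";             // opt in to the asm.js subset
  function add(x, y) {
    x = x | 0;           // declares parameter x as a 32-bit integer
    y = y | 0;           // declares parameter y as a 32-bit integer
    return (x + y) | 0;  // declares the result as a 32-bit integer, too
  }
  return { add: add };
}

const math = AsmAdd(globalThis, {}, undefined);
math.add(2, 3); // returns 5
```

Crucially, in a browser without asm.js optimizations this code simply runs as ordinary JavaScript, which is what made the subset so easy to deploy.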

After a few months of programming, I had the optimizations for asm.js working in Firefox, and we were ready to try it out, this time on another equally beefy game platform, Epic’s Unreal Engine. It took just a few days to get the entire multimillion-line code base running inside the Firefox browser. The required work involved things like audio and graphics, which browsers support differently than a computer’s native operating system does.

Our results were better than expected. The animation was buttery smooth right from the start. When we saw that, there were cheers and high fives all around. It was as though the final puzzle piece to years of work had fallen into place.

Epic’s CEO, Tim Sweeney, commented that he’d known it’d be possible to run such games in a browser, but he didn’t think it would come so soon. A few weeks later, we demonstrated some Unreal Engine games at the 2013 Game Developers Conference, showing for the first time a major game engine running well in a browser without the aid of plug-ins.

But we didn’t want asm.js or the optimizations used to run it to be exclusively for Firefox users. So we published the full specification of asm.js and actively encouraged other browser makers to add the same optimizations we were using. From 2013 to 2015, all four of the main browser engines improved performance dramatically in the way they ran asm.js, with the Microsoft Edge browser importing parts of Firefox’s open-source asm.js-optimization code.

Soon games translated by Emscripten into asm.js started appearing online. Popular Facebook games like Papa Pear Saga, Top Eleven, Cloud Raiders, and Candy Crush Saga use asm.js under the hood. And as predicted, once games demonstrated what was possible, many other uses followed.

Facebook, for example, now uses asm.js to compress users’ images in the browser before upload, saving bandwidth. Adobe compiled a core image-editing library written in C++ to asm.js for the Web version of Lightroom. And Wikipedia uses asm.js to play video formats when the browser doesn’t provide built-in support. Other Web uses now include computer vision, 3D mapping, user-interface design, language detection, audio mixing, digital signal processing, medical imaging, physical simulation, encryption, compression, and computer algebra.

These developments were wonderful, but it eventually became clear that to move forward, we had to create a new Web standard that would be more efficient to load than asm.js code. In particular, the new standard would replace spelled-out names for variables and instructions with a much more compact representation: binary numbers. The new standard could also allow us to address some smaller issues that users had found with asm.js—they wanted features like 64-bit integer arithmetic and the ability to break an application into smaller chunks that could be downloaded and optimized separately. Thus, the idea for WebAssembly was born.
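To see how compact the binary representation is, consider a complete WebAssembly module that exports a single function for adding two 32-bit integers. The bytes below are hand-assembled purely for illustration—real modules come out of compilers such as Emscripten—and `WebAssembly.instantiate` is the standard JavaScript API for loading them:

```javascript
// An entire WebAssembly module, encoded as raw bytes. It exports one
// function, "add", that returns the sum of two 32-bit integers.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                   // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00,                   // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, // type section: (i32, i32) ...
  0x01, 0x7f,                               //   ... returning one i32
  0x03, 0x02, 0x01, 0x00,                   // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, // export section: the name "add" ...
  0x00, 0x00,                               //   ... bound to function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,             // code section: one function body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,       // local.get 0, local.get 1, i32.add, end
]);

WebAssembly.instantiate(wasmBytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // logs 5
});
```

The whole module is 41 bytes; the equivalent asm.js source, with its spelled-out names and coercions, is several times larger before compression.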

At the outset of Mozilla’s games program, there were competing proposals for standards of this sort, but none were quite right. You might think that one more proposal added to the fray would also go nowhere. But with asm.js already being widely used, we had the unique opportunity to capture that momentum and channel it into WebAssembly. If some browsers took a while to support WebAssembly (“wasm”) code, it would be easy for developers to produce both asm.js and wasm, just by flipping a switch in Emscripten. So they could always use asm.js, which ran everywhere, and send wasm only to browsers that could run it.
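That fallback scheme needs only a simple runtime check on the page. A sketch of the idea (the file names here are hypothetical) might look like this:

```javascript
// Pick which build to fetch: the compact .wasm build for browsers that
// support WebAssembly, and the universally runnable asm.js build otherwise.
function pickBuild() {
  const hasWasm =
    typeof WebAssembly === "object" &&
    typeof WebAssembly.instantiate === "function";
  return hasWasm ? "app.wasm" : "app.asm.js";
}
```

In any engine that ships WebAssembly, this check selects the wasm build; everywhere else, the asm.js build keeps working unchanged.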

The next step was to convince the other browser makers that WebAssembly was a good idea. In some cases, this was surprisingly easy, because engineers at these companies had already been contemplating similar ideas themselves. Other cases involved long discussions over beers, flights to meet teams and convince managers, and a very kumbaya sit-down at San Francisco’s Yerba Buena Gardens during a Game Developers Conference. By 2015, everyone was finally on board and ready to officially embark on creating a new Web standard.

That process began with the creation of a World Wide Web Consortium (W3C) Community Group made up of engineers from the four major browser companies and other interested parties. We agreed that it would be unwise to try to solve every problem at once, because it would then take five years or more just to produce the specification. So from the outset our group adopted the goal of specifying and shipping what startup mavens call a “minimum viable product,” or MVP. We could then iteratively improve it based on feedback.

This brings us to the present. Browser companies have agreed on an initial MVP version of WebAssembly and have released compatible implementations. Emscripten can take code written in C++ and convert it directly into WebAssembly. And there will be ways in time to run other languages as well, including Rust, Lua, Python, Java, and C#. With WebAssembly, multimillion-line code bases can now load in a few seconds and then run at 80 percent of the speed of native programs. And both load time and execution speed are expected to improve as the browser engines that run the code are made better.

The WebAssembly Community Group has big plans. We are currently working to add features that can exploit the parallel processing that is possible with multicore CPUs. And we want to provide first-class tooling and performance for many languages, thereby giving developers the same freedom they have when they are writing code for native platforms.

Looking back to the original dream of allowing the Web to run all manner of programs just as well as if they had been installed locally, my colleagues and I can see there is still a lot of work left to do. But with WebAssembly, we’re happy to be one giant step closer to that goal.

This article appears in the December 2017 print issue as “Turbocharging the Web.”

About the Author

Luke Wagner is a research engineer at Mozilla.
