Can HPE’s “The Machine” Deliver?

A 320-terabyte prototype, expected in 2016, will showcase the company’s bid to reinvent computing

HP has never shied away from big names for its computers. There are high-performance servers named Apollo and optimized computing systems called Moonshot. And then there’s The Machine.

When Hewlett-Packard Co.—now split in two—announced The Machine in Las Vegas in 2014, it presented the project as a near-complete overhaul of traditional computer architecture. Gone were the CPU-centric architecture, the slow copper communications, and the messy hierarchy of traditional memory. In their place, specialized computing cores, speedy light-carrying photonic connections, and a massive store of dense, energy-efficient memristor memory. The resulting computer, its designers say, will be efficient enough to manipulate petabyte-scale data sets in an unprecedented fashion, expanding what companies and scientists can accomplish in areas such as graph theory, predictive analytics, and deep learning in a way that could improve our daily lives.

There is nothing small about what HP promised. Now the question is what will come of the initial claims. It seems we’ll soon get a glimpse of the vision, realized in hardware; Hewlett Packard Labs (formerly HP Labs) says it hopes to unveil its first large-scale prototype of The Machine in 2016. The project is now part of Hewlett Packard Enterprise (HPE), which focuses on corporate computing (the second HP spin-off, HP Inc., sells PCs and printers).

That initial computer is a whopper by modern standards: It’s expected to be about the size of a server rack, with 320 terabytes of memory and more than 2,500 CPU cores. But one key Machine component will be missing: the memristor.

The device, long theorized as the fourth fundamental circuit element, was first demonstrated at HP Labs. HP touted it as an energy-efficient replacement for DRAM, the workhorse memory that sits close to CPUs. Like flash, memristors can also act as nonvolatile storage, retaining their data even when powered down.

HP Labs looked at ways to capitalize on these properties after the memristor was announced in 2008. In the end, the device became a key component of a larger overhaul the company calls Memory-Driven Computing. Current computers have different forms of memory, each with different advantages when it comes to cost, power consumption, and speed. Memristors with various capabilities, the thinking went, could eventually replace them all.

In his first public talk about The Machine, in 2014, the then HP Labs director Martin Fink outlined a picture of what this might look like: a computer server with as much as 160 petabytes of data—roughly five times what the Large Hadron Collider produces in the course of a year—all accessible in less than 250 nanoseconds. Such capacity would be huge, says Richard Fichera, an analyst at Forrester Research, and the access time would be on the order of 1,000 times as fast as what can be done today. But memristors have had teething pains. Although HP previously showed off wafers containing memristors, a commercial offering has yet to emerge.
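Those scale claims can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes figures not stated in the article: an LHC output of roughly 30 petabytes per year (circa 2014) and storage-tier access latencies in the hundreds of microseconds for data pools that large today.

```python
# Rough check of the scale claims from Fink's 2014 talk.
# Assumed, not from the article: LHC output ~30 PB/year (ca. 2014);
# reaching petabyte-scale data today typically means storage tiers
# with access latencies in the hundreds of microseconds.

PB = 10**15  # bytes in a petabyte

machine_pool = 160 * PB   # claimed memory pool for The Machine
lhc_per_year = 30 * PB    # assumed LHC yearly output

# "roughly five times what the Large Hadron Collider produces"
print(round(machine_pool / lhc_per_year, 1))  # → 5.3

claimed_access_s = 250e-9      # 250 ns, as claimed in the talk
storage_access_s = 250e-6      # ~250 µs storage-tier latency (assumed)

# "on the order of 1,000 times as fast"
print(round(storage_access_s / claimed_access_s))  # → 1000
```

Both claims hold up under these assumptions: the pool is about five times one year of LHC data, and 250 nanoseconds against a microsecond-scale storage tier is roughly a thousandfold speedup.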

When The Machine’s first prototype arrives, it will carry energy-hungry DRAM in their place. The memory will be a stand-in, the team says, that can be used to emulate later hardware and move development forward. “Everything the Machine team is doing is designed to get us to working, useful Machine prototypes as quickly as possible,” the group wrote to IEEE Spectrum. “We still believe memristors are the best candidate, but rather than waiting until memristors are ready before making prototype Machines, we’ll use more conventional memory to learn about memory fabrics and how operating systems, analytics, and applications should change now rather than waiting.” Memristors, in principle, will be slotted in later, although no time frame has been given for when that might occur.

Grand announcements about products that are still years away are rare in the computer industry. “We don’t need to talk about stuff five years from now,” Juan Loaiza, Oracle senior vice president for systems technology, told Bloomberg Businessweek.

HPE says the project will evolve, with hardware and software building upon one another in a “virtuous cycle.” Commercial offerings, ranging from credit-card size to supercomputer scale, are targeted for 2020, but components of The Machine will be marketed as they become available.

“As a research vehicle, I think it’s a great idea. I just think they sort of painted themselves into a corner with a lot of early and aggressive hype,” says Forrester’s Fichera, who worked on server strategy at HP. “It’s going to be a real challenge for them to pull off some of the things they’re talking about.”

The memristor could have a big impact on success, Fichera says, as it is the one piece of Machine hardware exclusive to the company. “If they could build a huge server with a huge memory space and somehow allow these individual nodes to share that memory more efficiently, that could be a very advantageous architecture,” he said. “Unfortunately anyone else who wants to chase that same goal is going to have all of the same components to work with.”

But perhaps HPE, which sells servers, will go a different way. Last year, Intel and Micron announced a speedy new form of nonvolatile memory called 3D XPoint, slated to ship in 2016. Jim Handy, an analyst at Objective Analysis, says he suspects those companies could well have convinced the then-unified HP that their memory is “a better or more timely solution.”

If that’s true, other companies will have access to that same memory technology. With the ample heads-up from HP in 2014, they too could be lacing up their racing shoes.
