The Femtojoule Promise of Analog AI

To cut power by orders of magnitude, do your processing with analog circuits

12 min read

Machine learning and artificial intelligence (AI) have already penetrated so deeply into our life and work that you might have forgotten what interactions with machines used to be like. We used to ask only for precise quantitative answers to questions conveyed with numeric keypads, spreadsheets, or programming languages: "What is the square root of 10?" "At this rate of interest, what will be my gain over the next five years?"

But in the past 10 years, we've become accustomed to machines that can answer the kind of qualitative, fuzzy questions we'd only ever asked of other people: "Will I like this movie?" "How does traffic look today?" "Was that transaction fraudulent?"


Deep neural networks (DNNs), systems that learn how to respond to new queries when they're trained with the right answers to very similar queries, have enabled these new capabilities. DNNs are the primary driver behind the rapidly growing global market for AI hardware, software, and services, valued at US $327.5 billion this year and expected to pass $500 billion in 2024, according to the International Data Corporation.

Convolutional neural networks first fueled this revolution by providing superhuman image-recognition capabilities. In the last decade, new DNN models for natural-language processing, speech recognition, reinforcement learning, and recommendation systems have enabled many other commercial applications.

But it's not just the number of applications that's growing. The size of the networks and the data they need are growing, too. DNNs are inherently scalable—they provide more reliable answers as they get bigger and as you train them with more data. But doing so comes at a cost. The number of computing operations needed to train the best DNN models grew 1 billionfold between 2010 and 2018, meaning a huge increase in energy consumption. And while each use of an already-trained DNN model on new data—termed inference—requires much less computing, and therefore less energy, than the training itself, the sheer volume of such inference calculations is enormous and increasing. If it's to continue to change people's lives, AI is going to have to get more efficient.

We think changing from digital to analog computation might be what's needed. Using nonvolatile memory devices and two fundamental physical laws of electrical engineering, simple circuits can implement a version of deep learning's most basic calculations that requires mere thousandths of a trillionth of a joule (a femtojoule). There's a great deal of engineering to do before this tech can take on complex AIs, but we've already made great strides and mapped out a path forward.

AI’s Fundamental Function


The most basic computation in an artificial neural network is called multiply and accumulate. The outputs of artificial neurons [left, yellow] are multiplied by the weight values connecting them to the next neuron [center, light blue]. That neuron sums its inputs and applies an output function. In analog AI, the multiply function is performed by Ohm's Law, where the neuron's output voltage is multiplied by the conductance representing the weight value. The summation at the neuron is done by Kirchhoff's Current Law, which simply adds all the currents entering a single node.

The biggest time and energy costs in most computers occur when lots of data has to move between external memory and computational resources such as CPUs and GPUs. This is the "von Neumann bottleneck," named after the classic computer architecture that separates memory and logic. One way to greatly reduce the power needed for deep learning is to avoid moving the data—to do the computation out where the data is stored.

DNNs are composed of layers of artificial neurons. Each layer of neurons drives the output of those in the next layer according to a pair of values—the neuron's "activation" and the synaptic "weight" of the connection to the next neuron.

Most DNN computation is made up of what are called vector-matrix-multiply (VMM) operations—in which a vector (a one-dimensional array of numbers) is multiplied by a two-dimensional array. At the circuit level these are composed of many multiply-accumulate (MAC) operations. For each downstream neuron, all the upstream activations must be multiplied by the corresponding weights, and these contributions are then summed.
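To make the arithmetic concrete, here is a minimal sketch of a VMM decomposed into its constituent MAC operations. The function and variable names are ours, invented for illustration:

```python
import numpy as np

def vector_matrix_multiply(activations, weights):
    """Compute a VMM as a collection of multiply-accumulate (MAC) operations.

    activations: 1-D array of upstream neuron activations (length n_in)
    weights:     2-D array of synaptic weights, shape (n_in, n_out)
    """
    n_in, n_out = weights.shape
    outputs = np.zeros(n_out)
    for j in range(n_out):          # one downstream neuron per output
        for i in range(n_in):       # one MAC per upstream neuron
            outputs[j] += activations[i] * weights[i, j]
    return outputs

x = np.array([1.0, 2.0, 3.0])
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
y = vector_matrix_multiply(x, W)    # same result as x @ W
```

Digital and analog accelerators differ only in how they physically realize these two nested loops.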

Most useful neural networks are too large to be stored within a processor's internal memory, so weights must be brought in from external memory as each layer of the network is computed, each time subjecting the calculations to the dreaded von Neumann bottleneck. This leads digital compute hardware to favor DNNs that move fewer weights in from memory and then aggressively reuse these weights.

A radical new approach to energy-efficient DNN hardware occurred to us at IBM Research back in 2014. Together with other investigators, we had been working on crossbar arrays of nonvolatile memory (NVM) devices. Crossbar arrays are constructs where devices, memory cells for example, are built in the vertical space between two perpendicular sets of horizontal conductors, the so-called bitlines and the wordlines. We realized that, with a few slight adaptations, our memory systems would be ideal for DNN computations, particularly those for which existing weight-reuse tricks work poorly. We refer to this opportunity as "analog AI," although other researchers doing similar work also use terms like "processing-in-memory" or "compute-in-memory."

There are several varieties of NVM, and each stores data differently. But data is retrieved from all of them by measuring the device's resistance (or, equivalently, its inverse—conductance). Magnetoresistive RAM (MRAM) uses electron spins, and flash memory uses trapped charge. Resistive RAM (RRAM) devices store data by creating and later disrupting conductive filamentary defects within a tiny metal-insulator-metal device. Phase-change memory (PCM) uses heat to induce rapid and reversible transitions between a high-conductivity crystalline phase and a low-conductivity amorphous phase.

Flash, RRAM, and PCM offer the low- and high-resistance states needed for conventional digital data storage, plus the intermediate resistances needed for analog AI. But only RRAM and PCM can be readily placed in a crossbar array built in the wiring above silicon transistors in high-performance logic, to minimize the distance between memory and logic.

We organize these NVM memory cells in a two-dimensional array, or "tile." Included on the tile are transistors or other devices that control the reading and writing of the NVM devices. For memory applications, a read voltage addressed to one row (the wordline) creates currents proportional to the NVM's resistance that can be detected on the columns (the bitlines) at the edge of the array, retrieving the stored data.

To make such a tile part of a DNN, each row is driven with a voltage for a duration that encodes the activation value of one upstream neuron. Each NVM device along the row encodes one synaptic weight with its conductance. The resulting read current is effectively performing, through Ohm's Law (in this case expressed as "current equals voltage times conductance"), the multiplication of excitation and weight. The individual currents on each bitline then add together according to Kirchhoff's Current Law. The charge generated by those currents is integrated over time on a capacitor, producing the result of the MAC operation.
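The physics of one bitline can be sketched in a few lines. The read voltage, conductances, and pulse durations below are illustrative stand-ins, not measured device values:

```python
import numpy as np

def analog_mac_column(pulse_durations, conductances, read_voltage=0.2):
    """Model one bitline of an analog-AI tile (illustrative values only).

    Each row is driven at `read_voltage` for a duration encoding an activation.
    Ohm's Law:  I = V * G gives each device's current (the multiply).
    Kirchhoff:  currents on the shared bitline add (the accumulate).
    The capacitor integrates charge Q = sum_i(V * G_i * t_i) over time.
    """
    currents = read_voltage * conductances        # Ohm's Law, per device
    charge = np.sum(currents * pulse_durations)   # Kirchhoff + integration
    return charge

durations = np.array([10e-9, 20e-9, 5e-9])   # activation-encoding pulses (s)
G = np.array([50e-6, 10e-6, 80e-6])          # conductances ~ weights (S)
q = analog_mac_column(durations, G)
# the accumulated charge is proportional to the dot product durations . G
```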

These same analog in-memory summation techniques can also be performed using flash and even SRAM cells, which can be made to store multiple bits but not analog conductances. But we can't use Ohm's Law for the multiplication step. Instead, we use a technique that can accommodate the one- or two-bit dynamic range of these memory devices. However, this technique is highly sensitive to noise, so we at IBM have stuck to analog AI based on PCM and RRAM.

Unlike conductances, DNN weights and activations can be either positive or negative. To implement signed weights, we use a pair of current paths—one adding charge to the capacitor, the other subtracting. To implement signed excitations, we allow each row of devices to swap which of these paths it connects with, as needed.
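A toy model of this differential-pair trick, with made-up conductance values:

```python
def signed_weight_readout(g_plus, g_minus, activation):
    """Differential-pair encoding of a signed weight (illustrative sketch).

    One current path adds charge to the output capacitor and the other
    subtracts it, so the effective weight is the conductance difference,
    even though each physical conductance is non-negative.
    """
    return activation * (g_plus - g_minus)

# A negative weight encoded as two non-negative conductances (arbitrary units):
w = signed_weight_readout(g_plus=0.2, g_minus=0.5, activation=2.0)
# w == 2.0 * (0.2 - 0.5) == -0.6
```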

Nonvolatile Memories for Analog AI


Phase-change memory's conductance is set by the transition between a crystalline and an amorphous state in a chalcogenide glass.


In resistive RAM, conductance depends on the creation and destruction of conductive filaments in an insulator.


Flash memory stores data as charge trapped in a "floating gate." The presence or absence of that charge modifies conductances across the device.


Electrochemical RAM acts like a miniature battery. Pulses of voltage on a gate electrode modulate the conductance between the other two terminals by the exchange of ions through a solid electrolyte.

With each column performing one MAC operation, the tile does an entire vector-matrix multiplication in parallel. For a tile with 1,024 × 1,024 weights, this is 1 million MACs at once.

In systems we've designed, we expect that all these calculations can take as little as 32 nanoseconds. Because each MAC performs a computation equivalent to that of two digital operations (one multiply followed by one add), performing these 1 million analog MACs every 32 nanoseconds represents 65 trillion operations per second.

We've built tiles that manage this feat using just 36 femtojoules of energy per operation, the equivalent of 28 trillion operations per joule. Our latest tile designs reduce this figure to less than 10 fJ, making them 100 times as efficient as commercially available hardware and 10 times better than the system-level energy efficiency of the latest custom digital accelerators, even those that aggressively sacrifice precision for energy efficiency.
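The arithmetic behind these figures can be checked directly from the tile parameters quoted above:

```python
# Reproducing the throughput and efficiency arithmetic quoted above.
macs_per_tile = 1024 * 1024          # one MAC per weight in the tile
ops_per_mac = 2                      # one multiply plus one add
time_s = 32e-9                       # 32 nanoseconds per tile-wide VMM

ops_per_second = macs_per_tile * ops_per_mac / time_s
print(f"{ops_per_second:.2e} ops/s")   # ~6.55e13: about 65 trillion ops/s

energy_per_op = 36e-15               # 36 femtojoules per operation
ops_per_joule = 1.0 / energy_per_op
print(f"{ops_per_joule:.2e} ops/J")    # ~2.78e13: about 28 trillion ops/J
```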

It's been important for us to make this per-tile energy efficiency high, because a full system consumes energy on other tasks as well, such as moving activation values and supporting digital circuitry.

There are significant challenges to overcome for this analog-AI approach to really take off. First, deep neural networks, by definition, have multiple layers. To cascade multiple layers, we must process the VMM tile's output through an artificial neuron's activation—a nonlinear function—and convey it to the next tile. The nonlinearity could potentially be performed with analog circuits and the results communicated in the duration form needed for the next layer, but most networks require other operations beyond a simple cascade of VMMs. That means we need efficient analog-to-digital conversion (ADC) and modest amounts of parallel digital compute between the tiles. Novel, high-efficiency ADCs can help keep these circuits from affecting the overall efficiency too much. Recently, we unveiled a high-performance PCM-based tile using a new kind of ADC that helped the tile achieve better than 10 trillion operations per second per watt.

A second challenge, which has to do with the behavior of NVM devices, is more troublesome. Digital DNNs have proven accurate even when their weights are described with fairly low-precision numbers. The 32-bit floating-point numbers that CPUs often calculate with are overkill for DNNs, which usually work just fine and with less energy when using 8-bit floating-point values or even 4-bit integers. This provides hope for analog computation, so long as we can maintain a similar precision.
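To illustrate the kind of reduced precision involved, here is a generic uniform quantizer down to 4-bit integers. It is a textbook sketch, not IBM's specific scheme:

```python
import numpy as np

def quantize_symmetric(weights, bits=4):
    """Uniform symmetric quantization of weights to low-precision integers.

    Maps floating-point weights onto the integer range [-qmax, qmax]
    (e.g. -7..7 for 4 bits) with a single shared scale factor.
    """
    qmax = 2 ** (bits - 1) - 1                        # 7 for 4-bit weights
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(int), scale

w = np.array([0.9, -0.42, 0.11, -0.7])
q, s = quantize_symmetric(w, bits=4)
dequantized = q * s     # approximates the original weights to within s/2
```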

Given the importance of conductance precision, writing conductance values to NVM devices to represent weights in an analog neural network needs to be done slowly and carefully. Compared with traditional memories, such as SRAM and DRAM, PCM and RRAM are already slower to program and wear out after fewer programming cycles. Fortunately, for inference, weights don't need to be frequently reprogrammed. So analog AI can use time-consuming write-verification techniques to boost the precision of programming RRAM and PCM devices without any concern about wearing the devices out.

That boost is much needed because nonvolatile memories have an inherent level of programming noise. RRAM's conductivity depends on the movement of just a few atoms to form filaments. PCM's conductivity depends on the random formation of grains in the polycrystalline material. In both, this randomness poses challenges for writing, verifying, and reading values. Further, in most NVMs, conductances change with temperature and with time, as the amorphous phase structure in a PCM device drifts, or the filament in an RRAM relaxes, or the trapped charge in a flash memory cell leaks away.

There are some ways to finesse this problem. Significant improvements in weight programming can be obtained by using two conductance pairs. Here, one pair holds most of the signal, while the other pair is used to correct for programming errors on the main pair. Noise is reduced because it gets averaged out across more devices.
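A toy noise model shows why the second pair helps. The gain factor and noise level here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def program_with_noise(target, sigma=0.05):
    """Imperfect programming: the stored value lands near, not on, the target."""
    return target + rng.normal(0.0, sigma)

def program_weight_two_pairs(w, gain=4.0):
    """Two-conductance-pair weight programming (simplified sketch).

    The main pair carries most of the signal, scaled up by `gain`; the
    second pair is then programmed to cancel the main pair's residual error,
    so the final error is set by the unscaled correction-pair noise rather
    than by `gain` times the main-pair noise.
    """
    main = program_with_noise(w / gain)          # coarse: main pair
    residual = w - gain * main                   # measured programming error
    correction = program_with_noise(residual)    # fine: correction pair
    return gain * main + correction

w_stored = program_weight_two_pairs(0.8)
```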

We tested this approach recently in a multitile PCM-based chip, using both one and two conductance pairs per weight. With it, we demonstrated excellent accuracy on several DNNs, even on a recurrent neural network, a type that's typically sensitive to weight programming errors.

Vector-Matrix Multiplication with Analog AI


Vector-matrix multiplication (VMM) is the core of a neural network's computing [top]; it is a collection of multiply-and-accumulate processes. Here the activations of artificial neurons [yellow] are multiplied by the weights of their connections [light blue] to the next layer of neurons [green].


For analog AI, VMM is performed on a crossbar array tile [center]. At each cross point, a nonvolatile memory cell encodes the weight as conductance. The neurons' activations are encoded as the duration of a voltage pulse. Ohm's Law dictates that the current along each crossbar column is equal to this voltage times the conductance. Capacitors [not shown] at the bottom of the tile sum up these currents. A neural network's multiple layers are represented by converting the output of one tile into the voltage duration pulses needed as the input to the next tile [right].

Different techniques can help ameliorate noise in reading and drift effects. But because drift is predictable, perhaps the simplest is to amplify the signal during a read with a time-dependent gain that can offset much of the error. Another approach is to use the same techniques that have been developed to train DNNs for low-precision digital inference. These adjust the neural-network model to match the noise limitations of the underlying hardware.
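The drift correction can be sketched with the standard power-law drift model; the drift exponent used here is a typical illustrative value, not a figure from our hardware:

```python
def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Power-law conductance drift model: G(t) = G0 * (t/t0)**(-nu).

    `nu` is the drift exponent; 0.05 is an illustrative magnitude.
    """
    return g0 * (t / t0) ** (-nu)

def compensated_read(g_measured, t, t0=1.0, nu=0.05):
    """Amplify the read signal with a time-dependent gain to undo drift."""
    return g_measured * (t / t0) ** nu

g0 = 50e-6                                   # programmed conductance (S)
g_now = drifted_conductance(g0, t=1e6)       # conductance ~12 days later
g_est = compensated_read(g_now, t=1e6)       # recovers g0 exactly in this model
```

In practice the compensation is only as good as the estimate of the drift exponent, which is why it is paired with the noise-aware training techniques mentioned above.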

As we mentioned, networks are becoming larger. In a digital system, if the network doesn't fit on your accelerator, you bring in the weights for each layer of the DNN from external memory chips. But NVM's writing limitations make that a poor decision. Instead, multiple analog AI chips should be ganged together, with each passing the intermediate results of a partial network from one chip to the next. This scheme incurs some additional communication latency and energy, but it's far less of a penalty than moving the weights themselves.

Until now, we've only been talking about inference—where an already-trained neural network acts on novel data. But there are also opportunities for analog AI to help train DNNs.

DNNs are trained using the backpropagation algorithm. This combines the usual forward inference operation with two other important steps—error backpropagation and weight update. Error backpropagation is like running inference in reverse, moving from the last layer of the network back to the first layer; weight update then combines information from the original forward inference run with these backpropagated errors to adjust the network weights in a way that makes the model more accurate.

The Tiki-Taka Solution

Analog AI can reduce the power consumption of training neural networks, but because of some inherent characteristics of the nonvolatile memories involved, there are some complications. Nonvolatile memories, such as phase-change memory and resistive RAM, are inherently noisy. What's more, their behavior is asymmetric. That is, at most points on their conductance curve, the same value of voltage will produce a different change in conductance depending on the voltage's polarity.

One solution we came up with, the Tiki-Taka algorithm, is a modification to backpropagation training. Crucially, it is significantly more robust to noise and asymmetric behavior in the NVM conductance. This algorithm depends on RRAM devices constructed to conduct in both directions. Each of these is initialized to their symmetry point—the spot on their conductance curve where the conductance increase and decrease for a given voltage are exactly balanced. In Tiki-Taka, the symmetry-point-balanced NVM devices are involved in weight updates to train the network. Periodically, their conductance values are programmed onto a second set of devices, and the training devices are returned to their natural symmetry point. This allows the neural network to train to high accuracy, even in the presence of noise and asymmetry that would completely disrupt the conventional backpropagation algorithm.
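The flavor of the algorithm can be captured in a toy, one-weight simulation. This is our simplified sketch of the idea, with an invented soft-bounds device model, not IBM's published implementation:

```python
def device_step(g, sign, eta=0.05):
    """Soft-bounds device model: g lives in (-1, 1) and the update size
    shrinks toward the bounds. At g = 0 (the symmetry point) the up- and
    down-steps are equal; away from it, they are asymmetric."""
    return g + eta * (1 - g) if sign > 0 else g - eta * (1 + g)

def tiki_taka_scalar(w_target, steps=200, transfer_every=10):
    """Toy scalar version of the Tiki-Taka idea.

    A fast device A absorbs the gradient-sign updates; its value is
    periodically transferred into a second device C, after which A is
    reset to its symmetry point. The effective weight is A + C.
    """
    A, C = 0.0, 0.0
    for step in range(1, steps + 1):
        error = (A + C) - w_target          # gradient of 0.5*(w - target)^2
        A = device_step(A, sign=+1 if error < 0 else -1)
        if step % transfer_every == 0:
            C += A                          # transfer onto the second device
            A = 0.0                         # return A to its symmetry point
    return A + C

w = tiki_taka_scalar(0.6)
# w converges close to 0.6 despite the asymmetric device updates
```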

The backpropagation step can be done in place on the tiles but in the opposite manner of inferencing—applying voltages to the columns and integrating current along rows. Weight update is then performed by driving the rows with the original activation data from the forward inference, while driving the columns with the error signals produced during backpropagation.

Training involves numerous small weight increases and decreases that must cancel out properly. That's difficult for two reasons. First, recall that NVM devices wear out with too much programming. Second, the same voltage pulse applied with opposite polarity to an NVM may not change the cell's conductance by the same amount; its response is asymmetric. But symmetric behavior is critical for backpropagation to produce accurate networks. This is only made more challenging because the magnitude of the conductance changes needed for training approaches the level of inherent randomness of the materials in the NVMs.

There are several approaches that can help here. For example, there are various ways to aggregate weight updates across multiple training examples, and then transfer these updates onto NVM devices periodically during training. A novel algorithm we developed at IBM, called Tiki-Taka, uses such techniques to train DNNs successfully even with highly asymmetric RRAM devices. Finally, we are developing a device called electrochemical random-access memory (ECRAM) that can offer not just symmetric but highly linear and gradual conductance updates.

The success of analog AI will depend on achieving high density, high throughput, low latency, and high energy efficiency—simultaneously. Density depends on how tightly the NVMs can be integrated into the wiring above a chip's transistors. Energy efficiency at the level of the tiles will be limited by the circuitry used for analog-to-digital conversion.

But even as these factors improve and as more and more tiles are linked together, Amdahl's Law—an argument about the limits of parallel computing—will pose new challenges to optimizing system energy efficiency. Previously unimportant aspects such as data communication and the residual digital computing needed between tiles will incur more and more of the energy budget, leading to a gap between the peak energy efficiency of the tile itself and the sustained energy efficiency of the overall analog-AI system. Of course, that's a problem that eventually arises for every AI accelerator, analog or digital.

The path forward is necessarily different from digital AI accelerators. Digital approaches can bring precision down until accuracy falters. But analog AI must first increase the signal-to-noise ratio (SNR) of the internal analog modules until it is high enough to demonstrate accuracy equivalent to that of digital systems. Any subsequent SNR improvements can then be applied toward increasing density and energy efficiency.

These are exciting problems to solve, and it will take the coordinated efforts of materials scientists, device experts, circuit designers, system architects, and DNN experts working together to solve them. There is a strong and continued need for higher energy-efficiency AI acceleration, and a shortage of other attractive alternatives for delivering on this need. Given the wide variety of potential memory devices and implementation paths, it is quite likely that some degree of analog computation will find its way into future AI accelerators.

This article appears in the December 2021 print issue as "Ohm's Law + Kirchhoff's Current Law = Better AI."

The Conversation (2)
Regina Angarita, 30 Nov 2021

density + throughput + latency

Ian Huang, 23 Nov 2021

PCM is the most energy-intensive form of memory I know of, due to the high energy requirement for the joule heating needed to change the chalcogenide phase. Does analog AI still theoretically beat digital AI in terms of power consumption when using PCM? I would expect this to only happen if writes to memory are very few and far between.

Convincing Consumers To Buy EVs

How range, affordability, reliability, and behavioral changes figure into purchase decisions

15 min read

Four EVs, from economy to luxury, currently for sale in the U.S. From top left, clockwise: the Mercedes EQE SUV, Hyundai IONIQ 5, Chevrolet Equinox EV 3LT, and Lucid Air.

Credits: Mercedes-Benz Group AG; Hyundai Motor America; Chevrolet; Lucid.

With the combination of a requirement that all new light-duty vehicles sold in New York State be zero-emission by 2035, investments in electric-vehicle charging stations, and state and federal EV rebates, “you’re going to see that you have no more excuses” for not buying an EV, according to New York Governor Kathy Hochul.

The EV Transition Explained

This is the tenth in a series of articles exploring the major technological and social challenges that must be addressed as we move from vehicles with internal-combustion engines to electric vehicles at scale. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

Perhaps, but getting the vast majority of the 111 million US households that own one or more light-duty internal-combustion vehicles to switch to EVs is going to take time. Even if interest in purchasing an EV is increasing, close to 70 percent of Americans are still leaning towards buying an ICE vehicle as their next purchase. In the UK, only 14 percent of drivers plan to purchase an EV as their next car.

Even when buyers express interest in purchasing a battery electric or hybrid vehicle, that interest often does not turn into an actual purchase. A 2022 CarGurus survey found that 35 percent of new-car buyers expressed an interest in purchasing a hybrid, but only 13 percent eventually did. Similarly, 22 percent expressed interest in a battery electric vehicle (BEV), but only 5 percent bought one.

Each potential EV buyer assesses their individual needs against the benefits and risks an EV offers. However, until mainstream public confidence reaches the point where the perceived combination of risks of a battery electric vehicle purchase (range, affordability, reliability and behavioral changes) match that of an ICE vehicle, then EV purchases are going to be the exception rather than the norm.

How much range is enough?

Studies differ about how far drivers want to be able to go between charges. One Bloomberg study found 341 miles was the average range desired, while Deloitte Consulting’s 2022 Global Automotive Consumer Study found U.S. consumers want to be able to travel 518 miles on a fully charged battery in a BEV that costs $50,000 or less.

Arguments over how much range is needed are contentious. There are some who argue that because 95 percent of American car trips are 30 miles or less, a battery range of 250 miles or less is all that is needed. They also point out that this would reduce the price of the EV, since batteries account for about 30 percent of an EV’s total cost. In addition, using smaller batteries would allow more EVs to be built, and potentially relieve pressure on the battery supply chain. If longer trips are needed, well, “bring some patience and enjoy the charging experience” seems to be the general advice.

While perhaps logical, these arguments are not going to influence typical buying decisions much. The first question potential EV buyers are going to ask themselves is, “Am I going to be paying more for a compromised version of mobility?” says Alexander Edwards, President of Strategic Vision, a research-based consultancy that aims to understand human behavior and decision-making.


Driver’s side view of the 2024 Chevrolet Equinox EV 3LT in Riptide Blue. Chevrolet

Edwards explains potential customers do not have range anxiety per se: If they believe they require a vehicle that must go 400 miles before stopping, “even if once a month, once a quarter, or once a year,” all vehicles that cannot meet that criterion will be excluded from their buying decision. Range anxiety, therefore, is more a concern for EV owners. Edwards points out that regarding range, most BEV owners own at least one ICE vehicle to meet their long-distance driving needs.

What exactly the “range” of a BEV is has itself become a heated point of contention. While ICE vehicles’ driving ranges are also affected by weather and driving conditions, the effects are well understood after decades of experience. This experience is lacking among non-EV owners. Extreme heat and cold negatively affect EV battery range and charging time, as do driving speeds and terrain.

Peter Rawlinson, CEO and CTO of Lucid. Lucid

Some automakers are reticent to say how much range is affected under differing conditions. Others, like Ford CEO Jim Farley, freely admit: “If you’re pulling 10,000 pounds, an electric truck is not the right solution. And 95 percent of our customers tow more than 10,000 pounds.” GM, though, is promising it will meet heavier towing requirements with its 2024 Chevrolet Silverado EV. However, Lucid Group CEO Peter Rawlinson, in a none-too-subtle dig at both Ford and GM, said, “The correct solution for an affordable pickup truck today is the internal combustion engine.”

Ford’s Farley foresees that the heavy-duty truck segment will be sticking with ICE trucks for a while, as “it will probably go hydrogen fuel cell before it goes pure electric.” Many in the auto industry are warning that realistic BEV range numbers under varying conditions need to be widely published, or risk creating a backlash against EVs in general.

Range concerns are obviously tightly coupled to EV charging availability. Most charging is assumed to take place at home, but this is not an option for many apartment dwellers and other renters. Even for those with homes, garages may not be available for EV charging. Scarce and unreliable public EV charging, as well as publicized EV road-trip horror stories, add to potential EV owners’ perceived and real range risk.

EVs ain’t cheap

Price is another EV purchase risk, comparable to range. Buying a new car is the second most expensive purchase a consumer makes, behind buying a house. Spending nearly 100 percent of the annual US median household income on an unfamiliar technology is not a minor financial ask.

That is one reason why legacy automakers and EV start-ups are attempting to follow Tesla’s success in the luxury vehicle segment, spending much of their effort producing vehicles that are “above the median average annual US household income, let alone buyer in new car market,” Strategic Vision’s Edwards says. On top of the twenty or so luxury EVs already or soon to be on the market, Sony and Honda recently announced that they would be introducing yet another luxury EV in 2026.

It is true that some EVs will soon appear in the competitive price range of ICE vehicles, like GM’s low-end Equinox EV SUV, presently priced around $30,000 with a 280-mile range. How long GM will be able to keep that price in the face of battery cost increases and inflationary pressure is anyone’s guess. It has already started to increase the cost of its Chevrolet Bolt EVs, whose prices it had slashed last year, “due to ongoing industry-related pricing pressures.”

The Lucid Air’s price ranges from $90,000 to $200,000 depending on options. Lucid

Analysts believe Tesla intends to spark an EV price war before its competitors are ready for one. This could benefit consumers in the short-term, but could also have long-term downside consequences for the EV industry as a whole. Tesla fired its first shot over its competitors’ bows with a recently announced price cut from $65,990 to $52,990 for its basic Model Y, with a range of 330 miles. That makes the Model Y cost-competitive with Hyundai’s $45,500 IONIQ 5 e-SUV with 304 miles of range.

Tesla’s pricing power could be hard to counter, at least in the short term. Ford’s cheapest F-150 Lightning Pro is now $57,869 compared to $41,769 a year ago due to what Ford says are “ongoing supply chain constraints, rising material costs and other market factors.” The entry level F-150 XL with an internal combustion engine has risen in the past year from about $29,990 to $33,695 currently.

Carlos Tavares, CEO of Stellantis. Stellantis

Automakers like Stellantis freely acknowledge that EVs are too expensive for most buyers, with Stellantis CEO Carlos Tavares even warning that if average consumers can’t afford EVs as ICE vehicle sales are banned, “There is potential for social unrest.” However, other automakers like BMW are quite unabashed about going after the luxury market, which it terms “white hot.” BMW CEO Oliver Zipse does say the company will not leave the “lower market segment,” which includes the battery electric iX1 xDrive30 that retails for A$82,900 in Australia and slightly lower elsewhere. It is not available in the United States.

Mercedes-Benz CEO Ola Kallenius also believes luxury EVs will be a catalyst for greater EV adoption—eventually. But right now, 75 percent of the company’s investment has been redirected toward bringing luxury vehicles to market.

The fact that luxury EVs are more profitable no doubt helps keep automakers focused on that market. Ford’s very popular Mustang Mach-E is having trouble maintaining profitability, for instance, which has forced Ford to raise its base price from $43,895 to $46,895. Even in the Chinese market where smaller EV sales are booming, profits are not. Strains on profitability for automakers and their suppliers may increase further as battery metals prices increase, warns data analysis company S&P Global Mobility.

Jim Rowan, Volvo Cars’ CEO and president. Volvo Cars

As a result, EVs are unlikely to match ICE vehicle prices (or profits) anytime soon, even for smaller EV models, says Renault Group CEO Luca de Meo, because of the ever-increasing cost of batteries. Mercedes chief technology officer Marcus Schäfer agrees and does not see EV/ICE price parity “with the [battery] chemistry we have today.” Volvo CEO Jim Rowan disagrees with both of them, however, seeing ICE-EV price parity coming by 2025-2026.

Interestingly, a 2019 Massachusetts Institute of Technology (MIT) study predicted that as EVs became more widespread, battery prices would climb because the demand for lithium and other battery metals would rise sharply. As a result, the study indicated EV/ICE price parity was likely closer to 2030 with the expectation that new battery chemistries would be introduced by then.

Many argue, however, that total cost of ownership (TCO) should be used as the EV purchase criterion rather than sticker price. The total cost of ownership of an EV is generally lower than that of an ICE vehicle over its expected life: EVs have lower maintenance costs, electricity is less expensive per mile than gasoline, and tax incentives and rebates help as well.

However, how long it takes to hit the break-even point depends on many factors: the cost differential of a comparable ICE vehicle, depreciation, taxes, insurance costs, the cost of electricity and petrol in a region, whether charging takes place at home, and so on. And TCO rapidly loses its selling-point appeal if electricity prices go up, as is happening in the UK and in Germany.

Even if the total cost of ownership is lower for an EV, a potential EV customer may not be interested if meeting today’s monthly auto payments is difficult. Extra costs like installing a fast charger at home, which can add several thousand dollars, or higher insurance premiums, which could add an extra $500 to $600 a year, may also be seen as buying impediments and can change the TCO equation.
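The break-even arithmetic behind TCO comparisons is simple enough to sketch in a few lines of Python. Every figure below is an illustrative assumption chosen for the example, not data from this article:

```python
# Hypothetical TCO break-even sketch. All numbers are illustrative
# assumptions, not figures reported in the article.

def breakeven_years(ev_price, ice_price, ev_cost_per_mile, ice_cost_per_mile,
                    annual_miles, ev_incentive=0.0, extra_ev_annual=0.0):
    """Years until an EV's lower running costs offset its higher upfront price.

    extra_ev_annual captures recurring EV-only costs (e.g., higher insurance).
    Returns infinity if annual running-cost savings never cover the gap.
    """
    upfront_gap = (ev_price - ev_incentive) - ice_price
    annual_savings = (ice_cost_per_mile - ev_cost_per_mile) * annual_miles \
        - extra_ev_annual
    if annual_savings <= 0:
        return float("inf")  # EV never breaks even under these assumptions
    return upfront_gap / annual_savings

# Assumed example: a $52,990 EV vs. a $33,695 ICE truck, a $7,500 tax
# credit, $0.05/mile electricity vs. $0.15/mile gasoline, 12,000 miles
# driven per year, and $550/year in extra insurance.
years = breakeven_years(52_990, 33_695, 0.05, 0.15, 12_000,
                        ev_incentive=7_500, extra_ev_annual=550)
print(f"Break-even in about {years:.1f} years")
```

Under these particular assumptions the break-even point lands well beyond a typical ownership period, which illustrates the article’s point: the TCO argument is highly sensitive to the upfront price gap, energy prices, and recurring extras.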

Reliability and other major tech risks

To perhaps distract wary EV buyers from range and affordability issues, the automakers have focused their efforts on highlighting EV performance. Raymond Roth, a director at financial advisory firm Stout Risius Ross, observes among automakers, “There’s this arms race right now of best in class performance” being the dominant selling point.

This “wow” experience is being pursued by every EV automaker. Mercedes CEO Kallenius, for example, says that to convince its current luxury-vehicle owners to switch to an EV, “the experience for the customer in terms of the torque, the performance, everything [must be] fantastic.” Nissan, which seeks a more mass-market buyer, runs commercials exclaiming, “Don’t get an EV for the ‘E’, but because it will pin you in your seat, sparks your imagination and takes your breath away.”


EV reliability issues may also take one’s breath away. Reliability is “extremely important” to new-car buyers, according to a 2022 report from Consumer Reports (CR). Currently, EV reliability is nothing to brag about. CR’s report says that “On average, EVs have significantly higher problem rates than internal combustion engine (ICE) vehicles across model years 2019 and 2020.” BEVs dwell at the bottom of the rankings.

Reliability may prove to be an Achilles heel for automakers like GM and Ford. GM CEO Mary Barra has very publicly promised that GM would no longer build “crappy cars.” The ongoing problems with the Chevy Bolt undercut that promise, and if GM’s new Equinox EV has issues, they could hurt sales. Ford has reliability problems of its own, paying $4 billion in warranty costs last year alone; its Mustang Mach-E has been subject to several recalls over the past year. Even perceived quality leader Toyota has been embarrassed by wheels falling off weeks after the introduction of its electric bZ4X SUV, the first in a new series of “bZ” (beyond zero) electric vehicles.

A Tesla caught up in a mudslide in Silverado Canyon, Calif., on March 10, 2021. Jae C. Hong/AP Photo

Troubles with vehicle electronics, which have plagued ICE vehicles as well for some time, seem even worse in EVs, according to Consumer Reports’ data. This should not be surprising, since EVs are packed with the latest electronic and software features to make them attractive, such as new biometric capabilities, but these features often do not work. EV start-up Lucid is struggling with a range of software woes, and software problems have pushed back EV launches by years at Audi, Porsche, and Bentley, all part of the Volkswagen Group.

Another reliability-related risk is getting an EV repaired when something goes awry or there is an accident. Right now, there is a dearth of EV-certified mechanics and repair shops. The UK’s Institute of the Motor Industry (IMI) estimates the country will need 90,000 EV-trained technicians by 2030, yet less than 7 percent of its automotive service workforce of 200,000 vehicle technicians is EV qualified. In the US, the situation is no better. The National Institute for Automotive Service Excellence (ASE), which certifies auto repair technicians, says the US has 229,000 ASE-certified technicians, but only some 3,100 of them are certified for electric vehicles. With many automakers moving to reduce their dealership networks, resolving problems that over-the-air (OTA) software updates cannot fix might be troublesome.

Furthermore, the costs and time needed to repair an EV are higher than for ICE vehicles, according to the data analytics company CCC. Reasons include a greater need to use original equipment manufacturer (OEM) parts and the cost of scans and recalibration of advanced driver assistance systems, which has been rising for ICE vehicles as well. In addition, technicians need to verify battery integrity to prevent potential fires.

And some batteries, along with their battery management systems, need work. Two examples: recalls involving the GM Bolt and the Hyundai Kona, likely to cost GM $1.8 billion and Hyundai $800 million to fix, according to Stout’s 2021 Automotive Defect and Recall Report. Furthermore, the battery defect data compiled by Stout indicate that “incident rates are rising as production is increasing and incidents commonly occur across global platforms,” with both design and manufacturing defects starting to appear.


CCC data indicate that battery packs damaged in a crash generally need replacement, and more than 50 percent of such vehicles were deemed a total loss by insurance companies. EVs also return to the repair center after a repair more often than ICE vehicles do, hinting at the increased difficulty of repairing them. Additionally, EV tire tread wear needs closer inspection than on ICE vehicles. Lastly, as auto repair centers invest in new equipment to handle EVs, those costs will be passed along to customers for some time.

Electric vehicle and charging network cybersecurity is also growing as a perceived risk. A 2021 survey by insurance company HSB found that an increasing number of drivers, not only of EVs but ICE vehicles, are concerned about their vehicle’s security. Some 10 percent reported “a hacking incident or other cyber-attack had affected their vehicle,” HSB reported. Reports of charging stations being compromised are increasingly common.

The risk has reached the attention of the US Office of the National Cyber Director, which recently held a forum of government officials, automakers, suppliers, and EV-charging manufacturers focusing on “cybersecurity issues in the electric vehicle (EV) and electric vehicle supply equipment (EVSE) ecosystem.” The concern is that EV uptake could falter if EV charging networks are not perceived as being secure.

A sleeper risk that may explode into a massive problem is an EV owner’s right to repair their vehicle. In 2020, Massachusetts passed a law that allows vehicle owners to take their vehicles to whatever repair shop they wish and gives independent repair shops the right to access real-time vehicle data for diagnostic purposes. Auto dealers have sued to overturn the law, and some automakers, like Subaru and Kia, have disabled the advanced telematic systems in cars sold in Massachusetts, often without telling new customers about it. GM and Stellantis have also said they cannot comply with the Massachusetts law and are not planning to do so, arguing that compliance would compromise their vehicles’ safety and cybersecurity. The Federal Trade Commission is looking into the right-to-repair issue, and President Biden has come out in support of it.

You expect me to do what, exactly?

Failure to change consumer behavior poses another major risk to the EV transition. Take charging. It requires a new consumer behavior in terms of understanding how and when to charge, and what to do to keep an EV battery healthy. The information on the care and feeding of a battery as well as how to maximize vehicle range can resemble a manual for owning a new, exotic pet. It does not help when an automaker like Ford tells its F-150 Lightning owners they can extend their driving range by relying on the heated seats to stay warm instead of the vehicle’s climate control system.

Keeping in mind such issues, and how one might work around them, increases a driver’s cognitive load—things that must be remembered in case they must be acted on. “Automakers spent decades reducing cognitive load with dash lights instead of gauges, or automatic instead of manual transmissions,” says University of Michigan professor emeritus John Leslie King, who has long studied human interactions with machines.

King notes, “In the early days of automobiles, drivers and chauffeurs had to monitor and be able to fix their vehicles. They were like engineers. For a time in New York City, one had to be a licensed engineer to drive a steam-powered auto. In some aspects, EV drivers return to these roots. This might change over time, but for now it is a serious issue.”


The first-ever BMW iX1 xDrive30, Mineral White metallic, 20″ BMW Individual Styling 869i. BMW AG

This cognitive load keeps changing as well. For instance, “common knowledge” about when EV owners should charge is not set in concrete. The long-standing mantra has been to charge at home at night, when electricity rates and stress on the electric grid are low. Recent research from Stanford University says this is wrong, at least for Western states.

Stanford’s research shows that electricity rates should encourage EV charging during the day at work or at public chargers to prevent evening grid peak demand problems, which could increase by as much as 25 percent in a decade. The Wall Street Journal quotes the study’s lead author Siobhan Powell as saying if everyone were charging their EVs at night all at once, “it would cause really big problems.”

Asking EV owners to refrain from charging their vehicles at home during the night is going to be difficult, since EVs are being sold on the convenience of charging at home. Transportation Secretary Pete Buttigieg emphasized this very point when describing how great EVs are to own: “And the main charging infrastructure that we count on is just a plug in the wall.”

EV owners increasingly find public charging unsatisfying. It is “one of the compromises battery electric vehicle owners have to make,” says Strategic Vision’s Alexander Edwards, and one “that drives 25 percent of battery electric vehicle owners back to a gas powered vehicle.” Fixing the multiple problems underlying EV charging will not likely happen anytime soon.

Another behavior change risk relates to automakers’ desired EV owner post-purchase buying behavior. Automakers see EV (and ICE vehicle) advanced software and connectivity as a gateway to a software-as-a-service model to generate new, recurring revenue streams across the life of the vehicle. Automakers seem to view EVs as razors through which they can sell software as the razor blades. Monetizing vehicle data and subscriptions could generate $1.5 trillion by 2030, according to McKinsey.

VW thinks that it will generate “triple-digit-millions” in future sales through selling customized subscription services, like offering autonomous driving on a pay-per-use basis. It envisions customers would be willing to pay 7 euros per hour for the capability. Ford believes it will earn $20 billion, Stellantis some $22.5 billion and GM $20 to $25 billion from paid software-enabled vehicle features by 2030.

Already for ICE vehicles, BMW is reportedly offering an $18-a-month subscription (or $415 for “unlimited” access) for heated front seats in multiple countries, though not yet in the U.S. GM has started charging $1,500 for a three-year “optional” OnStar subscription on all Buick and GMC vehicles as well as the Cadillac Escalade SUV, whether the owner uses it or not. And Sony and Honda have announced that their luxury EV will be subscription-based, although they have not defined exactly what this means in terms of standard versus paid-for features. It would not be surprising to see them follow Mercedes’ lead: that automaker will increase the acceleration of its EQ series if an owner pays a $1,200-a-year subscription fee.

Essentially, automakers are trying to normalize paying for what used to be offered as standard or even an upgrade option. Whether they will be successful is debatable, especially in the U.S. “No one is going to pay for subscriptions,” says Strategic Vision’s Edwards, who points out that microtransactions are absolutely hated in the gaming community. Automakers risk a major consumer backlash by using them.

To get to EVs at scale, each of the EV-related risks (range, affordability, reliability, and behavioral change) will need to be addressed by automakers and policymakers alike. With dozens of new battery electric vehicles becoming available for sale in the next two years, potential EV buyers now have a much greater range of options than previously. The automakers who manage EV risks best, along with offering compelling overall platform performance, will be the ones starting to claw back some of their hefty EV investments.

No single risk may be a deal breaker for an early EV adopter, but for skeptical ICE vehicle owners, each risk is another reason not to buy, regardless of perceived benefits offered. If EV-only families are going to be the norm, the benefits of purchasing EVs will need to be above—and the risks associated with owning will need to match or be below—those of today’s and future ICE vehicles.

In the next articles of this series, we’ll explore the changes that may be necessary to personal lifestyles to achieve 2050 climate goals.
