Compute Per Megawatt (CPM).
Rather than viewing behind-the-meter storage as a revenue-generating asset for data centers, CPM reframes it as one that can enable additional compute and unlock capacity despite constrained grid access. The battery effectively acts as a lever to improve IT performance.
“Storage doesn’t make individual GPUs more efficient, but it removes the constraints that prevent the IT side from using its hardware to full potential,” explained Alejandro de Diego, a market analyst at Modo Energy. He told ESS News that normally, IT-side techniques like power capping and workload scheduling define the amount of compute a company can pull from a given power budget. “Storage expands what that budget is.”
Part of that shift stems from the unique load profile of data centers, where the draw from thousands of processors can spike and fall dramatically in a matter of seconds. In de Diego’s eyes, that’s why energy management is where storage’s key value-add lies for data centers. Without it, a project’s grid connection must match the worst possible peak in order to ensure continuous supply. That gets expensive, and utilities are often hesitant to sign off on high loads.
“With a battery absorbing those spikes, operators can effectively run more compute capacity than their grid contract allows,” he added, though this does depend on spike duration and battery sizing. In practice, that makes installing storage as much of a time-to-market play as a cost-saving one as projects languish in interconnection queues and more hyperscalers turn toward batteries as a way to jump the line.
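De Diego’s point can be illustrated with a back-of-envelope sketch. All numbers below are hypothetical, not from the article: the idea is simply that a battery covering short spikes lets the contracted grid connection sit below the worst-case peak.

```python
# Illustrative sketch (hypothetical numbers): a battery discharges during
# short load spikes so the grid connection need not match the peak.

# Minute-by-minute IT load in MW: a steady base with brief spikes as
# GPU clusters ramp up together.
load_mw = [80, 80, 82, 120, 118, 85, 80, 115, 119, 83, 80, 80]

grid_limit_mw = 90  # contracted grid connection, well below the 120 MW peak

# Energy the battery must supply to keep grid draw at the limit.
# (Recharging during troughs is ignored in this simple sketch.)
battery_mwh = 0.0
for load in load_mw:
    excess = load - grid_limit_mw
    if excess > 0:
        battery_mwh += excess / 60  # MW over one minute -> MWh

print(f"Peak load: {max(load_mw)} MW")
print(f"Grid connection without storage: {max(load_mw)} MW")
print(f"Grid connection with storage: {grid_limit_mw} MW")
print(f"Battery energy used to shave spikes: {battery_mwh:.2f} MWh")
```

In this toy profile, a battery supplying under 2 MWh lets the site contract for 90 MW instead of 120 MW; the actual trade-off depends on spike duration and frequency, as de Diego notes.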
“Utilities are facing a ‘grid power wall’ and are increasingly hungry for flexible resources that can mitigate the massive load growth from AI,” said Wannie Park, the founder and CEO of software-as-a-service startup Pado AI, which manages distributed energy resources for data centers and is backed by LG NOVA. He explained to ESS News that while interconnection queues remain a challenge, the company is seeing positive returns when they can demonstrate that a data center is “grid-aware.”
“By acting as a balancing mechanism, we actually help utilities manage their constraints rather than exacerbating them,” he added.
Still, putting CPM into practice will also likely require power management on the IT side of things. Research on power capping from Lawrence Berkeley National Laboratory and elsewhere shows that facilities can often run more processors on the same power budget by limiting the power drawn by each processor. This lets the center process more AI workloads overall.
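The power-capping arithmetic works because per-processor performance typically falls more slowly than per-processor power. A hypothetical back-of-envelope (these figures are illustrative assumptions, not from the research cited above):

```python
# Hypothetical power-capping arithmetic: capping each GPU's draw fits
# more GPUs into a fixed power budget, and aggregate throughput rises
# even though each capped GPU is a little slower.

power_budget_w = 1_000_000   # 1 MW of IT power (assumed)
gpu_uncapped_w = 700         # nameplate draw per GPU (assumed)
gpu_capped_w = 500           # per-GPU power cap (assumed)
capped_perf = 0.90           # assume the cap costs ~10% per-GPU throughput

gpus_uncapped = power_budget_w // gpu_uncapped_w   # GPUs that fit uncapped
gpus_capped = power_budget_w // gpu_capped_w       # GPUs that fit capped

# Relative aggregate throughput (one uncapped GPU = 1.0 unit of work).
throughput_uncapped = gpus_uncapped * 1.0
throughput_capped = gpus_capped * capped_perf

print(f"GPUs without capping: {gpus_uncapped}")
print(f"GPUs with capping:    {gpus_capped}")
print(f"Throughput ratio: {throughput_capped / throughput_uncapped:.2f}x")
```

Under these assumed numbers, capping fits roughly 40% more GPUs and yields about a quarter more total throughput from the same megawatt; real gains depend on the workload’s sensitivity to the cap.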
“The novelty here isn’t the storage, but that compute workloads can be deliberately reshaped to need less of it, while still increasing total output,” de Diego said. This is where co-optimization comes into play.
Once again, storage acts as a “buffer” while IT-side power management helps smooth the load. Data centers can then use smaller batteries and apply more aggressive power caps than they otherwise could. That combination opens up a new sort of value that isn’t captured by arbitrage or capacity-payment models.
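The co-optimization can be sketched in the same back-of-envelope style (again with hypothetical numbers): because IT-side caps lower spike height, the battery needed to hold a given grid limit shrinks.

```python
# Sketch of co-optimization (hypothetical numbers): power caps flatten
# load spikes, so a smaller battery holds the same grid limit.

def battery_energy_mwh(load_mw, limit_mw):
    """Energy a battery must supply to keep grid draw at limit_mw,
    for a minute-resolution load profile (recharging ignored)."""
    return sum(max(load - limit_mw, 0) for load in load_mw) / 60

grid_limit_mw = 90

# Uncapped profile spikes to ~120 MW; with per-processor caps, the
# same workload is assumed to spike only to ~100 MW.
uncapped = [80, 120, 118, 85, 115, 119, 80]
capped = [80, 100, 99, 85, 98, 100, 80]

print(f"Battery needed, uncapped: {battery_energy_mwh(uncapped, grid_limit_mw):.2f} MWh")
print(f"Battery needed, capped:   {battery_energy_mwh(capped, grid_limit_mw):.2f} MWh")
```

In this sketch the cap cuts the required battery energy by roughly two-thirds, which is the sense in which capping lets operators “use smaller batteries.”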
“That’s the additional compute revenue that storage makes possible,” he noted, saying that this is “far more revenue per MWh” for the operator. “The battery’s value is no longer determined by what the electricity is worth on the grid, but by what the computation it enables is worth. That distinction changes the investment case entirely.”
Capturing that value, however, is another question entirely. Pado’s approach? Trying to break down deeply ingrained walls between IT systems and energy infrastructure.
“Existing platforms often treat the ‘White Space’ (IT workloads) and ‘Gray Space’ (facilities and storage) as separate silos,” Park said. He added that bridging the gap between revenue-driven IT and the cost-center facility side is how a data center can go from a large load to a flexible grid asset. “Storage is the ‘enabling buffer’ that makes grid-aware compute possible, [as] it allows a data center to act as a virtual power plant without the risk of dropping a single server rack.”
Even so, for most large load users, VPP participation or wholesale market access just isn’t the point.
“If a hyperscaler deployed storage to unlock additional compute capacity and ends up earning revenue from wholesale markets instead, the original investment thesis has failed,” de Diego said, explaining that this is because the financial case for the system was based on the value of the computation it enables, not on its grid participation. While it’s not necessarily a bad thing, he noted that “the moment grid revenue becomes the main reason a hyperscaler’s battery exists, something has gone wrong with the compute story that justified it in the first place.”
Given the regulatory and technical barriers that make it tricky for data centers to participate in wholesale markets and export, rather than simply import, electricity, CPM is likely one of the more realistic ways for batteries to offer data centers a new value stream.
“With the right orchestration, [storage] isn’t just a cost-center for backup,” Park said. “It is a revenue-generating asset that maximizes [a data center’s] CPM and pays for itself through energy market participation.”