Global data center capacity is projected to triple by 2030, demanding an estimated $6.7 trillion in investments across the compute power value chain. Yet behind the surge in AI applications, cloud migrations, and hyperscale expansions lies a critical question: who will pay for it all?
Infrastructure investors are increasingly supplying the vast capital needed to meet soaring computing demands, showing strong appetite for funding the data center buildout required to support projected load growth.
In 2024 alone, private investment in the sector totaled $108 billion, and this momentum shows no signs of slowing. This year, infrastructure investors have continued to raise significant funds dedicated solely to data centers.
For example, Blue Owl Capital launched a $7 billion vehicle targeting data center and connectivity assets worldwide with an emphasis on hyperscale facilities. Similarly, Ares Management is backing a $2.4 billion fund aimed at data center development in Japan, while PGIM has raised $2 billion for a global fund focused on the sector. Many other investors, too, are directing substantial capital into the data center opportunity set.
As assets, data centers share many hallmarks of traditional infrastructure. They can offer long-term contracted cash flows, benefit from high barriers to entry, and require substantial capital outlays.
That said, these facilities carry nuances across geographies, end-tenants, and revenue profiles, each with different implications for unit economics. Each segment of the market must be evaluated through a distinct lens, shaped by its underlying value drivers and challenges.
For example, in developed markets like North America and Western Europe, where power infrastructure is relatively stable, energy efficiency and sustainability metrics often define a data center’s competitive advantage. In contrast, in emerging markets with less stable grids, reliability metrics may take priority.
Energy efficiency targets also vary across market segments. Leading-edge hyperscale data centers are expected to deliver higher energy efficiency performance than, for example, older facilities. Similarly, sites in temperate climates typically achieve better scores than those in hotter regions, where cooling requirements are greater.
Further, the KPIs investors prioritize can vary by strategy. Growth-focused investors may emphasize capacity expansion or efficiency gains from emerging cooling technologies, while core infrastructure investors with longer horizons may focus more on reliability metrics that underpin stable, long-term returns.
All of which is to say: building a successful data center portfolio requires the ability to track a broad range of operational data alongside financials, allowing investors to contextualize performance, identify value-creation opportunities, deliver actionable insights to investment and management teams, and ultimately drive stronger portfolio returns.
While a range of factors influence data center economics, capacity, efficiency, and reliability are key valuation drivers. For investors navigating the data center ecosystem, tracking metrics across these pillars is critical.
Data center capacity metrics offer insight into both current operational efficiency and future growth potential. Key metrics to track include, but are not limited to:
Rack density, defined as the amount of power consumed (kW) per rack or cabinet, plays a significant role in data center unit economics. For one, greater rack density enables more computing power within a smaller footprint, maximizing existing capacity and making more efficient use of available space.
High-density racks are also needed to support advanced AI and machine learning workloads that demand 30–50 kW per rack — far above the typical 5–8 kW. Therefore, being able to support these demands can provide access to hyperscale tenants, many of whom are willing to pay a premium for facilities equipped to handle these workloads.
This said, while higher rack density can improve space utilization and attract blue-chip clients, it often comes with increased energy costs due to the combined demands of IT equipment and enhanced cooling requirements.
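As a rough illustration, the minimal Python sketch below computes average rack density and flags whether a hall could court AI workloads. The facility figures are hypothetical; the thresholds mirror the ranges cited above.

```python
# Illustrative sketch: classifying a hall by average rack density.
# Thresholds follow the figures cited above (5-8 kW typical, 30-50 kW for AI).

def rack_density_kw(total_power_kw: float, racks: int) -> float:
    """Average power draw per rack (kW), the standard density measure."""
    return total_power_kw / racks

def classify_density(density_kw: float) -> str:
    if density_kw >= 30:
        return "AI/ML-ready (premium tenant potential)"
    if density_kw > 8:
        return "high density"
    return "typical enterprise (5-8 kW)"

# Hypothetical example: a 2 MW hall with 50 racks averages 40 kW per rack.
print(classify_density(rack_density_kw(2_000, 50)))  # AI/ML-ready
```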
Stranded power capacity per rack refers to power capacity delivered to a rack (for example, one designed for 10 kW) but only partially used (say, 5 kW), leaving the remaining 5 kW “stranded” and unusable.
Stranded capacity decreases operating efficiency, as the provisioned power and cooling are never recouped through IT workload growth or revenue. This wastes resources and increases the per-unit costs of delivering compute, storage, and network services.
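A minimal sketch of how stranded capacity might be quantified across a sample of racks (all figures hypothetical):

```python
# Illustrative sketch: quantifying stranded power across racks.
# Stranded capacity = provisioned kW minus actually drawn kW, per rack.

racks = [
    {"provisioned_kw": 10, "used_kw": 5},
    {"provisioned_kw": 10, "used_kw": 9},
    {"provisioned_kw": 8,  "used_kw": 3},
]

stranded = sum(r["provisioned_kw"] - r["used_kw"] for r in racks)
provisioned = sum(r["provisioned_kw"] for r in racks)
print(f"Stranded: {stranded} kW ({stranded / provisioned:.0%} of provisioned)")
# Stranded: 11 kW (39% of provisioned)
```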
A data center’s occupancy rate measures the percentage of available space, racks, cabinets, or compute capacity that is successfully leased and actively used by tenants, compared to the total available inventory in the portfolio. Tracking the amount of floor space still available for new cabinets or leases is also helpful for gaining a clear view of a facility’s remaining expansion potential.
Higher occupancy translates into more revenue-generating capacity, boosting top-line income and enabling assets to command premium pricing due to scarce available space. Further, occupancy rates determine how fixed and variable costs are distributed across tenants. When costs are spread across more paying customers, the cost per unit (such as per rack or per square foot) drops, boosting overall profit margins.
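To illustrate the margin mechanics, a short Python sketch with assumed cost figures shows how a fixed cost base spreads across more leased racks as occupancy climbs:

```python
# Illustrative sketch: how occupancy spreads fixed costs across tenants.
# All figures are hypothetical.
total_racks = 1_000
annual_fixed_opex = 12_000_000   # assumed annual fixed operating costs ($)

for leased in (600, 850, 950):
    occupancy = leased / total_racks
    cost_per_rack = annual_fixed_opex / leased  # fixed cost per revenue-generating rack
    print(f"{occupancy:.0%} occupied -> ${cost_per_rack:,.0f} fixed cost per leased rack")
# 60% occupied -> $20,000 ... 95% occupied -> $12,632
```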
Alongside capacity, reliability is an equally critical driver for data centers, as downtime leads to significant financial losses, reputational damage, and disrupted operations. Some of the core reliability metrics data center investors should track include but are not limited to:
Data center uptime tracks the percentage of time facilities and IT systems remain fully operational without interruption. The industry benchmark is ‘five-nines’ reliability — or 99.999% uptime — which translates to just over five minutes of downtime per year. For reference, 99% uptime would mean nearly four full days offline.
Maintaining this level of uptime is critical for data center clients, as even brief outages can trigger significant losses. For example, enterprise downtime averages about $9,000 per minute, meaning a single hour-long outage can cost more than $500,000. Strong uptime performance is therefore critical to prevent revenue loss, retain customers, attract new business, and secure stronger lease terms.
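The arithmetic behind these figures is straightforward. A short sketch converts uptime percentages into annual downtime and indicative cost, using the $9,000-per-minute enterprise average cited above:

```python
# Illustrative sketch: translating uptime percentages into downtime and cost.
# The $9,000/minute figure is the enterprise average cited above.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(uptime_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for uptime in (99.0, 99.9, 99.99, 99.999):
    minutes = annual_downtime_minutes(uptime)
    print(f"{uptime}% uptime -> {minutes:,.1f} min/yr "
          f"(~${minutes * 9_000:,.0f} at $9,000/min)")
# 99% allows ~5,256 min (nearly four days); five-nines allows ~5.3 min
```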
Mean time to recovery (MTTR) refers to the average time it takes to restore a system, service, or piece of equipment to full operational status after a failure or outage occurs. Meeting or exceeding the MTTR outlined in service level agreements (SLAs) helps avoid costly penalties or compensation to customers for service disruptions, protecting profit margins.
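A minimal sketch of computing MTTR from a hypothetical incident log and checking it against an assumed SLA target:

```python
# Illustrative sketch: MTTR from an incident log, checked against an SLA.
incident_durations_min = [12, 45, 8, 30]  # hypothetical outage durations

mttr = sum(incident_durations_min) / len(incident_durations_min)
sla_mttr = 30                             # hypothetical SLA target (minutes)
print(f"MTTR: {mttr:.1f} min ({'within' if mttr <= sla_mttr else 'breaching'} "
      f"the {sla_mttr}-minute SLA)")
# MTTR: 23.8 min (within the 30-minute SLA)
```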
Efficiency is also a crucial factor influencing the economics of data centers, directly impacting operational costs and, in turn, profitability.
Energy costs can account for as much as half of a data center’s total operating expenses. Power usage effectiveness (PUE) — the ratio of total facility energy consumption to the energy used by IT equipment — is the industry standard for measuring data center energy efficiency.
A PUE of 1.0 indicates perfect efficiency, with every unit of energy going directly to IT equipment. While some of the newest, cutting-edge hyperscale data centers are approaching this level of efficiency, most modern facilities still report average PUEs in the 1.4–1.6 range. Improving PUE scores, for example through cooling and airflow optimization, lowers energy costs and can directly boost cash flow and, in turn, valuations.
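As a worked example with hypothetical consumption figures, PUE follows directly from the definition above:

```python
# Illustrative sketch: PUE = total facility energy / IT equipment energy.
total_facility_kwh = 1_500_000  # hypothetical monthly consumption
it_equipment_kwh = 1_000_000

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE: {pue:.2f}")  # 1.50: every IT kWh needs 0.50 kWh of overhead
```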
Temperature per cabinet is a KPI that measures the temperature of the air entering or surrounding each server cabinet or rack, and it carries several downstream implications. For example, high-temperature zones or hot spots can strain equipment reliability, increase failure risks, and lead to downtime or performance degradation.
Cooling can account for 40–50% of a data center’s total energy consumption. The cooling efficiency ratio (CER) assesses how effectively a cooling system removes heat from the data center per unit of electrical energy it uses.
For example, if a data center cooling system removes 100 kW of heat but consumes 50 kW of electrical energy, the CER would be 2.0, meaning it removes twice the heat energy compared to the electricity consumed.
A high CER signals a system that can effectively remove heat while using relatively little power. A low CER, by contrast, points to a cooling system that consumes more energy than the heat it eliminates, driving up costs and undercutting efficiency.
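Restating the worked example above in code form:

```python
# Illustrative sketch: CER = heat removed / cooling electricity consumed,
# using the worked example above.
heat_removed_kw = 100
cooling_power_kw = 50

cer = heat_removed_kw / cooling_power_kw
print(f"CER: {cer:.1f}")  # 2.0: removes twice the heat energy it consumes
```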
As infrastructure investors expand their data center portfolios, Chronograph offers robust data collection, analytics, and reporting tools. Today, 4 out of the 5 largest infrastructure investors globally leverage Chronograph GP to gather the granular data needed for deeper asset and fund-level insights and more detailed reporting — all while achieving significant efficiencies by centralizing data and automating its integration into downstream use cases.
GPs can seamlessly collect any granular data center KPIs reported by portfolio companies and position them alongside asset financials. This capability is particularly valuable for those managing portfolios across diverse infrastructure sectors, where manual processes or rigid data templates often make it difficult to maintain such granular insights.
Additionally, investors can scale data collection alongside asset development. Consider a typical data center ‘platform,’ which often combines operating and greenfield assets. With Chronograph, investors can seamlessly track granular KPIs on operational assets while monitoring the progress of greenfield projects — and then transition to capturing operational data as those projects come online.
Investors can also easily capture more data as portfolio companies expand their reporting. For example, as sustainability metrics — such as carbon usage efficiency and emissions — gain greater importance amid rising regulatory scrutiny, GPs can collect these metrics as portfolio companies report them without time-intensive configuration.
Chronograph also provides a suite of qualitative data collection tools, enabling investors to centrally gather fields, commentary, tables, and other contextual information. For example, if a data center outage occurs, GPs could issue a survey-style data request to understand the cause, whether any SLA requirements were breached, and the resulting financial impact of the incident.
They could also gather richer insights to complement operational data. For instance, alongside tracking a cooling efficiency metric, they could ask data centers to fill out a field on the specific cooling technology they’re using. Similarly, in addition to tracking cabinet temperatures, they could request the age of the equipment itself.
With financial, operational, and qualitative data housed in one central location, investors can more easily surface deeper analysis and insights. For example, building on the above examples, they could examine which cooling technologies are driving the strongest CER scores across the portfolio — a particularly useful insight for data centers building higher-density racks that risk a ‘density cost spiral’ without efficient cooling.
They could also identify the typical age at which equipment begins to experience higher temperatures or hot spots that drive up PUE ratings, giving CFOs portfolio-wide visibility to help management teams anticipate and plan for upgrades to maintain operational efficiency.
GPs can also automate LP reporting, integrating financials, operational metrics, and qualitative data into final reports, enabling more in-depth outputs that would prove difficult to generate with manual processes.
For example, at a high level, investors could seamlessly aggregate portfolio exposures across the data center sector, including brownfield, greenfield, and geographic breakdowns, while at the company level, they can pull in underlying revenue or contract models for individual data centers.
Teams can also craft a compelling narrative around both an asset’s historical performance and its forward-looking potential for buyers at exit. For example, investors can pull an asset’s uptime or occupancy history to demonstrate reliability, or highlight how recent sustainability initiatives are improving PUE ratings, illustrating the opportunity for further efficiency gains.
Explore our Private Infrastructure Report to see how investors are deploying capital across the data center landscape, the different approaches they’re taking to mitigating power bottlenecks, and more.