Accelerating AI Data Center Security

Executive Summary

Artificial Intelligence (AI) systems are advancing at breakneck speed and are already reshaping markets, geopolitics, and the priorities of governments. Frontier AI systems are developed and deployed using compute clusters of hundreds of thousands of cutting-edge AI chips housed in specialized data centers. These AI data centers are likely tempting targets for sophisticated adversaries such as China and Russia, which may seek to steal intellectual property or sabotage AI systems underpinning military, industrial, or critical infrastructure projects.

Now is an especially important time for Western nations to address AI data center security.¹ AI data center operators are building at an unprecedented rate, with capital spending on these facilities increasing dramatically since 2022. In the US, these companies have increased their spending on physical assets by 40% from 2022 to 2024, and are planning hundreds of billions of dollars in annual investment, with the vast majority earmarked for AI data centers. These data centers are not on track to be secure against advanced nation-state threats, and due to energy, permitting, and other constraints, many will be built outside the West, where they will be especially vulnerable to attack. Furthermore, it is simpler, more cost-effective, and sometimes only possible to implement security features at the outset rather than to retrofit them after the data centers have been designed and built.

Because AI data centers are of such high strategic importance, the threats they face will be substantial. Chinese cyber operations are particularly capable. State-sponsored Chinese hacking groups have demonstrated the ability to penetrate critical US networks and infrastructure and remain undetected for months or even years. Because private firms do not bear the full societal costs of a cyber breach—including harm to national security and competitors—they are likely to underinvest in security. However, even high-security government systems are frequently breached by foreign actors.

To achieve their aims, adversaries will use a range of techniques. The most urgent areas for improvement are those that (1) are unique to AI hardware, (2) require the advanced capabilities of nation-state actors, and (3) involve vulnerabilities where the defensive responsibility is spread across multiple vendors and providers. Neglected attacker techniques include:

  • Side-channel attacks, where attackers measure electromagnetic, power, or other emissions from hardware to infer secrets such as encryption keys (the sketch following this list illustrates the principle). Today’s AI hardware is likely poorly defended against these attacks, and its proprietary, non-public designs make it difficult for independent researchers to study.

  • Hardware supply chain attacks, where attackers tamper with hardware or supporting infrastructure (such as cables, cooling, or power systems) before installation in order to insert backdoors. Hardware and components manufactured in China are especially risky, but products made elsewhere may also be susceptible.

  • AI model weight exfiltration, where attackers who have gained access transfer the very large model weight files out of the data center, whether through main networks (“in-band”), management networks (“out-of-band”), or covert channels. Frontier model weights are the product of hundreds of millions of dollars of investment in compute, data, and labor, but once stolen they can easily be deployed by adversaries. At the same time, weights are uniquely defensible assets: their terabyte-scale size makes them both impossible to memorize or verbally transmit (unlike algorithms or ideas) and more challenging to exfiltrate covertly (the back-of-envelope calculation after this list makes this concrete).
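
To make the first technique above concrete, the minimal Python sketch below demonstrates the principle behind side-channel leakage using a timing channel, the easiest variant to show in software. It is a toy illustration of the general attack class, not of any real AI hardware vulnerability; the secret value, guesses, and trial count are all arbitrary choices.

```python
import hmac
import time

# Hypothetical secret an attacker wants to recover (illustrative only).
SECRET = b"s3cr3t-api-key-0042"

def naive_compare(a: bytes, b: bytes) -> bool:
    """Byte-by-byte comparison that returns at the first mismatch.

    The early return makes running time depend on how many leading
    bytes of the guess are correct -- an unintended information leak,
    the same principle exploited by power and electromagnetic side
    channels against hardware.
    """
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def avg_time(guess: bytes, trials: int = 200_000) -> float:
    """Average wall-clock time of naive_compare over many trials."""
    start = time.perf_counter()
    for _ in range(trials):
        naive_compare(SECRET, guess)
    return (time.perf_counter() - start) / trials

# A guess sharing a longer correct prefix tends to take longer to
# reject, letting a patient attacker recover the secret byte by byte.
print(avg_time(b"x" * len(SECRET)))      # no correct prefix
print(avg_time(b"s3cr3t" + b"x" * 13))   # six correct bytes: slower

# Mitigation: a constant-time comparison removes the timing signal.
assert hmac.compare_digest(SECRET, SECRET)
```

In practice the per-guess timing difference is tiny and noisy, which is why real attacks average over many measurements; hardware side channels such as power and electromagnetic emissions follow the same recover-the-secret-from-unintended-signals logic.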

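The claim that weights are defensible can be made concrete with simple arithmetic. The sketch below uses assumed, illustrative figures for weight size and attacker bandwidth (not measurements of any real model or facility) to show why terabyte-scale files are hard to move quietly.

```python
# Back-of-envelope: time to exfiltrate terabyte-scale model weights.
# All figures are illustrative assumptions, not measurements of any
# real model or facility.

WEIGHT_BYTES = 2.0e12  # assume ~2 TB of weight files (illustrative)

# Assumed sustained exfiltration rates, in bytes per second.
scenarios = {
    "throttled covert channel (~1 MB/s)": 1e6,
    "small share of a 10 Gb/s link (~100 MB/s)": 1e8,
    "saturating a 100 Gb/s link (~12.5 GB/s)": 1.25e10,
}

for label, rate in scenarios.items():
    hours = WEIGHT_BYTES / rate / 3600
    print(f"{label}: {hours:,.2f} hours")

# Expected output:
#   throttled covert channel (~1 MB/s): 555.56 hours  (~23 days)
#   small share of a 10 Gb/s link (~100 MB/s): 5.56 hours
#   saturating a 100 Gb/s link (~12.5 GB/s): 0.04 hours (~3 minutes)
```

Rate-limited, covert exfiltration takes days to weeks, while saturating a fast link finishes in minutes but is far more likely to trip egress monitoring; either way, defenders get a detection opportunity that does not exist for secrets small enough to fit in an employee’s head.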
Drawing on interviews with and input from 21 experts in cyber and hardware security, this report assesses the state of AI data center security and offers four recommendations for policymakers. To accelerate AI data center security, Western nations should:

  1. Develop an AI data center security standard. No security standard exists specifically for AI data centers despite their unique vulnerabilities. A standard should be developed in phases, beginning with a baseline of current best practices before advancing to levels sufficient to protect against sophisticated nation-state attackers. An AI data center security standard would enable governments to set procurement and export requirements while allowing companies to credibly signal security posture to investors, insurers, and customers.

  2. Fund and incentivize key R&D projects. Important defensive technologies against advanced nation-state threats remain underfunded. Governments can accelerate this technological development through a mixture of funding mechanisms, including Defense Advanced Research Projects Agency (DARPA)-style programs. The research should prioritize neglected but critical areas, including hardening AI chips against side-channel attacks, securing hardware supply chains, and preventing model weight exfiltration.

  3. Establish cyber incident and near-miss intelligence sharing between AI companies and governments. Most AI companies are not currently required to report incidents. OpenAI, for example, chose not to notify authorities after a significant 2023 breach, having judged the attacker to be acting alone, without any connection to a foreign government. Visibility into security incidents would enable governments to better understand the threat landscape and share declassified threat intelligence with companies. It could also incentivize companies to improve their security to avoid reputational damage.

  4. Identify key AI data center components that are now sourced from China, and shift those supply chains to more trusted locations. AI data centers are currently dependent on some components manufactured in China, which creates persistent supply chain attack vulnerabilities and constitutes a chokepoint that adversaries can exploit. Governments should comprehensively map these dependencies and then take steps to decouple.

In the United States, for example, this could involve:

  1. Having the Computer Security Division at the National Institute of Standards and Technology (NIST) develop an AI data center security standard in collaboration with the NIST Center for AI Standards and Innovation (CAISI), the Department of Defense (DOD), the Department of Homeland Security (DHS), the AI Security Center (AISC) at the National Security Agency (NSA), and industry partners

  2. Having the DOD direct DARPA to establish two new program manager positions for AI hardware security and physical supply chain security, and exploring other funding mechanisms, including Other Transaction (OT) authorities, Small Business Innovation Research (SBIR) grants, and challenge prizes

  3. Ensuring that the DHS covers relevant AI data centers under the forthcoming Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) rule, or otherwise establishing a voluntary intelligence sharing framework with the relevant companies, and requiring incident reporting through federal procurement requirements

  4. Having the Department of Commerce (DOC) map AI data center supply chains in collaboration with the DHS and the DOD, and discouraging the sourcing of these components from China through an Information and Communications Technology and Services (ICTS) rule and through export licenses or Validated End User (VEU) authorizations

The immediate and serious risks of inaction include the theft of highly valuable intellectual property and the sabotage of critical AI systems, both of which endanger Western technological leadership and national security.

Endnotes

  1. The term “AI data center security” here refers to security affected by decisions taken when planning, constructing, and operating AI data centers. As such, this report does not address cybersecurity more broadly; it excludes, for example, decisions made when developing or deploying AI models, or when designing systems that use such models. Those decisions are also important, but they are not the focus of this report.

This report was coauthored by Erich Grunewald and Asher Brass Gershovich.
