Helping the AI Industry Secure Unreleased Models is a National Security Priority
While attention focuses on publicly available models like ChatGPT, the real risk to U.S. national interests is the theft of unreleased “internal models.” To preserve America’s technological edge, the U.S. government must work with AI developers to secure these models.
Mapping Technical Safety Research at AI Companies
This report analyzes the research on safe AI development published by Anthropic, Google DeepMind, and OpenAI, as well as the corporate incentives to pursue different research areas. The analysis reveals where corporate attention is concentrated and where potential gaps remain.