2025 AI Policy Fellows
-

Raghav Akula
2025 FELLOW
Raghav Akula is a student at Georgetown University’s School of Foreign Service majoring in Science, Technology, and International Affairs. He has contributed to projects for ODNI and MITRE on economic statecraft and critical minerals, an AI startup focused on geoeconomic strategy and prediction, a national security accelerator analyzing Chinese military AI, and a global governance think tank.
At IAPS, he worked on one project examining high-bandwidth memory production and export controls, and a second project differentiating the objectives of various types of "AI races."
-

Rafael Andersson Lipcsey
2025 FELLOW
Rafael is a political scientist and economist with a career spanning central banking, diplomacy, research, and policy work. His work has focused on the governance challenges posed by AI diffusion, including contributing to the drafting of EU codes of practice for frontier AI models.
Over the course of the Fellowship at IAPS, Rafael researched the advantages, limitations, and risks of a pan-European distributed AI strategy and contributed to developing testing and evaluation measures for military AI systems.
-

Bruna Avellar
2025 FELLOW
Bruna is an international lawyer with an LL.M. in Public International Law from Leiden University. As an IAPS Fellow, she worked with RAND on a project mapping verification mechanisms for U.S. export controls on advanced AI chips. Previously, as an AI Governance Fellow at Pivotal Research, she conducted research on semiconductor export-control evasion pathways and Brazil’s emerging role in global AI governance.
Before moving into AI governance, Bruna worked on sovereign litigation at an international law firm and held roles at the United Nations in New York and Geneva.
-

Dave Banerjee
2025 FELLOW
Dave Banerjee is a research associate at IAPS and a research manager at ERA. His research interests include securing frontier AI systems and preventing AI-driven power centralization. During the Fellowship, he wrote a report on AI integrity: ensuring that AI systems are not tampered with, backdoored, or given secret loyalties.
-

Josh Brause
2025 FELLOW
Josh Brause is a U.S. Government Deployment Strategist at Palantir Technologies. His previous experience includes serving as a Visiting Fellow at Taiwan’s Institute for National Defense and Security Research and co-founding PLATracker, an open-source platform providing analysis of security dynamics in East Asia. A graduate of Colby College, Josh has worked across the U.S., Israel, and Taiwan at the nexus of emerging technologies and foreign policy.
As an IAPS Fellow, Josh researched how states build "AI sovereignty": the energy, compute, chip, data, and model assets that let them develop and deploy frontier AI on their own terms.
-

Su Zeynep Cizem
2025 FELLOW
Su Zeynep Cizem is an AI governance analyst focusing on global coordination and safety frameworks for advanced AI systems. During her IAPS Fellowship, she was placed at The Future Society, where she co-led and launched the Global Call for AI Red Lines at the 80th UN General Assembly.
Her Fellowship work focused on advancing international commitments to prevent unacceptable AI risks and supporting multilateral processes on incident reporting and frontier-AI governance.
-

Andrea Fiegl
2025 SENIOR FELLOW
Andrea Fiegl brings 15+ years of cross-sector experience to questions of technology and governance. She has directed U.S. Government investments of more than $250M at USAID and advanced bipartisan foreign policy on the Senate Foreign Relations Committee. Her current policy work focuses on emerging technologies and democratic resilience, with a specific focus on AI governance. She has held fellowships with the National Endowment for Democracy, the Wilson Center, and IAPS. Trained in ethics and political philosophy, Andrea combines analytical rigor with practical expertise.
-

Brendan Halstead
2025 FELLOW
Brendan Halstead is a MATS scholar with the AI Futures Project. He developed a growth model of AI capabilities progress to inform compute governance and international coordination proposals. His other work aims to determine conditions under which states might rationally pursue increasingly powerful AI for decisive military advantage, despite escalation or loss-of-control risks.
-

Rebecca Hawkins
2025 FELLOW
Rebecca Hawkins brings cross-sector experience to AI governance challenges. She developed systems to help AI security researchers in the non-profit sector deliver quality work and receive $10M/year in philanthropic funding, consulted for Fortune 500 companies at Quantium on data-driven strategy, and published policy research with the Oxford Martin School.
-

Sven Herrmann
2025 SENIOR FELLOW
Sven Herrmann brings experience from a variety of sectors. Most recently, he worked as Head of Research Operations at a research institute at Oxford University. He has also worked at a non-profit in the sustainability sector and as a management consultant. He holds a PhD in mathematics.
During his Fellowship, Sven worked in the European AI Governance team at The Future Society, focusing on developing policies to prevent serious AI incidents by learning from other industries. He also worked on a couple of internal projects at IAPS, including one on digital minds governance.
-

Craig Jolley
2025 SENIOR FELLOW
Craig Jolley works at the intersection of AI policy and international development, currently at the World Bank and previously at USAID. Before joining the U.S. government, Craig worked as a researcher in physics and computational systems biology; he is also interested in the potential and limitations of AI-for-science and the role of AI in biodefense and biosecurity.
As a Fellow, Craig worked with IAPS on developing a proposal for a U.S.-China AI incident notification mechanism to mitigate risks from loss-of-control, model weight theft, and other AI incidents of international concern.
-

Natasha Karner
2025 FELLOW
Natasha Karner is a Research Associate at The Alan Turing Institute in the AI for Data-Driven Advantage (AIDA) defence policy workstream. Previously, Natasha was a scholar in the OSCE and UNODA Arms Control Programme. She is a Junior Associate Fellow at NATO Defense College and a member of BASIC’s Emerging Voices Network for nuclear weapons policy.
During her IAPS Fellowship, she worked on comparing UK and NATO defence AI adoption policy, as well as on test, evaluation, verification, and validation (TEVV) for frontier AI in military applications.
-

Maria Kostylew
2025 FELLOW
Maria Kostylew is a research manager at the ML Alignment & Theory Scholars (MATS) Program, where she oversees a mix of governance and evaluations streams, and a PhD student at the University of Oxford, where she focuses on digital authoritarianism. She previously worked at the Centre for European Policy Studies, focusing on EU AI governance and digital democracy.
-

Jackson Lopez
2025 FELLOW
Jackson specializes in U.S.-China AI competition and governance. During his Fellowship, he worked at Oxford University's AI Governance Initiative on AI-enabled authoritarianism and on programming for the AI Impact Summit.
He has held fellowships with the Hudson Institute’s Political Studies Program and the Hertog Foundation’s Security Studies Program. His writing on foreign policy has appeared in the Washington Examiner and the Lowy Institute’s Interpreter.
-

Hamish Low
2025 FELLOW
Hamish Low was previously a Summer Fellow at GovAI researching UK AI sovereignty and compute strategy and worked as a Research Analyst in London covering tech and telecoms. He holds an MA in International Political Economy from King's College London, where his research focused on China’s industrial policy in the semiconductor industry and the geopolitics of compute. He also has an MA in History & Politics from the University of Oxford.
During the Fellowship, Hamish worked on understanding the geopolitical relevance of Russia in a world of powerful AI with the UK AI Security Institute.
-

Yohan Mathew
2025 FELLOW
Yohan is an experienced technologist with a background in AI safety/governance research, ML/software engineering and civic tech. His research areas over the last two years include AI model evaluations, societal impacts of AI, and agent governance. He’s also an AI safety advisor to Tattle, an Indian civic tech organization, and was on the board of a policy advocacy nonprofit in Canada.
During the IAPS Fellowship, he worked on a report on highly autonomous cyber-capable agents (HACCAs).
-

Conor McGlynn
2025 FELLOW
Conor McGlynn is a PhD student in Public Policy at Harvard University. His IAPS Fellowship project focused on international technology competition. He was a 2020-2021 Schwarzman Fellow at the Kissinger Institute on US-China Relations in Washington D.C. and a 2019-2020 Schwarzman Scholar at Tsinghua University in Beijing. He holds degrees in philosophy and economics from the University of Cambridge and Trinity College Dublin.
-

Oliver Ritchie
2025 SENIOR FELLOW
Oliver Ritchie is a UK policy expert focused on safe, responsible growth from AI. Before IAPS, he worked with GovAI on frontier AI regulation and government engagement. His background is in the UK civil service, where he led the team advising the Chancellor on Covid-19 developments and advised on topics including tax reform and international negotiations. He has also led projects for a foundation supporting academics at Oxford.
At IAPS, he wrote a paper on the fiscal implications of advanced AI and laid the groundwork for a new UK AI strategy project that he hopes to launch soon.
-

Brianna Rosen
2025 SENIOR FELLOW
Dr. Brianna Rosen is Executive Director of the Oxford Programme for Cyber and Technology Policy at the Blavatnik School of Government, University of Oxford, and a Senior Fellow at Just Security. She leads research on securing AI systems, global security, and the governance of emerging technologies. She previously served for over a decade in the U.S. government, including at the White House National Security Council and Office of the Vice President.
-

Aditya Singh
2025 FELLOW
Aditya Singh is a PhD Candidate at The George Washington University and is a Fellow of the "Co-Design of Trustworthy AI Systems" program and an Affiliate of the National Science Foundation "Institute for Trustworthy AI in Law & Society".
For his IAPS Fellowship, Aditya worked on a report about highly autonomous cyber-capable agents (HACCAs), focusing on how agents might seek to gain financial and compute resources. Additionally, he helped lead a project outlining key issues in the test and evaluation of AI systems for the military, providing specific recommendations to address them.
-

Zaina Siyed
2025 FELLOW
Zaina Siyed is a UC Berkeley graduate working in security governance for frontier technologies. Her past experience includes public interest cybersecurity work for at-risk civil society organizations at UC Berkeley’s Center for Long-Term Cybersecurity, as well as Security Governance, Risk & Compliance work at a printed circuit board software company. Zaina currently works on Security Governance, Risk & Compliance at OpenAI.
-

Niel Swanepoel
2025 FELLOW
Niel Swanepoel is an AI policy analyst specializing in compute governance and U.S.–China relations. During his IAPS Fellowship, he developed an interactive dashboard mapping the frontier AI supply chain and worked on industrial policy advocacy projects.
Niel is concurrently completing his Master’s in Foreign Service at Georgetown and was a summer research assistant at Harvard Law’s Berkman Klein Center, studying frameworks for AI agents and AI interpretability. Previously, he also interned at CSET, tracking U.S.-China AI legislation.
-

Matt Smith
2025 FELLOW
Matt Smith spent five years as a Senior PM in Microsoft’s Security Division, where he developed cybersecurity strategies for sovereign/national cloud environments and launched an employee group for AI safety advocacy. Matt earned his PhD in Nobel laureate David Baker’s lab, where he worked on computational design and experimental characterization of novel enzymes. His current work at the Johns Hopkins Center for Health Security focuses on the intersection of artificial intelligence and biosecurity, identifying potential risks, opportunities, and mitigation strategies.
-

Lara Thurnherr
2025 FELLOW
Lara Thurnherr is an AI governance researcher focused on the intersection of geopolitics, information security, and AI. During the Fellowship, she researched what a scenario in which AI capabilities continue to advance while becoming harder to develop would mean for frontier AI company security as a geopolitical asset, and completed a research sprint on theories of change for the 2027 AI Summit.
Previously, she completed a Master’s in Cyber Strategy and Policy, was a visiting fellow at the Centre for the Governance of AI, and was a research affiliate at the Royal United Services Institute.
-

Joshua Turner
2025 FELLOW
Joshua Turner holds a BA in Computer Science and Mathematics from Grinnell College. He recently interned at CSIS, where he supported research on AI and national security while conducting an independent study on Chinese compute. Before CSIS, he worked at Brookings' Center for Technology Innovation, where he published on California AI regulation and US AI policy.
During the IAPS Fellowship, he worked with the AI Futures Project on blog posts about scenario planning for AI policy, the AI verification technology ecosystem, and forecasting Chinese lithography progress.
-

Steven Veld
2025 FELLOW
Steven Veld studies computer science and math at the University of California, Los Angeles, and has a background in machine learning research. His past work spans AI biosecurity, international compute governance, AI forecasting, and chain-of-thought monitoring in large language models.
During the IAPS Fellowship, he developed recommended revisions for the AI Risk Evaluation Act and a staffer-level educational briefing on evaluation awareness in frontier AI systems.
-

Mac Walker
2025 FELLOW
Mac Walker recently completed an MPhil in Biotechnology at the University of Cambridge, where he researched biological foundation models for drug discovery at the Milner Therapeutics Institute. He served on the Cambridge AI Safety Hub committee and has conducted technical AI safety research on model interpretability.