2024 IAPS Fellows
IAPS is pleased to share the outcomes of our inaugural fellowship program, which concluded last October. This intensive three-month fellowship brought together twelve exceptional individuals from a range of backgrounds to tackle challenges at the intersection of artificial intelligence and public policy. Below, we detail each fellow's background, their key contributions during the program, and their next steps in shaping the future of AI policy.
Christian Chung
Christian is a policy professional and a Ph.D. student at Princeton University's School of Public and International Affairs, where his work examines how advanced AI impacts military strategy, strategic stability, and deterrence.
Drawing on over a decade of experience as a U.S. Government foreign affairs analyst, Christian spent his fellowship developing frameworks for securing federal AI procurement, building on guidelines from the Office of Management and Budget, and exploring, together with fellow Romeo Dean, the geopolitical implications of detection mechanisms for international AI treaties. Christian plans to continue his work through the RAND TASP fellowship and his ongoing Ph.D. research in AI policy and government.
Christopher Covino
Chris is a public policy professional with experience in cybersecurity, infrastructure protection, risk management, and emergency management. He currently works as a contracted senior strategy and policy advisor at the Cybersecurity and Infrastructure Security Agency (CISA). Previously, Chris served as Policy Director for Cybersecurity and Infrastructure Security in the City of Los Angeles Mayor's Office.
During the fellowship, Chris developed recommendations for the Department of Homeland Security's Artificial Intelligence Safety and Security Board. He also developed recommendations for NIST AI-800-1 and for the Secure AI Act, and published an article on self-regulatory models for AI inspired by the electricity industry. Chris will continue his work at CISA while taking on a part-time advisory role at IAPS.
Sumaya Nur Adan
Currently a Senior AI Risk Advisor at the UK's Department for Science, Innovation and Technology, Sumaya has experience examining equitable benefit-sharing and risk mitigation strategies in global AI governance institutions. She has contributed to projects with organizations such as the African Commission and the UN Office for Disaster Risk Reduction.
In her fellowship with IAPS, Sumaya worked on a project examining the goals, format, and challenges of the international network of AI Safety Institutes.
Looking forward, she plans to continue her work as a research affiliate of the Oxford Martin AI Governance Initiative and to keep collaborating with IAPS on a project, commissioned by the International Telecommunication Union, to build a knowledge-sharing platform between governments on AI safety policy.
Romeo Dean
As a final-year concurrent Master's student in Computer Science at Harvard, Romeo focused his fellowship project on exploring, together with fellow Christian Chung, the geopolitical implications of detection mechanisms for international AI treaties. Specifically, his research examined the extent to which world powers can monitor the progress of each other's AI development. Following the completion of his Master's program, Romeo will work for the AI Futures Project, a non-profit started by ex-OpenAI researcher Daniel Kokotajlo.
Rida Fayyaz
Rida is a policy professional with an engineering background, bringing years of technical project management experience from companies like Tesla and Apple. During the fellowship, Rida researched AI agents, including creating a taxonomy of AI agent safeguards.
Since the fellowship, Rida has started a role as a Senior Technical Program Manager in Responsible AI at Microsoft.
Vinay Hiremath
Previously a software engineer with experience in ML engineering, Vinay developed a framework for comprehensive, verifiable, and adaptable auditing of AI usage. His work addressed limitations in current monitoring practices while considering user privacy and external verification needs.
Vinay has accepted a position at the Center for the Governance of AI (GovAI) in London, where he will focus on international policy and standards work under Robert Trager's supervision.
Amin Oueslati
Amin, previously a government digital strategies consultant at McKinsey & Company, focuses on European AI governance. He has conducted research projects with the University of Oxford, the Weizenbaum Institute, and the Bertelsmann Foundation to inform policymakers in Germany and other parts of the EU.
Amin’s fellowship project centered on the EU Code of Practice. He led The Future Society’s contribution on risk identification & assessment (WG2) and risk mitigation (WG3).
Since the fellowship, Amin has joined The Future Society as an Associate in Berlin, where he will represent TFS in Working Groups 2 and 3 of the Code of Practice drafting process and develop policy briefs on third-party involvement in model evaluations.
Ulysse Richard
Ulysse is a Consultant with the Science, Technology and International Security Unit at the UN Office for Disarmament Affairs (UNODA), focusing on AI governance in the military domain. He is also a Master’s Candidate in International Security and International Relations at Sciences Po and Peking University.
For the fellowship, Ulysse mapped out which confidence-building measures nation states could realistically agree upon and implement to reduce unpredictability in AI development and deployment.
Ulysse will continue working with UNODA while completing his Master's thesis.
Jonathan Schmidt
Bringing experience from his Master's degree in Cognitive Sciences and prior research at the Center for Democracy and Technology, Jonathan centered his fellowship work on analyzing drafts of the EU Codes of Practice at The Future Society, with a particular focus on post-market monitoring obligations.
Since the IAPS fellowship, Jonathan has joined TFS's EU team in Brussels as an Analyst, where he will research criteria for classifying systemic risk in general-purpose AI models and assist with other Codes of Practice-related work.
Tereza Zoumpalova
Drawing on her Master's degree in International Economic Policy from Sciences Po Paris and internships at the European Parliament and the OECD, Tereza spent the fellowship leading the public consultation process for civil society engagement in the run-up to the Paris AI Action Summit, under the supervision of Caroline Jeanmaire at The Future Society.
After the fellowship, Tereza joined The Future Society as an Associate in Paris working on projects related to the AI Action Summit and engagement with the French AI governance ecosystem.
Jamie Bernardi
Prior to the program, Jamie co-founded BlueDot Impact, an organization providing introductory courses on AI technology and governance to individuals looking to learn about or switch into AI-relevant careers, and conducted research at the Center for the Governance of AI (GovAI) as a seasonal fellow.
During the fellowship, Jamie explored innovative approaches to cyber defense. His work focused on early-access agreements between AI developers and defense agencies, proposing practical frameworks for implementation. Moving forward, Jamie plans to pursue a role in UK AI policy.
Tao Burga
Tao is a recent BA graduate from Brown University where he investigated the risks of AI integration into nuclear command and control systems through a comparative study of human and large language model behavior.
Through the fellowship, Tao further investigated on-chip governance mechanisms and the AI-nuclear analogy. He co-wrote a paper with the Center for a New American Security (CNAS) exploring the value of on-chip mechanisms to US chip firms, and wrote a literature review of the AI-nuclear analogy and its implications for compute governance.
After the fellowship, Tao joined the Institute for Progress as a Non-Resident Fellow; he will also work at New York University on AI governance and digital minds, and will partner with CNAS on a project on international compute governance.
IAPS is proud of the work our fellows have completed. During their time with IAPS, each fellow contributed thoughtful analysis and research to ongoing policy discussions. We look forward to seeing how their work develops as they move forward in their careers and continue engaging with critical questions in AI governance.
For those interested in joining future cohorts, you can sign up for our newsletter to be notified when applications open in 2025. We welcome applications from individuals with diverse backgrounds in technology and policy who share our commitment to the effective governance of advanced AI.