
Image Credit: nuclearweapons.info
Ava Ward
The America-Eurasia Center
International Security Program
www.eurasiacenter.org
www.usebc.org
What role should AI play in the United States’ Nuclear Command, Control, and Communications?

Image Credit: Business Today
Experts are debating whether AI should be integrated into nuclear weapons command, control, and communications (NC3). Their support for integration varies, stemming from differing views on deterrence, automation, and intelligence. Proponents argue for partial AI integration in NC3 because it would strengthen deterrence by drastically increasing the speed and volume at which intelligence is processed. Opponents argue that AI integration is too risky because it removes human control and heightens the chance of delivering false or misleading intelligence.
Using AI in NC3, while exciting for the many revolutionary possibilities it opens, also presents significant challenges and risks. With our current level of AI knowledge, the best way to utilize this technology is in support of deterrence initiatives.
History of Artificial Intelligence
The birth of artificial intelligence dates back to 1950, when Alan Turing published “Computing Machinery and Intelligence.” The paper proposed a test to measure machine intelligence, now known as the Turing Test.1 In 1955, the term “artificial intelligence” was coined by John McCarthy and has since become popularized. OpenAI, a company focused on building large language models, was founded in 2015; by 2020 its systems were using machine learning and rule-based methods to assist humans by automating tasks, solving problems, and making predictions.2 As this technology became popular amongst civilians, researchers and policymakers began asking how it could be implemented in nuclear weapons command, control, and communications.
On April 26, 2023, the Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023 was introduced to safeguard the nuclear command and control process from any future policy change that would allow AI to make nuclear launch decisions.3 The act’s introduction indicates that some policymakers have acknowledged that any integration of AI into NC3 should be closely monitored, as it is a new frontier of technology still in active development.
The Risk of AI in NC3
The key dangers of integrating AI into NC3 are best understood in terms of deterrence, automation, and intelligence.

Photo Credit: Mitchell Institute for Aerospace Studies
First, the case for AI in the United States’ deterrence strategy is weakened by the fact that AI cannot develop emotional responses. In decisions of paramount importance that demand a great degree of moral evaluation, emotions can help produce the best judgment. Beyond AI’s inability to understand the emotional gravity of nuclear conflict, with human lives at stake,4 the technology also remains untrustworthy due to its problems with bias, interpretability, and explainability.5
Automation is another aspect of NC3 where integrating AI can be dangerous, as AI is susceptible to cyber-attacks, system failures, and automation bias, all of which hinder decision-making processes in NC3. AI is vulnerable to cyber-attacks such as data poisoning, in which adversaries maliciously introduce corrupted data into a system’s training set. System failure can manifest in many forms and degrees of severity, ranging from a minor glitch in the system6 to spontaneous strategic deception, where a system lies to its own users to accomplish the goal it has been given.7 Furthermore, automation bias is problematic when introducing AI in NC3 because operators may place either too little or too much trust in AI results, and the AI itself may produce results from data carrying inherent biases or rules unknowingly transferred from human behavior.8
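To make the data-poisoning risk concrete, below is a minimal, hypothetical sketch using scikit-learn on synthetic data, with no relation to any real NC3 system: an adversary who can silently flip a fraction of training labels degrades a model’s accuracy without ever touching its code.

```python
# Minimal data-poisoning demonstration on synthetic data.
# Assumes scikit-learn; no relation to any real NC3 system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison": an adversary silently flips 20% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy, clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy, poisoned training data:", poisoned_model.score(X_test, y_test))
```

The unsettling feature this illustrates is that the poisoned model still trains and runs normally; the damage surfaces only in degraded outputs, which is precisely what makes such attacks hard to detect in an operational system.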
Lastly, AI is a risky variable in NC3 because the data informing it can be untrustworthy, insufficient, and incomprehensible. If decision makers must assume that their intelligence collection means are compromised, the result can be delayed or poor decisions in a nuclear conflict.9 Because data on nuclear conflicts is sparse, AI must rely on simulated data in place of missing real-life data. As simulated data does not have the same validity as real data, decision makers cannot rely on AI with earnest reassurance as an intelligence tool during conflicts. And while AI produces analytical results distinct from a human’s, its analysis in NC3 can be problematic because those results are not intuitive and cannot be understood through simple logic.10
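The gap between simulated and real data can be illustrated with another small, hypothetical sketch; both label rules below are invented stand-ins. A model trained on a simulator’s simplified rule scores well against the simulator but worse against the “real” rule it was meant to approximate.

```python
# Toy illustration of the simulated-vs-real data gap. Both label rules
# are invented stand-ins; assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def real_label(X):
    # The "real world" rule, with a nonlinearity the simulator misses.
    return (X[:, 0] + 0.8 * X[:, 1] ** 2 > 1.0).astype(int)

def simulated_label(X):
    # The simulator's simplified, linearized approximation of that rule.
    return (X[:, 0] + 0.8 * X[:, 1] > 1.0).astype(int)

X_sim = rng.normal(size=(2000, 2))
X_real = rng.normal(size=(2000, 2))

# Train only on simulated labels, since real conflict data is sparse.
model = LogisticRegression().fit(X_sim, simulated_label(X_sim))

print("accuracy vs. simulated rule:", model.score(X_sim, simulated_label(X_sim)))
print("accuracy vs. real rule:     ", model.score(X_real, real_label(X_real)))
```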
The Benefits of AI in NC3: Innovating U.S. Deterrence
Given that our knowledge of AI is still limited in these early stages of research and development, AI integration in nuclear security should be confined to aiding deterrence initiatives such as improving warhead design and the manufacturing components of stockpile stewardship.
Machine learning (ML) would be an asset for nuclear weapon designers because it can reimagine warhead designs, optimizing them for metrics such as manufacturability, longevity, reliability, and cost.11 Computer scientists and nuclear physicists could jointly use ML to explore these design metrics efficiently, training models on a wealth of databases and resources to produce innovative trade-offs among them. If AI were implemented for this purpose under close human supervision and testing, the United States government, in both the defense and energy sectors, could save time, resources, and funds. Furthermore, keeping this new technology away from high-stakes responsibilities, such as launching a weapon, would help protect national and global security.
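As an illustration of the kind of workflow this implies, here is a deliberately toy surrogate-modeling sketch. The design parameters and the “simulation” are invented placeholders; real design codes and metrics are classified and vastly more complex.

```python
# Deliberately toy surrogate-modeling sketch; the parameters, the
# "simulation", and the metric are invented placeholders. Assumes scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def expensive_simulation(designs):
    # Stand-in for a costly physics code scoring each candidate design.
    return (np.sin(3 * designs[:, 0]) + designs[:, 1] ** 2
            - 0.5 * designs[:, 2] + 0.1 * designs[:, 3])

# Run the costly simulation on only a few hundred sampled designs.
train_designs = rng.uniform(0, 1, size=(500, 4))
train_scores = expensive_simulation(train_designs)

# Fit a cheap surrogate model to stand in for the simulation.
surrogate = RandomForestRegressor(random_state=0).fit(train_designs, train_scores)

# Screen a far larger pool of candidates at negligible cost.
candidates = rng.uniform(0, 1, size=(10000, 4))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("most promising candidate parameters:", best)
```

The point of the pattern is that the expensive simulation runs only a few hundred times while the cheap surrogate screens tens of thousands of candidates; that asymmetry is where the savings in time, resources, and funds would come from.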

Photo Credit: CSIS Project on Nuclear Issues
Another deterrence initiative AI could greatly assist is the manufacturing side of stockpile stewardship. The technology can support manufacturing by automatically digesting technical documents, developing simulations, and deploying computing platforms to analyze and predict the performance, safety, and reliability of nuclear weapons.12 This would be advantageous for U.S. stockpile programs because they could certify functionality without nuclear weapons tests. Initiatives such as the Stockpile Stewardship Program at Los Alamos National Laboratory would reap many benefits from this technological innovation.
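To ground the document-digestion piece, here is a minimal retrieval sketch using TF-IDF. The three document strings are hypothetical placeholders, not real records; a production pipeline would be classified and far more sophisticated.

```python
# Toy sketch of "digesting technical documents": TF-IDF retrieval over a
# tiny hypothetical corpus. Assumes scikit-learn; the document strings
# are invented placeholders, not real records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "component aging study for high-explosive lens assemblies",
    "neutron generator reliability test results, fiscal year summary",
    "thermal cycling qualification report for arming subsystem",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

# Rank the corpus by similarity to an engineer's free-text query.
query = vectorizer.transform(["reliability of the neutron generator"])
scores = cosine_similarity(query, doc_vectors).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:.2f}  {docs[i]}")
```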
Manageable Integration for a Secure Future
Practitioners in the nuclear nonproliferation and security realm want to find ways to innovate and improve their field. However, if we are too hasty in this process, we risk jeopardizing all our progress and successes. By integrating AI into low-risk sectors of nuclear security, policymakers can reassure the American public that their safety is being prioritized throughout all technological advancements. If that integration is carried out responsibly and maintained with safeguards and risk assessments, we can protect national security interests in a risk-averse manner.
- GeeksforGeeks. (2024, June 18). History of AI. GeeksforGeeks. https://www.geeksforgeeks.org/evolution-of-ai/ ↩︎
- Ibid. ↩︎
- Markey, Ed, Lieu, Ted W., et al. United States, Congress, Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023, In the Senate of the United States Congress, April 26, 2023, https://www.markey.senate.gov/imo/media/doc/block_nuclear_launch_by_autonomous_ai_act_-_042623pdf.pdf. 118th Congress, 1st Session. Accessed 9 Mar. 2025. ↩︎
- Williams, H., Mineiro, S., & Scharre, P. (2025, January 24). PONI Live Debate: AI Integration in NC3. CSIS Center for Strategic & International Studies. https://www.csis.org/analysis/poni-live-debate-ai-integration-nc3 ↩︎
- Hoang, Thuc T., James Ahrens, et al. National Nuclear Security Administration, 2023, Artificial Intelligence for Nuclear Deterrence Strategy 2023. ↩︎
- Hruby, Jill, and M. Nina Miller. Nuclear Threat Initiative, 2021, Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. ↩︎
- Williams, H., Mineiro, S., & Scharre, P. (2025, January 24). PONI Live Debate: AI Integration in NC3. CSIS Center for Strategic & International Studies. https://www.csis.org/analysis/poni-live-debate-ai-integration-nc3 ↩︎
- Hruby, Jill, and M. Nina Miller. Nuclear Threat Initiative, 2021, Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. ↩︎
- Franke, Ulrike. European Parliament, 2021, Artificial Intelligence Diplomacy: Artificial Intelligence Governance as a New European Union External Policy Tool. ↩︎
- Hruby, Jill, and M. Nina Miller. Nuclear Threat Initiative, 2021, Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. ↩︎
- Hoang, Thuc T., James Ahrens, et al. National Nuclear Security Administration, 2023, Artificial Intelligence for Nuclear Deterrence Strategy 2023. ↩︎
- Ibid. ↩︎