The Petrov Window

Three Systems Are Converging Toward a Nuclear War That Starts by Accident and Ends Before Anyone Decides to Fight It

On February 5, 2026, the New Strategic Arms Reduction Treaty expired. For the first time since 1972, no legally binding agreement constrains the nuclear arsenals of the United States and Russia. No on-site inspections. No data exchanges. No notifications about missile tests, weapons movements, or changes to deployed forces. No legal commitment not to interfere with each other’s satellites and ground-based early warning systems. The treaty that required eighteen verification visits per year died quietly, and nobody replaced it with anything.

Six weeks earlier, in December 2025, the Trump administration signed Executive Order 14367 designating fentanyl and its precursor chemicals as Weapons of Mass Destruction. That designation activated authorities designed to stop the proliferation of nuclear, chemical, and biological weapons. The world noticed the cartel implications. Almost nobody noticed the precedent: the WMD designation framework, built over decades to prevent catastrophic weapons from crossing borders, was now being applied to a drug. Meanwhile, the actual weapons of mass destruction, the 10,636 nuclear warheads held by the United States and Russia, lost their last legal guardrails within the same two months.

This is a paper about what happens when three systems fail at the same time, and the institutions monitoring each system cannot see the other two.

The First System: Verification Dies

New START was not primarily about warhead limits. It was about transparency. The 1,550-warhead cap mattered less than the mechanism that allowed each side to know what the other side had, where it was, and what it was doing. The verification regime provided both sides with insights into the other’s nuclear forces and posture. On-site inspectors could walk into missile bases with seventy-two hours’ notice. Satellites operated under a mutual commitment not to blind or jam each other. Data exchanges twice a year confirmed the number and location of delivery systems. This architecture did not prevent nuclear war through idealism. It prevented nuclear war through information. When you know what the other side has, you do not need to assume the worst. When you cannot see, you must.

The verification mechanism was already dying before the treaty expired. On-site inspections halted in March 2020 during COVID-19 and never restarted. In February 2023, Putin suspended Russia’s participation entirely, rejecting inspections and data exchanges. The United States responded by withholding its own data. By the time the treaty formally died on February 5, 2026, it had been a zombie for three years: legally alive, operationally hollow. The Lowy Institute assessed that the loss of transparency is the most immediate consequence, because verification regimes allowed each side to distinguish between routine activities and destabilizing preparations. Without that distinction, every movement is ambiguous. Every ambiguity is a potential trigger.

Russia holds an estimated 5,459 nuclear warheads. The United States holds 5,177. Both retain the technical capacity to rapidly expand deployed arsenals by uploading additional warheads onto existing delivery systems. The Federation of American Scientists estimates that the United States could add 400 to 500 warheads to its submarine force alone by uploading to maximum capacity. Neither side has announced expansion. Neither side has committed not to expand. Neither side can verify what the other is doing. This is the environment into which the second system is being deployed.

The Second System: The Machine Accelerates

General Anthony Cotton, commander of U.S. Strategic Command, told the Senate Armed Services Committee in March 2025 that STRATCOM will use AI to enable and accelerate human decision-making in nuclear command, control, and communications. He said AI will remain subordinate to human authority. He said there will always be a human in the loop. He referenced the 1983 film WarGames and assured the audience that STRATCOM does not have, and will never have, a WOPR. The audience laughed.

What Cotton described is not a machine that launches missiles. It is a machine that processes sensor data, identifies threats, generates options, and presents recommendations to a president who has, at best, tens of minutes to decide whether an incoming nuclear strike is real. The NC3 architecture is a complex system of systems with over 200 components, including ground-based phased array radars, overhead persistent infrared satellites, the Advanced Extremely High Frequency communication system, and airborne command posts. AI is being integrated into the early-warning sensors, the intelligence processing pipelines, and the decision-support tools that feed the president’s options screen. The machine does not press the button. It builds the world in which the button gets pressed.

The Arms Control Association published the most comprehensive assessment of this integration in September 2025. Its conclusion deserves to be read by everyone with a security clearance and most people without one: the risks to strategic stability from significantly accelerating nuclear decision timelines or reducing human involvement in launch decisions are likely to outweigh the potential benefits. The reason is not that AI will malfunction. The reason is that AI will function exactly as designed, processing data faster than a human can evaluate it, generating recommendations with the confidence of a system that does not experience doubt, and compressing the decision window from minutes to seconds in an environment where the data itself may be degraded, spoofed, or incomplete.

The entire history of nuclear near-misses was survived because humans took time to doubt. In 1983, Soviet Lieutenant Colonel Stanislav Petrov watched his early warning system report five incoming American ICBMs. The system was functioning as designed. The data was wrong. Petrov doubted it. He reported a malfunction rather than an attack. He was right. The sun had reflected off high-altitude clouds above a North Dakota missile field and triggered the satellite sensors. In the same year, NATO’s Able Archer 83 exercise was misinterpreted by Soviet intelligence as preparation for a genuine first strike. The Soviets moved nuclear forces to higher alert. The crisis dissipated because humans on both sides took hours to assess the ambiguity. In 1995, Russian early warning operators detected a Norwegian scientific rocket and initially classified it as a potential submarine-launched ballistic missile. President Yeltsin activated the nuclear briefcase. He did not launch because he took four minutes to wait for additional data. Four minutes. That was the margin between a scientific experiment and a nuclear exchange.

AI is designed to eliminate those four minutes. It is designed to process the sensor data that Petrov doubted, generate the threat assessment that Able Archer confused, and compress the decision timeline that Yeltsin stretched. Every one of these near-misses was caused by sensor data that looked real and was not. AI does not solve the problem of bad data. It accelerates the consequences of it.

The Third System: The Eyes Go Dark

In September 2025, the United States accused Russia of launching a satellite that was likely a space weapon. The head of UK Space Command warned of Russian jamming attacks on British space assets. China has demonstrated anti-satellite capabilities in multiple tests. The United States itself tested an ASAT weapon in 2008 and has invested billions in space domain awareness and counterspace programs. Trump’s Golden Dome initiative envisions a multi-layered, space-based missile defense system that would, by definition, require the ability to operate in contested space.

The early warning satellites that detect missile launches are the eyes of the nuclear command system. They are the first sensor in the chain that ends at the president’s decision desk. When New START was in force, both sides committed not to interfere with each other’s national technical means, the satellites, radars, and ground systems that provide warning. That commitment expired with the treaty. The Council on Foreign Relations noted that the treaty’s absence will be felt within intelligence communities because the limits and the commitments not to interfere with national technical means gave both sides confidence that the other was not attacking the ground and space-based systems that provide early warning of attack.

Without that commitment, the early warning architecture becomes a target. Not necessarily a target for destruction, not yet, but a target for degradation: jamming, spoofing, dazzling laser attacks against optical sensors, cyber intrusion into ground stations, electronic warfare against the data links that connect satellites to command centers. The satellite does not need to be destroyed. It needs to be confused. A sensor that reports ambiguous data in a compressed decision timeline, processed by an AI system optimized to reduce ambiguity to binary outputs, is more dangerous than a sensor that has been destroyed. A destroyed sensor produces silence. A confused sensor produces noise that looks like signal.

The Convergence

Each of these three systems, taken independently, represents a manageable risk. Arms control experts can model the consequences of verification loss. AI safety researchers can identify the failure modes of automated decision-support. Space security analysts can map the anti-satellite threat landscape. The problem is that none of them are operating independently. They are converging into a single compound system in which the failure of any one component cascades through the other two.

The convergence model works like this. Verification dies, and neither side can distinguish routine military activity from preparation for a strike. Both sides default to worst-case planning. AI is integrated into early warning and decision-support to manage the overwhelming volume of ambiguous data, compressing the timeline between detection and recommendation. Space weapons develop the capability to degrade the sensors that feed the AI system, introducing corrupted or incomplete data into a pipeline designed to accelerate decisions based on that data. The result is a system optimized for speed operating on degraded inputs in an environment of maximum uncertainty, with a human decision-maker who has less time, less information, and less ability to doubt than any president since the invention of the atomic bomb.
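The compounding arithmetic of this cascade can be sketched in a deliberately crude toy model. Nothing below estimates any real system; the function, the four-minute baseline, and every input value are invented for illustration only:

```python
# Toy illustration of the convergence cascade described above.
# Every number here is an invented placeholder, not an estimate of any real system.

def petrov_margin(baseline_window_min: float,
                  ai_compression: float,     # fraction of the window left after AI accelerates the pipeline (0..1)
                  sensor_integrity: float,   # quality of early-warning data (0..1)
                  verification: float) -> float:  # transparency context (0..1)
    """Minutes of margin left for human doubt.

    The effective window shrinks as AI compresses the timeline, while the
    time a human needs to resolve ambiguity grows as sensor data and
    verification context degrade.
    """
    effective_window = baseline_window_min * ai_compression
    # With clean data and full transparency, assume resolving doubt takes
    # about four minutes (the Yeltsin margin); degraded inputs inflate it.
    doubt_time = 4.0 / (sensor_integrity * verification)
    return effective_window - doubt_time

# Treaty-era conditions: doubt fits comfortably inside the window.
print(petrov_margin(10.0, 1.0, 0.9, 0.9) > 0)   # → True

# All three systems failing at once: doubt no longer fits.
print(petrov_margin(10.0, 0.3, 0.5, 0.3) > 0)   # → False
```

The point of the sketch is structural, not numerical: no single parameter has to reach zero for the margin to go negative, because the three degradations multiply through one another.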

This is not a scenario. It is the current state of the world as of March 2026. The verification regime is dead. AI integration into NC3 is underway. Counterspace capabilities are operational. The three conditions are not sequential. They are concurrent. And the institutions responsible for monitoring each condition are architecturally separated from the institutions monitoring the other two.

The arms control community, centered at the Arms Control Association, the Nuclear Threat Initiative, and the Bulletin of the Atomic Scientists, tracks verification and treaty compliance. Its expertise is in warhead counts, delivery systems, and diplomatic frameworks. It does not have deep technical literacy in AI system architecture or space domain operations. The AI safety community, centered at organizations like the Federation of American Scientists and academic institutions, analyzes machine learning failure modes, automation bias, and human-machine interaction. It does not have operational access to NC3 system design or counterspace intelligence. The space security community, spread across Space Force, CSIS, and the Secure World Foundation, monitors orbital threats and ASAT development. It does not participate in NPT Review Conferences or nuclear posture reviews. Three communities of expertise, three institutional architectures, three separate warning systems, and a single convergent threat that lives in the gap between all three.

The Petrov Window

There is a term for the margin that saved the world in 1983, in 1995, and at every other near-miss in the nuclear age. Call it the Petrov Window: the interval between the moment a system reports an incoming threat and the moment a human being decides whether to believe it. Every nuclear near-miss in history was survived because the Petrov Window was wide enough for doubt. Wide enough for a lieutenant colonel to override his instruments. Wide enough for a president to wait four minutes. Wide enough for intelligence officers to question whether an exercise was really an attack.

The three converging systems are closing the Petrov Window from both sides simultaneously. AI compresses the decision timeline from the top, accelerating the path from detection to recommendation. Sensor degradation corrupts the data from the bottom, reducing the quality of information available within the compressed window. And verification collapse removes the baseline context that would allow a human to distinguish signal from noise, because without transparency, there is no normal against which to measure the abnormal.

When the Petrov Window closes to zero, the system reaches a state in which a nuclear exchange can initiate and escalate before any human being decides to fight. This is not a failure of technology. It is not a failure of policy. It is the emergent property of three rational decisions, each made by competent professionals for defensible reasons, converging in a space that none of them can see because their institutions were not designed to look there.

Forcing the Window Open

Any doctrine for forcing the window open begins with a single recognition: the Petrov Window is a strategic asset more valuable than any weapons system in any nation’s arsenal. The four minutes that Yeltsin took in 1995 were worth more than every nuclear warhead on every submarine in every ocean. The doubt that Petrov exercised in 1983 outperformed every missile defense system ever designed. The margin for human judgment in a nuclear decision is not a weakness to be engineered away. It is the only thing that has kept the species alive since 1945.

Pillar One: Verification Restoration. The United States and Russia should immediately establish a mutual commitment to continue observing New START’s transparency provisions, including data exchanges and notifications, without requiring a new treaty. Putin proposed exactly this in September 2025, offering to observe limits for one year. The United States never formally responded. Respond. The verification mechanism is more important than the warhead limit. A world with 2,000 deployed warheads and functioning inspections is safer than a world with 1,550 deployed warheads and no visibility into what the other side is doing.

Pillar Two: AI Decision-Time Floor. Establish an international minimum decision-time standard for nuclear command systems. No AI-assisted or AI-augmented NC3 system should compress the interval between threat detection and presidential decision below a defined floor. Call it the Petrov Standard: no system may reduce the human decision window below the time required for a competent decision-maker to receive, question, verify through independent channels, and act on early-warning data. This is not an arms control treaty. It is a technical safety standard, analogous to the engineering margins built into nuclear reactor design. It should be pursued bilaterally with Russia and multilaterally through the NPT Review Conference beginning in April 2026.

Pillar Three: Sensor Sanctuary. Declare early warning satellites and their ground stations protected assets under an explicit, legally binding no-attack commitment separate from any broader arms control framework. The early warning architecture is not a military advantage for either side. It is a shared infrastructure of stability. An attack on early warning systems does not give the attacker an advantage. It gives everyone less time to avoid extinction. The commitment not to interfere with national technical means should not have expired with New START. It should be extracted, codified independently, and extended to all nuclear-armed states.

Pillar Four: Convergence Integration. Create a single institutional mechanism, whether a joint commission, a cross-domain intelligence cell, or a designated interagency office, that monitors the three converging systems simultaneously. The arms control community, the AI safety community, and the space security community must be architecturally connected so that the compound risk is visible to a single analytical authority. The Bulletin of the Atomic Scientists moved the Doomsday Clock to 89 seconds to midnight in January 2026. The clock measures perception. What is needed is an instrument that measures the actual convergence state: the width of the Petrov Window at any given moment, computed from the current status of verification, AI integration, and sensor integrity across all nuclear-armed states.
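One way to picture the instrument Pillar Four calls for is a composite index whose components multiply rather than add, so that any one collapsing condition closes the window regardless of the other two. Everything below, from the component names to the readings, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ConvergenceState:
    verification: float      # 0 = no transparency, 1 = full New START-style regime
    human_decision: float    # 0 = fully automated pipeline, 1 = no AI compression
    sensor_integrity: float  # 0 = fully contested, 1 = uncontested early warning

def window_width(state: ConvergenceState) -> float:
    """Composite Petrov Window index on a 0-100 scale.

    Multiplicative by design: the paper's argument is that the three
    systems compound, so any single component near zero closes the
    window no matter how healthy the other two are.
    """
    return 100.0 * state.verification * state.human_decision * state.sensor_integrity

# Hypothetical readings for illustration only.
treaty_era = ConvergenceState(verification=0.9, human_decision=0.9, sensor_integrity=0.9)
march_2026 = ConvergenceState(verification=0.1, human_decision=0.5, sensor_integrity=0.4)

print(round(window_width(treaty_era), 1))  # → 72.9
print(round(window_width(march_2026), 1))  # → 2.0
```

The multiplicative form encodes the pillar's core claim: a window held open by only two of the three conditions is not open at all, which is why the index must be computed by a single analytical authority rather than three separate ones.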

Pillar Five: The Red Line That Matters. Every nuclear-armed state should declare, publicly and unambiguously, that no artificial intelligence system will be granted launch authority under any circumstance, including system failure, communication breakdown, or decapitation of national command authority. General Cotton says this is already the policy. Make it a binding commitment. Make it verifiable. Make it the one thing that all nuclear-armed states agree on, because it is the one thing on which the survival of the species depends. The Petrov Window must remain open. The machine must never be permitted to close it.

The Doomsday Clock reads 89 seconds. The number is symbolic. The convergence is not. Three systems are failing simultaneously, each tracked by a separate community of experts that cannot see the other two. The verification architecture that provided transparency is dead. The AI architecture that compresses decisions is being born. The space architecture that blinds sensors is being tested. Where these three systems meet, there is a window through which human judgment passes on its way to a nuclear decision. That window is closing. It has no name in any official doctrine. It has no institutional owner. Nobody is measuring its width. When it reaches zero, the question of whether to fight a nuclear war will be answered before anyone asks it. This is the convergence gap. It is the only one that ends everything.

Devil’s Advocate: The Hidden Hand

A reasonable person reads this paper and asks the obvious question: if the convergence is this visible, if the academic literature is this clear, if the institutional separation is this documented, why does no one act? The answer is not negligence. It is arithmetic.

The United States is in the early years of a nuclear modernization program estimated at $1.7 trillion over thirty years. The Sentinel ICBM. The Columbia-class submarine. The B-21 Raider bomber. The Long-Range Standoff Weapon. And threading through all of it, the NC3 modernization that General Cotton describes as essential. Lockheed Martin, Northrop Grumman, General Dynamics, Raytheon, and Boeing hold the prime contracts. Their combined lobbying expenditure in the defense sector exceeds $100 million annually. These companies do not benefit from arms control. They benefit from its absence. Every expired treaty is an uncapped market. Every closed Petrov Window is a faster procurement cycle for the AI systems designed to operate within it.

The intelligence community benefits from opacity. When New START was in force, on-site inspections and data exchanges provided verified information about Russian nuclear forces that supplemented national intelligence collection. Without the treaty, national technical means become the sole source of information. That is not a problem for the intelligence community. It is a promotion. The agencies that collect signals intelligence, imagery intelligence, and measurement and signature intelligence become more important, not less, when verification regimes collapse. Their budgets expand. Their authorities expand. Their centrality to presidential decision-making expands. The death of arms control is the intelligence community’s full-employment act.

The counterspace industry is the newest beneficiary. Trump’s Golden Dome initiative, the militarization of low Earth orbit, the development of ASAT capabilities, the hardening of satellite constellations against attack: all of it generates contracts, programs, and career paths that did not exist a decade ago. Space Force itself is a bureaucratic institution whose survival depends on the continued perception that space is contested. If early warning satellites were declared sanctuary assets under international law, as this paper proposes, the counterspace mission set would shrink. Programs would be cancelled. Careers would end. Budgets would contract.

And then there is the quietest incentive of all. OpenAI has partnered with the three NNSA national laboratories, Los Alamos, Lawrence Livermore, and Sandia, for classified work on nuclear scenarios. Anthropic launched a classified collaboration with NNSA and DOE to evaluate AI models in the nuclear domain. The technology companies building the AI systems that will compress the Petrov Window are simultaneously building the business relationships that make their participation in NC3 modernization permanent. This is not conspiracy. It is the ordinary operation of institutional incentives in which every actor pursues a rational objective and the compound result is a system optimized for catastrophe.

The Petrov Window closes because no one with the power to keep it open has a financial interest in doing so. The arms control negotiators who built the verification architecture were State Department diplomats with no procurement authority and shrinking budgets. The Federation of American Scientists published the upload analysis. The Arms Control Association published the AI risk assessment. The Nuclear Threat Initiative published the transparency warning. None of them hold a single contract. None of them sit on a single procurement board. The people who see the convergence have no power. The people who have power cannot see it, or will not, because seeing it clearly would require them to act against the institutions that pay them.

Eisenhower warned about this in 1961 when he named the military-industrial complex. He did not live to see the nuclear-AI-space complex, but the structure is identical. A network of institutions, contractors, and career incentives that derive revenue and relevance from the perpetuation of threat, and that will resist, passively or actively, any doctrine that reduces the threat they exist to manage. The Petrov Window is not closing because of Russian aggression or Chinese expansion or technological inevitability. It is closing because keeping it open is not profitable.

Resonance

Arms Control Association. (2025). “Artificial Intelligence and Nuclear Command and Control: It’s Even More Complicated Than You Think.” Arms Control Today. https://www.armscontrol.org/act/2025-09/features/artificial-intelligence-and-nuclear-command-and-control-its-even-more. Summary: Comprehensive assessment of AI integration into NC2/NC3 systems, concluding that risks to strategic stability from accelerating decision timelines outweigh potential benefits, with particular concern about cascading effects and emergent behaviors.

Belfer Center for Science and International Affairs. (2026). “New START Expires: What Happens Next?” Harvard Kennedy School. https://www.belfercenter.org/quick-take/new-start-expires-what-happens-next. Summary: Expert analysis warning that without New START’s bridge, near-term nuclear transparency hopes will fade and incentives to expand arsenals will rise, with consequences reverberating beyond Washington and Moscow.

Carnegie Corporation of New York. (2025). “How Are Modern Technologies Affecting Nuclear Risks?” Carnegie Corporation. https://www.carnegie.org/our-work/article/how-are-modern-technologies-affecting-nuclear-risks/. Summary: Documents General Cotton’s testimony on AI integration into nuclear C2 and identifies the widespread lack of interdisciplinary literacy among nuclear and AI experts as a critical vulnerability.

Chatham House. (2025). “Global Security Continued to Unravel in 2025. Crucial Tests Are Coming in 2026.” Chatham House. https://www.chathamhouse.org/2025/12/global-security-continued-unravel-2025-crucial-tests-are-coming-2026. Summary: Reports the U.S. accusation that Russia launched a probable space weapon in September 2025 and warns that space will become more militarized with no meaningful governance treaties in place.

Council on Foreign Relations. (2026). “Nukes Without Limits? A New Era After the End of New START.” CFR. https://www.cfr.org/articles/nukes-without-limits-a-new-era-after-the-end-of-new-start. Summary: Expert panel analysis documenting that the treaty’s absence eliminates commitments not to interfere with national technical means, the satellites and ground systems providing early warning of nuclear attack.

CSIS. (2025). “Returning to an Era of Competition and Nuclear Risk.” Center for Strategic and International Studies. https://www.csis.org/analysis/chapter-3-returning-era-competition-and-nuclear-risk. Summary: Documents the convergence of adversarial nuclear expansionism, theater-range proliferation, adversary collusion, and weakening of U.S. alliance credibility as reshaping the strategic environment.

Federation of American Scientists. (2026). “The Aftermath: The Expiration of New START and What It Means for Us All.” FAS. https://fas.org/publication/the-expiration-of-new-start/. Summary: Estimates the U.S. could add 400 to 500 warheads to its submarine force through uploading and documents funding cuts at State, NNSA, and ODNI that reduce capacity for follow-on agreements.

Federation of American Scientists. (2025). “A Risk Assessment Framework for AI Integration into Nuclear C3.” FAS. https://fas.org/publication/risk-assessment-framework-ai-nuclear-weapons/. Summary: Proposes a standardized risk assessment framework for AI integration into NC3’s 200+ component system, identifying automation bias, model hallucinations, and exploitable software vulnerabilities as primary hazards.

ICAN. (2026). “The Expiration of New START: What It Means and What’s Next.” International Campaign to Abolish Nuclear Weapons. https://www.icanw.org/new_start_expiration. Summary: Documents the February 5, 2026 expiration of the last remaining nuclear arms control agreement, noting that verification provisions had not been implemented since Russia’s 2023 suspension.

Just Security. (2026). “In 2026, a Growing Risk of Nuclear Proliferation.” Just Security, NYU School of Law. https://www.justsecurity.org/129480/risk-nuclear-proliferation-2026/. Summary: Reports that South Korea and Saudi Arabia are poised to acquire fissile material production capabilities with U.S. support, increasing proliferation risk as the rules-based nuclear order collapses.

Lowy Institute. (2026). “New START Expired. Now What for Global Nuclear Stability?” The Interpreter. https://www.lowyinstitute.org/the-interpreter/new-start-expired-now-what-global-nuclear-stability. Summary: Identifies the loss of transparency as the most immediate consequence of New START’s expiration, noting that verification regimes allowed each side to distinguish routine activities from destabilizing preparations.

Nuclear Threat Initiative. (2026). “The End of New START: From Limits to Looming Risks.” NTI. https://www.nti.org/analysis/articles/the-end-of-new-start-from-limits-to-looming-risks/. Summary: Documents the loss of on-site inspections, data exchanges, and the Bilateral Consultative Commission as the treaty’s expiration removes caps on strategic forces for the first time in decades.

Stimson Center. (2026). “Top Ten Global Risks for 2026.” Stimson Center. https://www.stimson.org/2026/top-ten-global-risks-for-2026/. Summary: Reports the Doomsday Clock at 89 seconds to midnight and identifies AI, offensive cyber, and anti-satellite weapons as creating new vulnerabilities for nuclear powers in a third nuclear era.