How Military Institutions Choose Catastrophe Over Accountability, and Why the Dead Stay Dead
In 1984, an aircraft carrier ran over a nuclear submarine and both navies called it one man’s fault. In 1988, a cruiser shot down a passenger jet and the Navy called it stress. In 2003, a missile system killed three allied aircrew and the Army called it a training issue. In 2020 and 2021, a hundred and nine soldiers died at a single base and the Army called it a societal problem. The individual absorbs the blame. The doctrine walks free. The dead stay dead. This paper names the mechanism.
The Fallacy: The Isolated Incident
The fallacy is that military catastrophes are discrete events with discrete causes. A captain’s bad judgment. A radar operator’s misreading. A software glitch. A failure of training. Each incident investigated in isolation. Each investigation finding a proximate cause. Each proximate cause assigned to an individual or a component. Each individual disciplined or decorated, depending on which narrative the institution needs. And then the institution resumes exactly the operation that produced the catastrophe, because the operation was never investigated. Only the incident was.
This is not incompetence. It is architecture. The military investigation system is structurally designed to find proximate causes and stop. A board convenes. It establishes the sequence of events. It identifies what went wrong at the point of failure. It assigns responsibility. It recommends corrective action, almost always training, procedures, or personnel changes. What it does not do, what it is not designed to do, what it is institutionally prevented from doing, is follow the causal chain past the individual and into the doctrine, the incentive structure, the acquisition pipeline, and the operational culture that put the individual in the position to fail. The investigation stops at the person because the institution begins at the doctrine, and the doctrine is not on trial.
Identify the Center of Gravity: The Collision Compact
In a companion paper published today, Blind Man’s Bluff at 30 Knots, we defined the Collision Compact as the unspoken bilateral agreement between adversary navies to accept catastrophic proximity as a cost of doing business, to treat the resulting incidents as individual failures rather than systemic products, and to preserve the doctrine that generates those incidents because no institution can afford to admit the game itself is the problem.
The Compact has three structural components: mutual escalation, in which the system generates risk because the system is designed to generate risk; mutual silence, in which the institution minimizes the incident, classifies the details, and controls the narrative; and mutual scapegoating, in which the individual absorbs the blame that belongs to the doctrine.
The Collision Compact is not a naval phenomenon. It is not a Cold War artifact. It is the operating logic of institutional risk across every military domain where competitive pressure, technological complexity, and accountability structures intersect. The evidence runs through four decades, four services, and four distinct failure modes. The players change. The Compact does not.
Converge the Silos
The Machine That Lied: USS Vincennes and Iran Air Flight 655
On 3 July 1988, USS Vincennes, a Ticonderoga-class Aegis cruiser under Captain Will Rogers III, fired two SM-2MR missiles at what the crew believed was an Iranian F-14 Tomcat descending on an attack profile. The target was Iran Air Flight 655, an Airbus A300 on a scheduled commercial route from Bandar Abbas to Dubai, climbing through its assigned airway, squawking the correct civilian transponder code. All 290 people aboard were killed, including 66 children.
The Aegis Combat System’s own data tapes recorded the aircraft climbing on a normal commercial profile. The crew reported it descending. The system’s software had recycled the flight’s tracking number and reassigned it to a U.S. Navy jet over the Gulf of Oman that actually was descending. The Aegis user interface was known to be deficient. Every test of the system had shown errors. The dark CIC, the flickering lights from gunfire, the confusion over time zones in the flight schedules, the one-minute decision window: all of these were products of the system, not the captain. RCA, the system developer, was not mentioned once in the 153-page investigation report.
USS Sides, operating nearby under Commander David Carlson, correctly identified Flight 655 as commercial the entire time. Carlson later wrote that the Vincennes crew “felt a need to prove the viability of Aegis in the Persian Gulf, and that they hankered for an opportunity to show their stuff.” The ship had earned the nickname “Robo Cruiser.” Rogers received the Legion of Merit. The Aegis interface flaws that produced the misidentification were not redesigned before the next deployment. The Navy’s conclusion: human error under stress. The machine walked free.
The Machine That Killed Its Own: Patriot Fratricide, Iraq 2003
Fifteen years later, the same architecture produced the same outcome in a different weapons system. During the opening days of Operation Iraqi Freedom, the Patriot missile system committed three fratricide incidents in ten days. On 23 March 2003, a Patriot shot down an RAF Tornado GR4A, killing Flight Lieutenants Kevin Main and David Williams. The system’s IFF failed because the battery did not have the correct Mode 1 codes loaded. The crew had one minute to decide. On 24 March, a Patriot radar locked onto a U.S. Air Force F-16. The pilot, believing he was being targeted by an Iraqi SAM, fired a HARM anti-radiation missile and destroyed the Patriot battery. On 2 April, a Patriot shot down a U.S. Navy F/A-18C, killing Lieutenant Nathan White. The system generated false ballistic missile trajectories when multiple Patriot radars tracked the same aircraft.
Twenty-five percent of the Patriot’s total engagements in Iraq were against friendly aircraft. The Defense Science Board found that the IFF problems had surfaced during training exercises before the war. A 1993 test found that when IFF failed, Patriot batteries fired on friendly aircraft 50 percent of the time. A 1996 National Research Council report called the simulated fratricide results “disturbing.” The problems were known. The problems were documented. The problems were not fixed before deployment, because fixing them would require acknowledging that the system’s autonomous engagement mode, the feature that made Patriot fast enough to intercept ballistic missiles, was also fast enough to kill friendly pilots. Raytheon declined to discuss the misidentifications. The Army’s response: more operator training. An F-16 pilot summed it up: the Patriots scared him more than any Iraqi SAM.
One detail tells the whole story. The investigation found that if Patriot crews waited 60 seconds after target acquisition before firing, the likelihood of fratricide would decrease by 86 percent without allowing any hostile aircraft to slip through. Sixty seconds. The lives of Main, Williams, and White were worth less than a minute of patience that the doctrine did not permit and the machine did not require.
The Exercise That Nearly Ended Everything: 1983
Three events in eleven weeks nearly ended civilization. On 1 September 1983, Soviet fighters shot down Korean Air Lines Flight 007, killing 269 people, because doctrine said shoot first and verify later. On 26 September, the Soviet early-warning system Oko reported five American ICBMs inbound. Lieutenant Colonel Stanislav Petrov, on duty at Serpukhov-15, judged the alarm a malfunction. He was right. Sunlight on high-altitude clouds had fooled the satellites. Petrov was punished for not following protocol. The man who prevented nuclear war was disciplined for disobeying the system that nearly started one.
Then on 7 November, NATO began Able Archer 83, a command-post exercise simulating the transition from conventional to nuclear war. Unlike previous years, this exercise moved forces through all alert phases to DEFCON 1, used new communication formats, introduced radio silence, and included references to B-52 nuclear strikes. The Soviets, raw from KAL 007 and the Petrov incident, interpreted the exercise as cover for a first strike. Marshal Kutakhov ordered the Soviet 4th Air Army to prepare for immediate use of nuclear weapons. Combat aircraft were loaded with actual nuclear bombs. Submarines deployed under Arctic ice. The entire Soviet arsenal, 11,000 warheads, went to maximum combat alert.
Lieutenant General Leonard Perroots, the senior U.S. intelligence officer overseeing the exercise, chose not to escalate. The President’s Foreign Intelligence Advisory Board later called this a “fortuitous, if ill-informed, decision.” He acted on instinct, not guidance, because no guidance existed for the scenario. U.S. commanders on the scene were not aware of any pronounced superpower tension. The Soviet activities were not seen in their totality until long after the exercise was over. The PFIAB concluded: “In 1983 we may have inadvertently placed our relations with the Soviet Union on a hair trigger.” The world survived because one Soviet lieutenant colonel disobeyed orders and one American general trusted his gut. Neither man received guidance from the system that nearly killed everyone. The system was not changed.
The Bases That Eat Their Own: Fort Hood and Fort Bragg
The Collision Compact does not require an adversary. It operates with equal efficiency when the institution is the threat and its own people are the targets. In 2020, twenty-eight soldiers died at Fort Hood, Texas. Specialist Vanessa Guillén, twenty years old, was sexually harassed and then murdered by a fellow soldier. The Army listed her as AWOL. Her dismembered remains were found in shallow graves twenty miles from base. The independent review found a command climate “permissive of sexual harassment and sexual assault,” with incidents “significantly underreported.” The chain of command was fired. Congress investigated. The UCMJ was amended.
Then came Fort Bragg, where the numbers were worse and nothing happened at all. One hundred and nine soldiers died in 2020 and 2021. Forty-one by suicide. Twenty-one by drug overdose. Only seven in combat or training. Ninety-six percent of the deaths occurred stateside. The base’s own numbers did not match the data investigative journalists obtained from the Army’s Human Resources Sustainment Center. Fort Hood, with fewer deaths, got a congressional investigation and a chain of command firing. Fort Bragg, home to Delta Force and the 82nd Airborne, got nothing. Congress did not investigate. The chain of command remained. The base was left to police itself.
The mechanism is identical. The institution generates risk through operational tempo, deployment cycles, inadequate mental health infrastructure, and a culture that treats seeking help as weakness. When soldiers die, the institution classifies the deaths as individual failures: AWOL, overdose, suicide, accident. The systemic causes, the culture, the tempo, the institutional incentives that reward silence, are not investigated because investigating them would require the institution to admit the game is the problem. Mutual escalation: the tempo increases because the mission demands it. Mutual silence: the deaths are minimized, the data is withheld, the families are stonewalled. Mutual scapegoating: the dead soldier bears the diagnosis. The doctrine walks free.
The Pattern That Does Not Change
Four case studies. Four decades. Four services. Four failure modes: a cruiser that trusted a machine over the evidence on its own screens, a missile battery that killed the pilots it was built to protect, an exercise that nearly triggered the war it was designed to simulate, and military bases that killed more of their own soldiers than the enemy did. In every case, the systemic failure was visible before the catastrophe. The Aegis interface flaws were documented in testing. The Patriot IFF failures surfaced in exercises. The Able Archer escalation risk was predictable from the exercise design. The Fort Hood culture was reported by soldiers for years before Guillén’s murder.
In every case, the warning was present, reported, and ignored. Not because the people in the system were stupid. Because the system is not designed to process warnings that indict the system. A near-miss report that blames a radar operator gets filed and actioned. A near-miss report that blames the Aegis user interface design gets routed to the contractor, who has a $3.5 billion revenue stream dependent on not redesigning the interface. A near-miss report that blames the operational tempo gets routed to the combatant commander, whose career depends on maintaining the tempo. The report dies in the routing. The next catastrophe arrives on schedule.
Propose the Doctrine: Five Pillars
Pillar 1: Independent Systemic Investigation Authority. The military investigation system finds proximate causes because it is designed to find proximate causes. An independent investigation authority, modeled on the National Transportation Safety Board, with access to classified operational data, acquisition records, and contractor communications, and the mandate to publish systemic findings without requiring the service’s concurrence, is the only mechanism that breaks the Compact. The NTSB model works in aviation because the investigating body is not the operating body. Asking the Army to investigate Fort Bragg is asking the institution to indict its own culture. Forty years of identical recommendations prove that will not happen voluntarily. The authority must be external, permanent, and funded independently of the services it investigates.
Pillar 2: Mandatory Causal Chain Extension. Every Class A mishap investigation must follow the causal chain past the individual to the doctrine, the incentive structure, the acquisition decision, and the operational culture that created the conditions for failure. If a Patriot battery kills a friendly aircraft because the IFF codes were not loaded, the investigation does not stop at the battery commander. It follows the chain to the software architecture that required manual code loading, to the acquisition decision that accepted that architecture, to the contractor who built it, and to the testing regime that documented the failure and did not require a fix before deployment. The chain does not stop until it reaches the structural cause. Stopping at the proximate cause is the mechanism by which the Compact preserves the doctrine.
Pillar 3: Near-Miss Intelligence Mandate. The military generates thousands of near-misses for every catastrophe. Near-misses are free lessons. A Class A mishap costs lives, equipment, careers, and institutional credibility. A near-miss costs nothing except the willingness to report it. The current system treats near-miss reporting as voluntary and stigmatized. An Army aviation safety inspector returning from the civilian sector in 2024 found the same lack of near-miss reporting he had observed when he left the Army two decades earlier. The civilian aviation safety culture, built on the Aviation Safety Reporting System since 1976, captures near-misses with confidential, non-punitive reporting that feeds directly into system design and operational procedure. The military equivalent does not exist at scale. Building it is cheaper than burying the next crew.
Pillar 4: Contractor Accountability in Mishap Findings. RCA was not mentioned in the Vincennes investigation. Raytheon declined to discuss the Patriot fratricides. The investigation system treats the weapons system as a given and the operator as the variable. This inverts the actual causal relationship. When the Aegis system recycles tracking numbers in a way that causes operators to misidentify targets, the system is the cause and the operator is the symptom. When the Patriot’s autonomous engagement mode fires on friendly aircraft because IFF failed, the engagement mode is the cause and the crew is the symptom. Contractor performance must be a mandatory finding in every Class A mishap involving a weapons system. The contractor’s revenue stream from the system must not insulate the contractor from accountability for the system’s contribution to the failure. A weapons system that kills friendly forces at a 25 percent engagement rate is not a training problem. It is a design problem with a corporate address.
Pillar 5: Institutional Culture as an Investigable Domain. Fort Hood’s “permissive” culture of sexual assault was not invisible. It was reported by soldiers, documented in surveys, and ignored by commanders for years before Guillén’s murder. Fort Bragg’s death rate exceeded Fort Hood’s for two years running without triggering a congressional investigation. The Vincennes’s aggressive reputation was known fleet-wide. The Collision Compact survives because institutional culture is treated as a background condition rather than an investigable cause. Culture is not weather. Culture is the product of incentive structures, promotion criteria, operational tempo decisions, and command emphasis, all of which are policy choices made by identifiable leaders. When a base’s soldiers die at rates that exceed the combat theater, the culture that produced those deaths must be investigated with the same rigor as a Class A aviation mishap, by an authority with the power to compel testimony, access records, and publish findings that the institution cannot suppress.
Closing Assessment
The Collision Compact is not a theory. It is a description of observed behavior across four decades, four services, and four failure modes, tested against the evidence and found consistent in every case. The mechanism is simple. The institution generates risk through doctrine, tempo, technology, and culture. When the risk produces a catastrophe, the institution investigates the catastrophe but not the risk. The individual at the point of failure absorbs the blame. The doctrine resumes. The next catastrophe arrives. The dead do not file appeals.
Captain Evseenko, whose submarine was run over in 1984 by an aircraft carrier that knew he was there, was relieved of command. Lieutenant Colonel Petrov was punished for preventing a nuclear war. Captain Rogers was decorated for shooting down a passenger jet. The Patriot crews were retrained after killing allies with a system that failed in testing. A hundred and nine soldiers died at Fort Bragg and the United States Congress did not notice. The pattern is not a coincidence. It is a compact, maintained by institutions that cannot afford to break it, enforced by investigation systems that are not designed to challenge it, and paid for by the people at the bottom of the chain who absorb the consequences of decisions made at the top.
The five pillars proposed here are not aspirational. They are mechanical. An independent investigation authority. Mandatory causal chain extension. Near-miss intelligence at scale. Contractor accountability in mishap findings. Institutional culture as an investigable domain. Each one breaks a specific structural element of the Compact. Together, they create a system in which the doctrine, not just the individual, faces the evidence.
The dead at every one of these incidents had names. They had families. They were doing what the institution told them to do, in the way the institution told them to do it, with the equipment the institution gave them. They did not fail the system. The system failed them. And then the system investigated itself, found the usual suspects, filed the usual report, and resumed the usual operations.
The game continues. The Compact holds. The dead stay dead.
Resonance
Arms Control Center. (2022). “The Soviet False Alarm Incident and Able Archer 83.” Center for Arms Control and Non-Proliferation. https://armscontrolcenter.org/the-soviet-false-alarm-incident-and-able-archer-83/. Summary: Analysis of the September 1983 Oko false alarm and the subsequent Able Archer 83 exercise, including Petrov’s decision, Soviet military mobilization, and the PFIAB’s conclusion that the U.S. had placed relations on a hair trigger.
Cox, Samuel J. (2018). “USS Vincennes Tragedy.” Naval History and Heritage Command, H-Gram 020. https://www.history.navy.mil/content/history/nhhc/about-us/leadership/director/directors-corner/h-grams/h-gram-020/h-020-1-uss-vincennes-tragedy-.html. Summary: NHHC Director’s authoritative account of the Iran Air 655 shootdown, including the Aegis tracking number reassignment, Commander Carlson’s identification of the aircraft as commercial, and the CIC conditions that contributed to the misidentification.
Harp, Seth. (2023). “These Kids Are Dying: Inside the Overdose Crisis Sweeping Fort Bragg.” Rolling Stone. https://www.rollingstone.com/culture/culture-features/inside-the-overdose-crisis-sweeping-fort-bragg-1396298/. Summary: Investigative reporting documenting 109 soldier deaths at Fort Bragg in 2020–2021, the Army’s stonewalling of families, discrepancies between base-reported and centrally recorded casualty data, and the absence of congressional oversight.
Hawley, John K. (2017). Cited in Sisson, Melanie, et al. (2022). “Understanding the Errors Introduced by Military AI Applications.” Brookings Institution. https://www.brookings.edu/articles/understanding-the-errors-introduced-by-military-ai-applications/. Summary: Analysis of Patriot missile fratricide incidents in 2003, the role of autonomous engagement modes, the RAF Board of Inquiry findings, and engineering psychologist Hawley’s conclusion that humans are poorly suited to monitoring autonomous weapons systems.
Kaplan, Fred. (2021). “Able Archer 1983: The World Came Much Closer to Nuclear War Than We Realized.” Slate. https://slate.com/news-and-politics/2021/02/able-archer-nuclear-war-reagan.html. Summary: Reporting on newly declassified documents revealing that Soviet forces loaded actual nuclear bombs onto combat aircraft during Able Archer 83, a fact not publicly known until the 2021 FRUS volume release.
Lerner, Eric J. (1989). Cited in “Overwhelmed by Technology: An Analysis of the Technological Failures at USS Vincennes.” Stanford University. https://xenon.stanford.edu/~lswartz/vincennes.pdf. Summary: Technical analysis of the Aegis Combat System’s user interface deficiencies, including the tracking number recycling flaw, the IFF correlation errors, and the finding that every test of the system had shown errors prior to the Iran Air 655 shootdown.
MIT Technology Review. (2005). “Preventing Fratricide.” https://www.technologyreview.com/2005/06/01/230882/preventing-fratricide/. Summary: Investigation of Patriot system failures in Iraq 2003, Raytheon’s $3.5 billion revenue stream, the Defense Science Board’s findings on IFF deficiencies known before deployment, and MIT physicist Theodore Postol’s critique of the program’s failure to identify and fix problems.
National Security Archive. (2021). “Able Archer War Scare ‘Potentially Disastrous.’” George Washington University. https://nsarchive.gwu.edu/briefing-book/aa83/2021-02-17/able-archer-war-scare-potentially-disastrous. Summary: Declassified documents including Lt. Gen. Perroots’s end-of-tour report, the NSA message confirming Soviet 4th Air Army preparations for nuclear weapons use, and the PFIAB investigation findings on the 1983 war scare.
Nuclear Museum. (n.d.). “Nuclear Close Calls: Able Archer 83.” Atomic Heritage Foundation. https://ahf.nuclearmuseum.org/ahf/history/nuclear-close-calls-able-archer-83/. Summary: Historical account of the Able Archer exercise, Soviet military responses including nuclear weapons loading and submarine deployment, and the finding that U.S. commanders were not aware of the crisis until long after the exercise ended.
Rolling Stone. (2024). “U.S. Army Audit Says Army Is Ignoring Its Own Policies to Protect Soldiers.” Rolling Stone. https://www.rollingstone.com/politics/politics-features/army-missing-soldiers-audit-1235101245/. Summary: Investigation documenting the Army’s failure to implement its own personnel protection policies, the pattern of listing missing soldiers as AWOL, the Fort Hood independent review findings, and the ongoing absence of accountability mechanisms at Fort Bragg.
UPI. (2003). “The Patriot’s Fratricide Record.” https://www.upi.com/Defense-News/2003/04/24/Feature-The-Patriots-fratricide-record/63991051224638/. Summary: Detailed technical reporting on Patriot fratricide history, the 1993 simulation showing 50 percent fratricide rate when IFF failed, the 1996 National Research Council findings, and the 60-second delay that would have reduced fratricide by 86 percent.
UPI. (2004). “UK Faults Self and US for Plane Shootdown.” https://www.upi.com/Defense-News/2004/05/14/UK-faults-self-and-US-for-plane-shootdown/30351084548727/. Summary: RAF Board of Inquiry conclusions on the Tornado shootdown, including the IFF power failure, the missing Mode 1 codes, the one-minute decision window, and the finding that a brief delay in firing would have prevented the deaths.