
From Analysis to Algorithms: Preserving the Art of Critical Thinking
By CMDR Paul D Pelczar, OAM RAN*
‘If nine of us who get the same information arrive at the same conclusion, it’s the tenth man’s responsibility to disagree.’
— ‘World War Z’ (2013, Paramount Pictures)
In the movie, Israel survives a global zombie outbreak by enforcing the ‘tenth man’ rule: if everyone agreed, one person was required to disagree to test assumptions and to ensure uncomfortable options were considered. It was dissent made part of the system.
The device was not purely fictional. After the 1973 Yom Kippur War, Israel recognised its intelligence community had been paralysed by consensus. Despite clear indicators of Egyptian and Syrian mobilisation, the prevailing view that no attack was imminent went unchallenged. Dissent became a duty: a structural safeguard against repeating the same mistake.
The lesson is clear. Intelligence is most at risk not when information is scarce, but when consensus takes hold. Groupthink narrows judgement and creates false comfort. With Artificial Intelligence (AI) and Machine Learning (ML) now embedded in the Intelligence Cycle, consensus may not just emerge; it may be produced at speed and scale. Without challenge, acceptance hardens into error.
Artificial Intelligence as a Force Multiplier
AI/ML raise efficiency but blur the balance between human judgement and machine output, and fluent results can conceal error or bias beneath convincing, even sycophantic, prose. Ethical and legal boundaries remain unsettled. If AI begins to shape operations, responsibility must not drift.[1] Authority may be delegated, but accountability cannot be automated.
Yet analysis is not only about processing information; it is about exercising acute judgement. Writing remains the act through which reasoning takes shape. When machines compose for us, the cognitive discipline of structuring argument, weighing evidence, and admitting uncertainty begins to erode.
Historical Warnings
Intelligence failure most often arises when dissent is ignored. Operation Market Garden (1944) was politically driven but strategically flawed: intelligence warned of German armour, but momentum carried the plan into failure. Australian Coastwatchers, often operating outside the chain of command, were likewise ignored before the fall of Rabaul in 1942; their later reports from Bougainville and New Georgia gave vital early warning at Guadalcanal, allowing Allied fighters to scramble. One independent voice can outweigh consensus. Just as leaders once dismissed inconvenient reports, algorithms may now reinforce prevailing narratives and marginalise alternatives.
The Google Effect and the Google Mind
In 2011, Sparrow, Liu, and Wegner published research in ‘Science’ showing people were less likely to remember information if they knew it could be retrieved online. This ‘Google Effect’ shifted memory: people retained where to find information, not the information itself.[2] Wegner’s earlier work on transactive memory showed how groups share cognitive labour, with each member specialising yet relying on collective knowledge.[3] Naval teams embody this naturally: the navigator, marine engineering officer, and flight commander each hold distinct expertise, yet as part of a command team they operate as one mind. That is healthy distributed cognition. What is emerging now is different: not shared understanding but the outsourcing of thought to systems that neither explain nor reason.
For intelligence, if AI/ML become the default, analysis risks outsourcing not only recall but the art of critical thought. These systems operate on precedent and probability. They cannot identify the unknown unknowns. This is the ‘Google Mind’: an intelligence culture that remembers how to search but has begun to outsource how to think.
Open-Source Intelligence, Manipulation, and Tradecraft
Open-source intelligence (OSINT) is now central to collection and assessment. In Ukraine, drones, smartphones, and commercial imagery have exposed Russia’s Black Sea Fleet and tracked its movements. Civil groups now contribute directly, building a picture once reserved for state services.
But OSINT is also exposed to manipulation through information operations and public-facing platform dynamics. Disinformation is deliberate falsehood; misinformation is falsehood spread unwittingly; malinformation is truth used out of context. Astroturfing manufactures fake ‘grassroots’ consensus. Sock puppets pose as real personas, polluting collection. Brigading and bot swarms overwhelm debate and skew sentiment analysis. Deepfakes add fabricated imagery and audio.
In the maritime domain, Russia has shaped narratives around Black Sea shipping and grain exports using selective claims and staged reports. Bot networks then amplify the story until it appears credible. This is the caution for AI: without verification and controls, systems risk repeating manufactured consensus.
Large Language Models and Information Assurance
Large Language Models (LLMs) offer speed and fluency but depend on data integrity: the quality of what they ingest and its resilience against manipulation. Without independent verification and corroboration, their outputs remain assertions, not assessments.
Adversaries can poison training data, saturate the environment to crowd out authentic signals, or bias retrieval so algorithms surface preferred narratives. Controls must match those applied to other collection systems: source validation, cross-cueing, and audit trails. LLM outputs should be treated like any single-source report: useful, but unconfirmed until checked against independent streams.
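How such a control might look in practice can be sketched in a few lines of code. The example below is illustrative only: the Claim and SourceReport structures, the corroboration threshold, and the source names are assumptions invented for this sketch, not an existing tool or doctrine. It simply shows an LLM-derived claim held at an ‘unconfirmed’ status, with an audit trail, until enough independent streams support it.

```python
# Illustrative sketch only: an LLM-derived claim is treated as a single-source
# report that stays "UNCONFIRMED" until independent streams corroborate it.
# Claim, SourceReport, and the threshold are invented for this example.
from dataclasses import dataclass, field

@dataclass
class SourceReport:
    source_id: str        # e.g. "commercial-imagery", "coastal-observer"
    supports_claim: bool  # analyst judgement after validating this source

@dataclass
class Claim:
    text: str
    origin: str = "LLM"          # single-source origin
    status: str = "UNCONFIRMED"  # default posture: unverified
    audit_trail: list = field(default_factory=list)

def corroborate(claim: Claim, reports: list, required: int = 2) -> Claim:
    """Upgrade the claim only if enough independent sources support it."""
    supporting = sorted({r.source_id for r in reports if r.supports_claim})
    claim.audit_trail.append(
        f"checked {len(reports)} independent reports; supported by {supporting}")
    if len(supporting) >= required:
        claim.status = "CORROBORATED"
    return claim

if __name__ == "__main__":
    claim = Claim("Fleet units dispersing from home port")
    reports = [SourceReport("commercial-imagery", True),
               SourceReport("coastal-observer", True),
               SourceReport("scraped-social-media", False)]
    print(corroborate(claim, reports).status)               # CORROBORATED
    print(corroborate(Claim("Unchecked claim"), []).status)  # UNCONFIRMED
```

The design point is the default: nothing an LLM produces starts as confirmed, and every upgrade leaves a record that can be audited later.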
In future operations, advantage may belong to those who decide fastest. The danger is mistaking velocity for accuracy. Machine fluency often conceals fragility. Deliberate, written, and compared, the craft of analysis remains the safeguard between pace and peril.
Authoritarian regimes exploit these weaknesses. Information operations saturate the digital environment until repetition itself creates weight. This is reflexive control updated: manipulating perception until the target acts in ways that serve the regime.
AI/ML also introduce challenges in classification and data sharing. Systems trained on unclassified or commercial data must still integrate with classified assessments across allied networks. Variations in policy, data standards, and release authorities complicate fusion. For partners, the task is to protect sources while maintaining interoperability.
Cultural Warnings
Popular culture has long carried the caution. ‘2001: A Space Odyssey’ (1968, MGM) gave us HAL: calm, fluent, unaccountable. ‘The Terminator’ (1984, Orion Pictures) gave us Skynet: fast, decisive, without restraint. ‘The Matrix’ (1999, Warner Bros.) offered the red pill, the discomfort of truth; and the blue pill, the comfort of illusion.
AI may provide fluent consensus, but the red pill remains the choice to question, to contest, and to face uncertainty rather than accept illusion. Machines may deliver answers, but final judgement remains a human duty.
Guardrails and the Last Watch
The ‘tenth man’ must be more than an allegory. In naval terms, it becomes the ‘Last Watch’: the assigned responsibility to contest consensus when others do not.[4] Contestability must be embedded, encouraged, and defended.
- Red teams for AI outputs with mandated checks to probe assumptions.
- Structured analytic techniques, e.g. analysis of competing hypotheses, devil’s advocacy, and pre-mortems (see the sketch below).
- Recognising and mitigating human and algorithmic bias in assessment.
- Algorithmic transparency revealing inputs, limits, and underlying assumptions.
- Institutional contestability by assigning authority to challenge assessments.
- Adversarial testing that exercises systems against data poisoning and manipulation.
Trust is earned when outputs survive these tests and align with independent sources. Only then should AI-assisted assessments move forward.
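As one illustration of the structured techniques listed above, the sketch below scores a toy analysis of competing hypotheses (ACH). The hypotheses, evidence items, and ratings are placeholders invented for the example; a real ACH matrix would also weight source reliability and the diagnosticity of each item. What it demonstrates is the discipline itself: ACH favours the hypothesis with the fewest inconsistencies, not the one with the most confirmation.

```python
# Illustrative sketch of Analysis of Competing Hypotheses (ACH) scoring.
# Hypotheses, evidence, and ratings are placeholders invented for this example.
CONSISTENT, INCONSISTENT, NEUTRAL = "C", "I", "N"

hypotheses = ["H1: exercise only", "H2: preparation for attack"]

# evidence item -> rating against each hypothesis
evidence = {
    "Mobilisation observed":      {"H1: exercise only": CONSISTENT,
                                   "H2: preparation for attack": CONSISTENT},
    "Leave cancelled fleet-wide": {"H1: exercise only": INCONSISTENT,
                                   "H2: preparation for attack": CONSISTENT},
    "No diplomatic warning":      {"H1: exercise only": NEUTRAL,
                                   "H2: preparation for attack": CONSISTENT},
}

def inconsistency_score(hypothesis: str) -> int:
    """ACH favours the hypothesis with the fewest inconsistencies,
    not the one with the most confirming evidence."""
    return sum(1 for ratings in evidence.values()
               if ratings[hypothesis] == INCONSISTENT)

for h in sorted(hypotheses, key=inconsistency_score):
    print(f"{h}: {inconsistency_score(h)} inconsistencies")
```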
Conclusion: Standing the Last Watch
AI/ML will undoubtedly increase productivity and shorten timelines across the cycle. But they cannot predict the unexpected. They will not dissent. The Google Effect showed how memory shifted in the digital age. Left unchecked, the Google Mind may shift analysis itself from questioning to accepting, while adversaries flood the environment with manipulation until distortion passes as truth.
To preserve independent thought, dissent must be mandated and authorised. Tradecraft must be reinforced. Outputs must be tested, not trusted by default. In ‘World War Z’, survival began when one man was ordered to disagree. After Yom Kippur, Israel made contestability a duty. In intelligence today, that duty is the Last Watch; the refusal to yield final judgement to machines or algorithms. It is the human safeguard when certainty is most seductive and most dangerous. Even under the pressure of real-time decisions, where speed can save lives or drive the initiative, the duty to question remains. To remain silent in the face of error is not caution; it is complicity.
*CMDR Pelczar has spent his career attempting to refine his ability for thoughtful writing. A naval officer with broad experience in information warfare and intelligence, he fears that the discipline of carefully constructing argument, recommendation, and debate risks being displaced by the blandness of machine prose.
[1] Royal Australian Navy. “Robotics, Autonomous Systems and Artificial Intelligence Strategy 2040.” https://www.navy.gov.au/about-navy/strategic-planning/robotics-autonomous-systems-artificial-intelligence-strategy
[2] Betsy Sparrow, Jenny Liu, and Daniel M. Wegner. “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.” Science 333, no. 6043 (July 14, 2011): 776–78. https://doi.org/10.1126/science.1207745
[3] Daniel M. Wegner. “A Computer Network Model of Human Transactive Memory.” Social Cognition 13, no. 3 (1995): 319–339.
[4] ‘Last Watch’: a navalised version of the ‘tenth man’, a mandated, rank-protected dissent role in key assessments.