Why ChatGPT is not an intelligence analyst
27 November 2024
While AI models like ChatGPT demonstrate impressive capabilities in natural language processing and generating human-like text, their application in intelligence analysis is fundamentally limited. This briefing explains why ChatGPT and similar large language models (LLMs) are unsuitable for intelligence analysis, emphasising the differences between human analytical processes, as conducted by Risk Intelligence, and LLM capabilities.
By Kristian Bischoff, Analyst
LLMs such as ChatGPT are by design incapable of original analysis. ChatGPT operates by predicting text based on patterns in existing data, meaning any "analysis" it offers is drawn from existing written sources. This leads to three critical issues: a reliance on potentially outdated or incomplete perspectives, the risk of unintentional plagiarism, and reinforcement loops when some information is repeated more often than other information. Unlike human analysts, ChatGPT cannot synthesise information from diverse sources to create a coherent and novel interpretation of a specific threat situation. In this way, ChatGPT is the analytical version of the broken clock – it may be right on rare occasions, but purely by coincidence.
Intelligence analysis involves rigorous methodological processes: planning, collecting, analysing, and presenting intelligence products in a structured and thoughtful manner. Human analysts engage in critical thinking to evaluate threat levels, understand complex threat dynamics, and make nuanced judgment calls. For example, determining that “Threat X” is at “Level Y” requires not only data but also an understanding of the context, biases, and implications of that judgment. This is an inherently human skill that LLMs cannot replicate.
Moreover, the output of such AI models must always be verified by human analysts to ensure accuracy and reliability. This extra layer of review negates the efficiency LLMs are often presumed to provide. Integrity and methodological rigour are central to intelligence work, and these cannot be outsourced to systems that lack accountability, reasoning, or the ability to explain their conclusions – especially when the safety and security of real human personnel is on the line.
Another major limitation of ChatGPT is its inability to collect and evaluate information independently. Currently, ChatGPT functions as a passive repository of knowledge, drawing exclusively from its training data, which is static and historical. This raises significant concerns about the completeness, recency, and reliability of its information.
A central aspect of intelligence work is understanding where information comes from—its provenance and reliability. ChatGPT, however, operates as a "black box": users cannot trace the specific sources of its outputs. The opaqueness of its algorithms, compounded by potential algorithmic biases, creates challenges in trusting its outputs. For example, in contested information environments related to the conflicts in Ukraine and the Middle East, accurately assessing information requires understanding the credibility and intent of sources. ChatGPT cannot evaluate or contextualise sources in these dynamic and nuanced scenarios.
While ChatGPT and similar LLMs can be useful for summarising information, they are fundamentally unsuited for independent intelligence analysis. Additionally, the opacity of AI models and their reliance on static, unverified datasets further limit their usability in intelligence collection and analysis. As a result, any use of AI in intelligence work must be accompanied by stringent human oversight and verification to maintain the integrity and reliability of intelligence products. ChatGPT is neither a silver bullet for analysis nor an easy way to obtain threat assessments for particular maritime operations, and it may not be for the foreseeable future.