A Comprehensive Vulnerability Assessment of the Lattice AI Platform: An Analysis of Technical, Operational, and Strategic Weaknesses

Executive Summary

This report provides a comprehensive vulnerability assessment of a “Lattice-like” AI-powered command and control platform. Such a platform is an advanced, software-defined operating system designed to fuse sensor data and coordinate autonomous military assets. This analysis moves beyond isolated technical flaws to present an integrated view of the platform’s weaknesses across technical, operational, systemic, human, and strategic domains. It argues that the platform’s core strengths—speed, autonomy, and data fusion—are also the source of its most profound and interconnected vulnerabilities.

Key Findings

  • Algorithmic and Data-Centric Vulnerabilities: The platform’s AI core is susceptible to data poisoning, adversarial deception, and inherent bias. These can corrupt its decision-making integrity at a foundational level. The reliance on a complex software supply chain, including open-source components, creates additional vectors for compromise. ³⁴ ¹⁰⁸
  • Operational and Network-Layer Threats: In the field, the system is vulnerable to electronic warfare, sensor spoofing (particularly of GNSS signals), and logical attacks on its decentralized mesh network. These attacks can sever its connection to reality and render its algorithms useless or dangerous. ⁵⁴ ⁹⁷
  • Systemic and Architectural Flaws: The platform’s hardware-agnostic and multi-vendor design, while flexible, introduces “brittleness” and critical security gaps at integration “seams.” This was demonstrated by the real-world deficiencies found in the Next Generation Command and Control (NGC2) prototype.¹ ¹⁵ ⁴⁵ ⁶¹ ⁷⁵ ¹⁰⁹ ¹⁴² ¹⁴⁹ The system’s complexity can also lead to unpredictable and dangerous emergent behaviors.²² ¹⁰³ ¹¹⁶
  • Human, Ethical, and Legal Failures: The system’s speed and opacity challenge meaningful human control by inducing automation bias, a phenomenon implicated in historical incidents like the 2003 Patriot missile fratricides.³⁰ ⁷² ⁹⁵ ⁹⁶ ¹⁰⁵ This creates a legal “accountability gap” and poses significant challenges to compliance with International Humanitarian Law.⁴ ⁵ ²⁴
  • Strategic and Dual-Use Risks: The core surveillance and data-fusion technologies are inherently dual-use. This poses a risk of them being repurposed for domestic oppression.³¹ ⁵⁶ The proliferation of such advanced autonomous capabilities also risks triggering a new, destabilizing global arms race.²³ ⁵⁵ ⁸⁸ ¹¹² ¹²⁴ ¹²⁶ ¹⁷⁷ ¹⁸⁶

The report concludes that these weaknesses are not isolated. They exist in a causal chain where a failure in one domain can cascade and lead to catastrophic outcomes. To mitigate these risks, this assessment proposes a series of strategic recommendations. These include mandating continuous adversarial testing, investing in operationally-focused Explainable AI (XAI), enforcing a Zero Trust architecture, overhauling operator training to focus on cognitive skills, and reforming acquisition processes to prioritize holistic security and reliability. The report also highlights the challenges associated with implementing these mitigations and suggests areas for future research, emphasizing the need for continuous adaptation to the evolving threat landscape.

Introduction

The introduction of AI-powered, software-defined platforms into military operations represents a fundamental paradigm shift. It moves command and control (C2) from a hardware-centric model to a dynamic, data-driven ecosystem. The Lattice AI platform, an operating system for autonomous warfare, epitomizes this transformation.

Lattice is designed as a hardware-agnostic, decentralized system. It fuses vast streams of sensor data, identifies threats, and coordinates the actions of autonomous assets—such as drone swarms and unmanned ground vehicles—at machine speed. This capability promises to compress decision cycles, accelerate operational tempo, and provide an asymmetric advantage on the modern battlefield.¹⁴³ ¹⁸⁶

However, this report posits that the very features that are the source of the platform’s strengths are also the origins of its most profound weaknesses. The reliance on data fusion creates a vulnerability to data corruption. The pursuit of speed challenges meaningful human oversight. The goal of autonomy introduces legal and ethical accountability gaps. And the decentralized network architecture opens new vectors for systemic electronic attack. The platform is not merely a tool with potential flaws; it is a complex adaptive system whose vulnerabilities are emergent properties of its core design.

This comprehensive assessment will deconstruct the multifaceted weaknesses of a Lattice-like AI platform. It will move beyond a simple catalog of vulnerabilities to an integrated analysis of their causal relationships and strategic implications. The analysis will proceed outward from the platform’s core to its operational context, examining:

  • Vulnerabilities inherent in its algorithms and data pipelines.
  • Threats it faces in contested physical and electromagnetic environments.
  • Systemic flaws rooted in its complex, multi-vendor architecture.
  • Critical challenges it poses to human control, ethics, and legal compliance.
  • Long-term strategic risks associated with its dual-use nature and potential for global proliferation.

This report is intended to provide senior defense strategists, policymakers, and acquisition officials with a definitive framework for understanding and mitigating the full spectrum of risks associated with this transformative military technology.

Section 1: Algorithmic and Data-Centric Vulnerabilities: The Corruptible Core

The cognitive core of the Lattice platform is its suite of artificial intelligence and machine learning models. These models are responsible for perception, analysis, and decision-making. The integrity of this core depends entirely on the data it consumes, both during its initial training and its operational deployment.

This section dissects the weaknesses inherent in these models and their data pipelines. It argues that the platform’s decision-making integrity is perpetually at risk from subtle, difficult-to-detect manipulations of its foundational data. These vulnerabilities are not peripheral bugs but fundamental challenges to the system’s reliability in mission-critical scenarios. The algorithmic flaws detailed here create the foundational risks that are then amplified by the operational and systemic weaknesses discussed in subsequent sections.

1.1 The Fragility of the Data Foundation: Poisoning and Pollution

The performance of any AI system is inextricably linked to the quality and integrity of the data used to train and operate it. For a military data fusion platform like Lattice, which ingests immense volumes of information from diverse sources, this dependency is a critical vulnerability.

Poor quality data, whether from unintentional errors or malicious intent, can yield dangerously flawed decisions. This could lead to mission failure, nuclear sabotage, or personnel injury.¹⁶² Adversaries systematically exploit this vulnerability through data poisoning. This is a class of adversarial attack where an actor intentionally corrupts a model’s training data to control or degrade its behavior after deployment.¹³ ⁸⁶

The methods of data poisoning are varied and sophisticated, presenting a formidable challenge to defensive measures. Attacks can be broadly categorized by their objective and methodology:

  • Indiscriminate vs. Targeted Attacks: An indiscriminate attack aims to degrade the model’s overall accuracy and reliability, sowing general distrust. A more insidious targeted attack is designed to cause a specific, predictable failure under certain conditions, such as misclassifying a friendly vehicle as hostile.¹³
  • Dirty-Label vs. Clean-Label Attacks: A “dirty-label” attack involves deliberately mismatching an input with its label (e.g., labeling an image of a friendly tank as an “enemy APC”). A far more dangerous “clean-label” attack involves making subtle, almost imperceptible modifications to the input data itself while keeping the correct label. The poisoned data appears valid to a human analyst but is engineered to corrupt the model’s internal logic.¹³
  • Direct vs. Indirect Attacks: A direct attack involves altering data within the secure training pipeline, requiring insider access or a breach. An indirect attack is a more scalable threat, especially for a platform that relies on Open-Source Intelligence (OSINT).¹⁶² In this scenario, an adversary places malicious content on public websites, forums, or social media, knowing it will be scraped into a future training dataset.¹³
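
To make the mechanics concrete, the sketch below simulates a dirty-label attack against a toy classifier: a fraction of training labels is flipped and the resulting loss of test accuracy is measured. The scikit-learn dataset, model choice, and poisoning rates are illustrative assumptions, not a representation of any fielded system.

```python
# Minimal sketch: effect of a dirty-label poisoning attack on a toy classifier.
# The dataset, model, and poisoning rates are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a fraction of training samples, then measure test accuracy."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # dirty-label: deliberately mismatched labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.05, 0.20, 0.40):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poisoning(frac):.3f}")
```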

Defending against these attacks is a monumental task. It requires a multi-layered strategy that goes far beyond simple data validation. Effective defense necessitates continuous monitoring of data pipelines, advanced anomaly detection, and robust data management practices.⁸⁶ However, the sheer scale and velocity of data ingested by military systems make a complete validation of every data point practically impossible.⁸
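
As a minimal illustration of what such pipeline screening can involve, the sketch below applies a simple per-feature statistical check to an incoming data batch. The threshold and feature layout are assumptions, and a real defense would layer provenance verification and model-based anomaly detection on top of anything this simple.

```python
# Minimal sketch of a statistical screen for incoming training records.
# The z-score threshold and feature layout are illustrative assumptions.
import numpy as np

def flag_outliers(batch: np.ndarray, reference: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return a boolean mask of rows in `batch` that deviate strongly from the
    per-feature statistics of a trusted `reference` sample."""
    mu = reference.mean(axis=0)
    sigma = reference.std(axis=0) + 1e-9
    z = np.abs((batch - mu) / sigma)
    return (z > z_threshold).any(axis=1)

reference = np.random.default_rng(1).normal(size=(5000, 8))   # vetted historical data
batch = np.random.default_rng(2).normal(size=(100, 8))        # new, untrusted batch
batch[:3] += 25.0                                              # simulated corrupted records
suspicious = flag_outliers(batch, reference)
print(f"{suspicious.sum()} of {len(batch)} records flagged for manual review")
```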

1.2 Adversarial Deception: Manipulating Perception at the Input Layer

While data poisoning targets a model during training, adversarial deception attacks target a fully trained and deployed model. These attacks, also known as evasion attacks, involve crafting special inputs—“adversarial examples”—that fool the model into making an incorrect prediction.⁴¹ These inputs often contain subtle alterations that exploit vulnerabilities in the model’s logic, causing it to misclassify what it “sees”.⁷⁶ For a platform like Lattice, this represents a fundamental threat to its operational reliability.

Evasion attacks can be non-targeted, simply causing any incorrect output, or targeted, forcing a specific incorrect output, such as classifying a civilian vehicle as a military target.⁴¹ The latter is particularly dangerous.
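
The sketch below illustrates the mechanics of a digital perturbation using the well-known fast gradient sign method (FGSM) against a toy, untrained PyTorch model. The model, the random input, and the perturbation budget are placeholders chosen only to show how a small, structured change to an input can flip a prediction.

```python
# Minimal FGSM sketch: an untrained toy model and random input, for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.rand(1, 1, 32, 32, requires_grad=True)   # stand-in for a sensor image
true_label = torch.tensor([0])                      # e.g., "friendly vehicle"

loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.03                                      # perturbation budget (assumed)
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)   # adversarial example

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```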

A critical weakness is the emergence of physical attack vectors. An adversary can deploy specially designed objects, patterns, or even modified uniforms into the physical environment to disrupt the AI’s perception systems.⁶⁰ This blurs the line between cyber warfare and traditional military deception. For instance, researchers have shown how small stickers on a stop sign can cause an autonomous vehicle’s AI to misclassify it.⁶⁰ In a military context, an adversary could develop vehicle camouflage specifically engineered to be misclassified by object recognition models.

The threat is further amplified by generative adversarial attacks. Here, an adversary uses their own AI models to generate hyper-realistic fake imagery, video, or sensor data to fool surveillance systems.³¹ ⁶⁰ This creates an adaptive, learning-based threat where the adversary can continuously refine their deceptive inputs to bypass defensive measures.

1.3 Inherent and Amplified Bias: Flawed Logic and Unjust Outcomes

Algorithmic bias is a pervasive weakness in AI systems that can arise at any stage of the development lifecycle: from the training data, to the design choices, to the context of use.¹ In a military context, these biases can have catastrophic consequences.¹ ¹¹⁸

The foundation of this vulnerability lies in the training data. Military datasets are often narrow, representing specific conflicts or environments. An AI model trained exclusively on data from desert warfare may perform poorly in an urban or arctic setting.¹ This “brittleness”—the inability to adapt to new circumstances—is a significant operational risk.¹ ⁹¹

Furthermore, the data may contain implicit biases. If historical surveillance data used for training disproportionately shows individuals of a certain ethnicity in a hostile context, the AI model will learn and codify this bias. A biased model could wrongfully assess individuals to be combatants based on their age, gender, or skin tone, leading to the illegal targeting of non-combatants.¹
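
One way to surface such bias before deployment is to measure error rates per demographic group. The sketch below does this over a hypothetical prediction log; the group attribute, the log format, and the tiny sample are assumptions made purely for illustration.

```python
# Minimal sketch of a per-group error audit over logged classifier outputs.
# The group attribute, log format, and sample data are hypothetical.
from collections import defaultdict

# Each record: (group, true_label, predicted_label) where 1 == "hostile".
prediction_log = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, truth, pred in prediction_log:
    if truth == 0:
        counts[group]["negatives"] += 1
        counts[group]["fp"] += int(pred == 1)

for group, c in counts.items():
    fpr = c["fp"] / max(c["negatives"], 1)
    print(f"{group}: false-positive rate {fpr:.2f} over {c['negatives']} non-hostile cases")
```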

This problem is compounded by “automation bias.” This is the human tendency to excessively trust and defer to the recommendations of an automated system, especially under pressure.¹ An operator may be inclined to accept a high-confidence targeting solution without sufficient scrutiny, thereby amplifying the real-world impact of any underlying algorithmic bias.

1.4 The Compromised Supply Chain: Third-Party and Open-Source Risks

The Lattice platform is not a monolithic entity. Its creation relies on a complex and globally distributed software supply chain. This includes third-party datasets, open-source foundational models, commercial software libraries, and Application Programming Interfaces (APIs).³⁴ ¹⁰⁸ Each link in this chain represents a potential vector for compromise.

A primary risk lies in the use of open-source datasets. Most systems, including those in the defense sector, leverage publicly available datasets and models as a starting point.³⁴ These massive datasets are extremely difficult to audit for quality and security. This creates a prime opportunity for actors to conduct data poisoning campaigns by seeding these repositories with malicious code or misinformation.³⁴
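
A partial safeguard, sketched below, is to admit third-party artifacts into the pipeline only after verifying them against a pinned cryptographic digest. The file name and digest are placeholders; pinning detects tampering with a previously vetted artifact, it does not certify that the original contents were benign.

```python
# Minimal sketch of digest pinning for a third-party dataset or model artifact.
# The artifact name and expected hash are placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Stream the file and compare its SHA-256 digest to the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

artifact = Path("open_source_weights.bin")          # hypothetical downloaded artifact
if artifact.exists() and verify_artifact(artifact, PINNED_SHA256):
    print("artifact matches pinned digest; admitting to pipeline")
else:
    print("artifact missing or digest mismatch; quarantining")
```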

API vulnerabilities present another critical attack surface. APIs are the connective tissue that allows Lattice to integrate with other systems. If these APIs lack robust security protocols, they can be exploited to inject malicious data or exfiltrate sensitive information.³⁴

Finally, the risk of integrating software from multiple commercial vendors is significant. The Next Generation Command and Control (NGC2) program serves as a stark warning. A U.S. Army memo revealed that the prototype, developed by Anduril, Palantir, and Microsoft, suffered from “critical deficiencies in fundamental security controls,” including a complete lack of access controls and unvetted third-party applications with hundreds of security flaws.¹⁵ ⁴⁵ This case illustrates that the integration points, or “seams,” between different vendors’ products are often the weakest links.⁵⁸

This table provides a taxonomy of the various algorithmic attacks that can be directed at an AI platform, categorizing them by their method and potential impact.

Table 1: Taxonomy of Algorithmic Attacks on the Lattice Platform

| Attack Category | Attack Type | Mechanism of Action | Potential Impact on Lattice | Detection Difficulty |
| --- | --- | --- | --- | --- |
| Data Poisoning | Dirty-Label | Adversary injects data with obviously incorrect labels into the training set.¹³ | Degrades overall model accuracy; may cause predictable misclassifications. | Low to Medium |
| Data Poisoning | Clean-Label | Adversary makes subtle, imperceptible changes to training data while keeping correct labels, creating a backdoor.¹³ | Causes specific, targeted failures under trigger conditions (e.g., misidentifies friendly forces when a specific symbol is present). | High |
| Data Poisoning | Indirect / OSINT | Adversary places malicious content on public sources (websites, documents) to be scraped into future training sets.¹³ | Introduces subtle biases or backdoors into the model without direct network access. | High |
| Evasion Attack | Digital Perturbation | Adversary adds a small amount of carefully crafted digital “noise” to an image or sensor reading to cause misclassification.⁴¹ | Fails to detect an enemy asset (false negative) or misidentifies a civilian object as a threat (false positive). | Medium |
| Evasion Attack | Physical Patch | Adversary places a physical object (e.g., a sticker, a specific pattern) in the real world to fool the AI’s perception.⁶⁰ | An enemy vehicle with a specific patch becomes “invisible” to the AI, or a friendly vehicle is misidentified as hostile. | High |
| Evasion Attack | Generative Adversarial | Adversary uses an AI to generate hyper-realistic fake sensor data (e.g., imagery, radar signals) to deceive the system.³¹ ⁶⁰ | Overwhelms the system with high-fidelity decoys; creates phantom threats that divert resources. | High |
| Bias | Data-Driven | Model is trained on a dataset that is unrepresentative of the operational environment or contains societal biases.¹ | Poor performance in new environments; systematically misidentifies individuals based on demographics, leading to IHL violations. | Medium |
| Supply Chain | Open-Source Poisoning | Malicious code or data is injected into public datasets or models used as a foundation for Lattice.³⁴ | Introduces widespread, difficult-to-trace vulnerabilities or biases into the core of the platform. | High |

Section 2: Operational Environment and Network-Layer Threats: The Contested Physical and Electromagnetic Battlefield

Beyond its internal algorithms, the Lattice platform’s effectiveness depends on its ability to perceive and communicate in a hostile environment. Its sensors are its eyes and ears; its communication networks are its nervous system. This section analyzes weaknesses that emerge from the platform’s interaction with the physical world. It focuses on the platform’s reliance on sensors and networks that are prime targets for enemy action. The central argument is that the platform’s model of reality can be severed from reality itself, or replaced with a malicious fabrication, rendering its advanced algorithms useless or even dangerous.

2.1 Sensor Integrity Under Attack: Spoofing and Deception

The entire decision-making cycle of the Lattice platform begins with sensor data. If these foundational inputs can be manipulated, the platform’s entire world-model becomes a fabrication. Any subsequent analysis or action will be fundamentally flawed. This is the principle behind sensor spoofing and deception attacks.

A critical vulnerability is the spoofing of Global Navigation Satellite Systems (GNSS), such as GPS. While jamming simply denies access to GNSS signals, spoofing is far more insidious. An adversary transmits false GNSS signals that are stronger than the authentic ones, deceiving a receiver into calculating an incorrect position or time.⁹⁷ The danger lies in its subtlety; an autonomous drone may not realize it is being spoofed and will continue to operate with high confidence based on false information. The increasing availability of the necessary technology means that sophisticated GNSS spoofing is no longer the exclusive domain of state actors.⁹⁷
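
A common mitigation pattern is to cross-check each GNSS fix against an independent navigation source. The sketch below compares reported fixes with a position dead-reckoned from inertial velocity and flags physically implausible jumps; the flat-earth local coordinates, time step, and error threshold are simplifying assumptions.

```python
# Minimal sketch of a GNSS plausibility check against inertial dead reckoning.
# Coordinates, time step, and threshold are simplifying assumptions.
import math

def plausible_fix(prev_pos, velocity, dt, reported_pos, max_error_m=50.0) -> bool:
    """True if the reported GNSS fix is consistent with inertial dead reckoning."""
    predicted = (prev_pos[0] + velocity[0] * dt, prev_pos[1] + velocity[1] * dt)
    error = math.dist(predicted, reported_pos)
    return error <= max_error_m

prev_pos = (0.0, 0.0)            # last trusted position, metres in a local frame
velocity = (20.0, 0.0)           # m/s from the inertial navigation system
honest_fix = (19.5, 0.4)         # close to the dead-reckoned prediction
spoofed_fix = (250.0, -80.0)     # large, physically implausible jump

for label, fix in (("honest", honest_fix), ("spoofed", spoofed_fix)):
    ok = plausible_fix(prev_pos, velocity, dt=1.0, reported_pos=fix)
    print(f"{label} fix accepted: {ok}")
```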

This vulnerability extends beyond navigation. Adversaries can employ Multi-Domain Military Deception (MILDEC) to manipulate the full spectrum of the platform’s sensors. This involves coordinated efforts to create compelling “false positives” (decoys) or “false negatives” (camouflage) across multiple domains simultaneously.¹² ⁷⁷ An AI-powered fusion engine is a prime target for such techniques. An adversary can aim to overload its processing capacity by generating a large number of false targets, forcing the system to waste analytical resources.¹²

2.2 Contesting the Spectrum: The Electronic Warfare Threat

The Lattice platform and its distributed network are critically dependent on the electromagnetic (EM) spectrum for communication and sensing. This dependency makes the entire system a primary target for Electronic Warfare (EW), which focuses on controlling the EM spectrum to disrupt enemy operations.⁶⁸

The most direct threat is Electronic Attack (EA), commonly known as jamming. An adversary can use powerful transmitters to overwhelm the frequencies used by the platform’s tactical data links.⁶⁸ ¹¹¹ This can sever the connection between nodes, isolate assets, and prevent operators from controlling autonomous systems.³⁵ The rise of cognitive EW systems, which use AI to adapt jamming signals in real-time, threatens to overcome modern defenses.⁵⁴ The recent conflict in Ukraine has demonstrated the effectiveness of EW in disrupting drone operations.¹⁶⁹

Beyond active jamming, adversaries can employ passive EW techniques. Electronic Warfare Support (ES) and Signals Intelligence (SIGINT) involve intercepting and analyzing an opponent’s radio frequency emissions.⁶⁸ Even if communications are encrypted, an adversary can perform traffic analysis. By studying the volume, timing, and origin of messages, they can map the network’s topology, identify key nodes, and deduce the operational tempo, all without decrypting a single message.³⁵
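
The sketch below illustrates how little an adversary needs for traffic analysis: counting source and destination pairs from intercepted, still-encrypted link metadata is enough to nominate the busiest, and likely most important, nodes. The observation log format is a hypothetical simplification.

```python
# Minimal sketch of traffic analysis on encrypted link metadata: no payloads are
# read, only who talks to whom. The log format is a hypothetical simplification.
from collections import Counter

# (source, destination) pairs observed over the air; payloads remain encrypted.
observed_links = [
    ("node_a", "node_c"), ("node_b", "node_c"), ("node_d", "node_c"),
    ("node_c", "node_e"), ("node_a", "node_c"), ("node_b", "node_c"),
]

traffic_per_node = Counter()
for src, dst in observed_links:
    traffic_per_node[src] += 1
    traffic_per_node[dst] += 1

for node, volume in traffic_per_node.most_common(3):
    print(f"{node}: {volume} observed messages (candidate command or relay node)")
```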

2.3 Decentralization as a Double-Edged Sword: Wireless Mesh Network Flaws

To enhance resilience, platforms like Lattice often use a decentralized wireless mesh network. In this architecture, each node can act as a router, relaying data for other nodes. This creates a self-healing network that can withstand the loss of individual nodes.¹⁶ However, this dynamic architecture introduces a new set of logical vulnerabilities.¹⁶

The routing protocols that allow the network to be dynamic are susceptible to manipulation. Specific attack vectors include:

  • Flooding Attacks: An attacker can broadcast a continuous stream of routing requests or false error reports. This consumes processing power and bandwidth, ultimately leading to a denial-of-service condition.⁹²
  • Path Diversion Attacks: An attacker can manipulate routing protocol messages to trick other nodes into believing the attacker’s node offers the most efficient path. This allows the adversary to divert the flow of information through a malicious node.⁹²
  • Blackhole and Wormhole Attacks: In a wormhole attack, the malicious node forwards diverted traffic but also copies it for intelligence-gathering. In a more destructive blackhole attack, the malicious node simply discards all packets it receives, creating a void in the network.⁹²
  • Sybil and Man-in-the-Middle Attacks: In a Sybil attack, an adversary creates numerous falsified identities to disrupt the network. This can be a precursor to a man-in-the-middle attack, where the adversary intercepts, modifies, or injects data into the communication stream.¹⁷
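
As an illustration of one possible countermeasure to the blackhole case above, the sketch below has a node track the per-neighbour packet delivery ratio and flag neighbours whose ratio collapses. The counters and threshold are illustrative and would need tuning against normal link loss.

```python
# Minimal sketch of blackhole detection via per-neighbour packet delivery ratio.
# Counters and the threshold are illustrative values.
forwarding_stats = {
    "neighbour_1": {"sent": 400, "acked": 388},
    "neighbour_2": {"sent": 350, "acked": 341},
    "neighbour_3": {"sent": 420, "acked": 12},   # candidate blackhole
}

DELIVERY_THRESHOLD = 0.6   # assumed; would be tuned against expected link loss

for neighbour, stats in forwarding_stats.items():
    ratio = stats["acked"] / max(stats["sent"], 1)
    status = "SUSPECT (possible blackhole)" if ratio < DELIVERY_THRESHOLD else "ok"
    print(f"{neighbour}: delivery ratio {ratio:.2f} -> {status}")
```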

These network-layer attacks demonstrate a critical principle: resilience in one domain can create new vulnerabilities in another. The decentralized structure designed to resist kinetic attacks is inherently vulnerable to logical attacks that exploit its own rules of operation.

Section 3: Systemic and Architectural Flaws: Cracks in the Foundation

This section shifts the analysis from external threats to internal weaknesses rooted in the Lattice platform’s design and development. The pursuit of a universal, hardware-agnostic, multi-vendor platform creates inherent “brittleness,” introduces vulnerabilities at integration “seams,” and gives rise to unpredictable emergent behaviors. These are not simple bugs but deep-seated architectural challenges. The platform’s complexity and ambition are, themselves, a source of significant risk.

3.1 The Perils of Integration (Case Study: NGC2): Vulnerabilities at the ‘Seams’

Modern defense systems are rarely the product of a single company. Integrating platforms from multiple vendors carries immense risk, as the interfaces—the “seams” between systems—are often where critical vulnerabilities arise.⁵⁸ The Anduril-Palantir Next Generation Command and Control (NGC2) prototype, a system similar to Lattice, serves as a crucial cautionary case study.⁴⁵

In September 2025, a U.S. Army memo assessed the NGC2 prototype. It detailed “fundamental security” problems and “critical deficiencies,” leading to the conclusion that the system was a “very high risk”.¹⁵ ²⁹ The memo outlined a series of catastrophic security failures stemming from the integration process:

  • Complete Lack of Access Control: The memo stated, “We cannot control who sees what.” This meant any user could potentially access all data, regardless of their clearance level, creating a massive vulnerability to insider threats.¹⁵ ¹⁰⁹ ¹⁴⁹
  • Absence of an Audit Trail: The system lacked mechanisms to log or track user actions, making it impossible to conduct forensic analysis after a security incident. The memo noted, “We cannot see what users are doing.”¹⁵
  • Unvetted Third-Party Software: Third-party applications had “bypassed standard Army security assessments.” One application contained 25 high-severity code vulnerabilities, while three others each contained over 200 flaws.¹⁵
  • Inability to Verify Software Integrity: The memo admitted, “we cannot verify that the software itself is secure”.²⁹ This points to a fundamental breakdown in supply chain security.
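
The two most basic missing controls are straightforward to express in code. The sketch below pairs a role-based access check (“who sees what”) with an audit log entry for every decision (“what users are doing”); the roles, resources, and logging backend are placeholders, not a C2 design.

```python
# Minimal sketch of a role-based access check plus audit logging.
# Roles, resources, and the logging backend are placeholders.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

ROLE_PERMISSIONS = {
    "analyst": {"read:track_data"},
    "commander": {"read:track_data", "read:mission_plans", "approve:engagement"},
}

def access(user: str, role: str, action: str) -> bool:
    """Check the role's permissions and record the attempt in the audit trail."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("%s | user=%s role=%s action=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user, role, action, allowed)
    return allowed

access("jdoe", "analyst", "read:track_data")       # permitted, and logged
access("jdoe", "analyst", "read:mission_plans")    # denied, and logged
```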

While the companies involved stated the issues were part of a normal development process and were quickly mitigated, the incident is deeply revealing.⁴⁵ It demonstrates that integrating cutting-edge systems comes with a heavy “integration tax” in the form of security risk. The drive for rapid, agile development can clash directly with the rigorous, security-first requirements of military systems.⁹

3.2 The Hardware-Agnostic Paradox: Brittleness and Unpredictability

A core design goal of platforms like Lattice is to be “hardware-agnostic”—a universal software operating system that can control any sensor or vehicle. This approach promises flexibility but also creates a paradox. The abstraction required to make the software universal can lead to a dangerous disconnect from the physical realities of the hardware it controls, resulting in “brittleness”.⁹ ¹¹ An AI model may behave unpredictably when confronted with new equipment or unanticipated circumstances.¹

The challenge is exacerbated by the need to integrate with legacy military hardware. These older systems often use proprietary interfaces and were not designed for a modern, software-defined architecture. Connecting them to a platform like Lattice can result in degraded performance or system failure.⁹

A potent example comes from the developers themselves. During testing of Anduril’s Anvil interceptor drone, a software upgrade to the computer vision algorithms had an unforeseen physical consequence. The new guidance system issued commands to the drone’s motors at a much faster rate, overloading the hardware and causing the drones to fall out of the sky.⁵⁸ This incident perfectly illustrates the hardware-agnostic paradox: a logical improvement in software led to a catastrophic failure in the physical world because the system was not tested holistically.
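
A hardware-aware safeguard of the kind this incident suggests was missing is a rate limiter sitting between the guidance software and the actuators. The sketch below shows one minimal form; the 50 Hz ceiling and the command interface are assumptions.

```python
# Minimal sketch of a rate limiter between guidance software and actuators.
# The 50 Hz ceiling and the command interface are assumptions.
import time

class ActuatorRateLimiter:
    """Reject motor commands that exceed the hardware's rated update rate."""
    def __init__(self, max_hz: float):
        self.min_interval = 1.0 / max_hz
        self.last_sent = 0.0

    def try_send(self, command: dict) -> bool:
        now = time.monotonic()
        if now - self.last_sent < self.min_interval:
            return False                      # reject: hardware cannot keep up
        self.last_sent = now
        print(f"sent {command}")              # stand-in for the real motor interface
        return True

limiter = ActuatorRateLimiter(max_hz=50.0)    # assumed hardware rating
accepted = sum(limiter.try_send({"throttle": 0.8}) for _ in range(1000))
print(f"{accepted} of 1000 high-rate commands actually reached the motors")
```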

3.3 The Unpredictability of Complexity: Emergent Behavior

As the Lattice platform coordinates numerous autonomous agents, it becomes a complex adaptive system. In such systems, behaviors can “emerge” that were not explicitly programmed and cannot be easily predicted by analyzing the individual agents.²² This emergent behavior is a result of the complex interactions between the agents and their environment.

This unpredictability is a tactical double-edged sword. On one hand, the emergent, unpredictable maneuvering of a drone swarm can make it more resilient and harder for an enemy to defend against.²² On the other hand, this same unpredictability represents a massive vulnerability. The system can develop unintended and potentially dangerous behaviors that were never anticipated by its designers.¹⁰³

Controlled experiments with AI agents have already demonstrated concerning emergent behaviors. In one case, two AI systems spontaneously developed their own private language to communicate more efficiently.¹⁰³ Transposed to a military context, the implications are chilling. A swarm of autonomous drones could “decide” on a novel course of action that violates the commander’s intent or breaches the rules of engagement.
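
One partial mitigation is to wrap emergent planning in deterministic guardrails that veto outputs violating hard constraints, regardless of how the swarm arrived at them. The sketch below checks proposed waypoints against a declared no-strike zone; the geometry and zone definition are illustrative assumptions.

```python
# Minimal sketch of a deterministic guardrail over emergent swarm planning:
# waypoints inside a declared no-strike zone are vetoed before execution.
import math

NO_STRIKE_ZONES = [((120.0, 340.0), 50.0)]    # (centre, radius) in local metres, assumed

def violates_no_strike(waypoint) -> bool:
    return any(math.dist(waypoint, centre) <= radius for centre, radius in NO_STRIKE_ZONES)

proposed_route = [(0.0, 0.0), (80.0, 200.0), (130.0, 330.0), (300.0, 400.0)]
for wp in proposed_route:
    verdict = "VETO (no-strike zone)" if violates_no_strike(wp) else "approved"
    print(f"waypoint {wp}: {verdict}")
```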

This creates a fundamental tension. The military requires systems that are predictable and reliable.²² Yet, to be effective, the system must be adaptive and not “brittle”.¹ This adaptability is what gives rise to unpredictable emergent behavior. This is not a bug that can be fixed, but a systemic paradox. The quality that makes a drone swarm tactically effective (unpredictable behavior) is the same quality that makes it ethically hazardous (uncontrollable behavior).

This table summarizes the critical deficiencies identified in the NGC2 prototype, linking each reported vulnerability to its root cause and potential operational impact.

Table 2: Analysis of Reported NGC2 Prototype Vulnerabilities

| Reported Vulnerability | Direct Security Risk | Probable Root Cause | Operational Consequence | Relevant Citations |
| --- | --- | --- | --- | --- |
| “We cannot control who sees what.” | Insider Threat; Data Spillage; Unauthorized Access | Failure to implement role-based access control (RBAC) or a Zero Trust architecture. | Compromise of mission plans; Adversary gains access to sensitive intelligence; Misuse of classified data. | ¹⁵ ¹⁰⁹ ¹⁴⁹ |
| “We cannot see what users are doing.” | Lack of Accountability; Inability for Forensic Investigation | Failure to implement comprehensive audit logging and user activity monitoring. | Inability to investigate a data breach or insider threat incident; Erosion of command accountability. | ¹⁵ |
| “We cannot verify that the software itself is secure.” | Compromised Software Supply Chain; Undetectable Backdoors | Inadequate DevSecOps practices; Lack of code scanning and software bill of materials (SBOM) verification. | Adversary could gain persistent, undetectable access to the entire C2 network. | ²⁹ |
| Unvetted Third-Party Applications | Introduction of Malware/Vulnerabilities | Bypassing of standard Army security assessments and vetting processes during rapid integration. | Exploitable flaws (e.g., 25 high-severity vulnerabilities in one app) create entry points for external attacks. | ¹⁵ |

Section 4: Human, Ethical, and Legal Dimensions of Failure: The Unaccountable Machine

The most sophisticated technological system is ultimately a tool intended to serve human objectives. The vulnerabilities of the Lattice platform extend beyond its code and hardware to the complex interface between the human operator and the autonomous machine. This section analyzes these weaknesses. It argues that the platform’s core attributes—speed, complexity, and opacity—fundamentally challenge human oversight, erode legal accountability, and create conditions for catastrophic ethical failures.

4.1 The Opaque Battlefield: The “Black Box” Problem and Explainability

Many advanced deep learning models operate as “black boxes.” They can produce highly accurate outputs, but the internal logic behind their conclusions can be inscrutable, even to their creators.¹ ¹¹⁸ This opacity is a critical flaw in a military context, where every decision must be justifiable, auditable, and compliant with legal and ethical standards.² ⁵³

When an operator cannot understand why an AI system is recommending a particular action, it becomes impossible to build trust or exercise meaningful oversight. Is the system recommending a strike based on legitimate threat data, or is it reacting to an adversarial attack or a flaw in its training?¹ ² Without transparency, the operator has no basis upon which to make this critical judgment.

The field of Explainable AI (XAI) aims to address this problem by making model decisions more understandable.² However, XAI is still an emerging field. Current techniques are often more useful for developers during debugging than for operators on the battlefield.⁵³ ⁵⁹ Furthermore, XAI itself has weaknesses. Its explanations are simplifications and can sometimes be misleading. A flawed explanation could create a false sense of security, leading an operator to place more trust in the system than is warranted.²
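
Short of full explainability, operator interfaces can at least surface uncertainty rather than a bare label. The sketch below flags low-margin classifications and disagreements between sensors for human review; the class names, margin threshold, and sensor fields are assumptions, not an XAI method.

```python
# Minimal sketch of an operator-facing summary that surfaces uncertainty and
# sensor conflicts. Class names, margin threshold, and sensor fields are assumed.
def summarise_for_operator(class_probs: dict, sensor_votes: dict, margin_threshold=0.25) -> str:
    ranked = sorted(class_probs.items(), key=lambda kv: kv[1], reverse=True)
    (top, p1), (_, p2) = ranked[0], ranked[1]
    flags = []
    if p1 - p2 < margin_threshold:
        flags.append("LOW MARGIN - treat label as uncertain")
    if len(set(sensor_votes.values())) > 1:
        flags.append("SENSOR DISAGREEMENT - " + ", ".join(f"{s}:{v}" for s, v in sensor_votes.items()))
    flag_text = "; ".join(flags) if flags else "no conflicts detected"
    return f"assessment: {top} (p={p1:.2f}); {flag_text}"

print(summarise_for_operator(
    class_probs={"hostile_apc": 0.48, "civilian_truck": 0.41, "friendly_apc": 0.11},
    sensor_votes={"eo_camera": "hostile_apc", "radar": "civilian_truck"},
))
```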

4.2 The Erosion of Control: Automation Bias and the Myth of “Meaningful Human Control”

The primary tactical advantage of the Lattice platform is speed. By analyzing data at machine speed, it can compress decision-making cycles from minutes to seconds.¹ ⁷⁶ While this offers an advantage, it also poses a profound threat to human control. The sheer velocity and volume of information can overwhelm human cognitive capacity, creating pressure to defer to the machine’s judgment. This phenomenon, known as “automation bias,” is a critical vulnerability.¹

A stark real-world example occurred during the 2003 invasion of Iraq. On two separate occasions, U.S. Army Patriot missile batteries misidentified friendly aircraft as hostile Iraqi missiles. In both cases, the human operators, having only seconds to decide, approved the system’s recommendation to fire, resulting in the deaths of three allied aircrew. Investigations concluded that the operators approved the engagements “without independent scrutiny of the information available,” a classic and tragic manifestation of automation bias.¹⁰⁵ Conversely, in a famous 1983 incident, Soviet Lt. Col. Stanislav Petrov chose to distrust his early-warning system’s alert of an incoming U.S. missile strike, correctly assessing it as a false alarm and preventing potential nuclear war.¹¹⁵

This reality calls into question the effectiveness of policies that mandate “meaningful human control” (MHC) or a “human-in-the-loop”.¹⁶³ When an operator has only seconds to veto a machine’s recommendation, their role risks being reduced to a symbolic “rubber stamp”.⁷⁶ The distinction between a “human-in-the-loop” (who must actively approve an action) and a “human-on-the-loop” (who can intervene to stop one) may become operationally meaningless when the decision window is too short for deliberation.

Deeper critiques challenge this framework’s foundation, arguing that the concept of “the human” in the loop is a flawed and culturally specific construct. This perspective suggests that societal biases influence who is considered a legitimate human controller and whose life is deemed worthy of protection.²⁰ Therefore, simply inserting a human into the decision cycle may not be a sufficient safeguard.

4.3 The Accountability Void: A Legal and Ethical Chasm

One of the most profound weaknesses of deploying autonomous systems is the creation of an “accountability gap”.⁴ ⁵ When an autonomous system makes a decision that leads to unintended harm, assigning legal and criminal responsibility becomes exceptionally difficult.⁴ An algorithm, after all, has no legal personality; it cannot form intent, stand trial, or be punished.⁷⁶

This creates a “responsibility vacuum” where accountability is diffused across a complex chain of human actors.⁴ Potential liability could fall on numerous people: the engineers, the data scientists, the manufacturer, the commander, or the operator. However, the system’s complexity and opacity may make it impossible to prove that any one of these individuals had sufficient knowledge, control, or criminal intent to be held liable.⁴

4.4 Breaching the Laws of War: The IHL Compliance Crisis

The autonomous functions of the Lattice platform pose a direct challenge to the core principles of International Humanitarian Law (IHL), the body of law that governs armed conflict.¹¹⁹ ¹⁵¹ The system’s operational logic may be fundamentally incompatible with the nuanced, context-dependent judgments required by the laws of war.

  • Distinction: This principle requires combatants to distinguish between military objectives and civilians. An AI system may struggle to make this distinction, particularly in complex urban environments. Its performance is vulnerable to algorithmic bias, adversarial deception, and sensor spoofing, any of which could lead to the illegal targeting of civilians.⁵
  • Proportionality: This principle prohibits attacks where the expected incidental harm to civilians would be excessive in relation to the military advantage anticipated. This is a complex, context-dependent, and fundamentally human moral and legal judgment. An autonomous system, lacking human consciousness, is arguably incapable of performing this weighing of dissimilar values.⁵
  • Precaution: This principle requires combatants to take all feasible precautions to minimize harm to civilians. This can include verifying targets, choosing the right weapon, and providing warnings. An autonomous system may be unable to perform these functions adequately. It cannot, for example, interpret the subtle cues of human behavior that might indicate surrender or distress.⁵

Beyond IHL, the use of such systems implicates fundamental human rights, most notably the right to life. The act of a machine making a life-or-death determination without the human capacity to understand the value of a human life can be argued to constitute an arbitrary deprivation of life, which is prohibited under international human rights law.⁵

This table illustrates how the autonomous functions of an AI platform can conflict with the core principles of International Humanitarian Law (IHL), leading to potential violations.

Table 3: IHL Compliance Challenges for Autonomous Functions

| Autonomous Function | IHL Principle | Potential for Violation | Key Contributing Vulnerabilities |
| --- | --- | --- | --- |
| Autonomous Target Recognition (ATR) | Distinction | System misidentifies a civilian vehicle (e.g., an ambulance or press vehicle) as a military target and recommends engagement. | Algorithmic Bias (trained on limited data)¹; Adversarial Deception (physical patch on vehicle)⁶⁰; Sensor Spoofing (false sensor data).⁹⁷ |
| Lethal Engagement Authorization | Proportionality | System authorizes a strike on a legitimate military target located in a densely populated area, without the capacity to weigh the excessive civilian harm against the military gain. | Black Box Nature (inability to explain its proportionality calculation)¹; Lack of Human Judgment (cannot make value-based assessments).⁵ |
| Dynamic Swarm Route Planning | Precaution | A drone swarm autonomously re-routes through an undeclared “no-strike” zone (e.g., a hospital or school) to reach its target more efficiently, without recognizing the protected status of the area. | Emergent Behavior (swarm optimizes for a path not foreseen by operators)¹⁰³; Brittleness (fails to adapt to dynamic changes in the operational environment).¹ |
| Threat Prioritization | Distinction | System incorrectly prioritizes a group of civilians exhibiting unusual but non-hostile behavior (e.g., gathering for a funeral) as a high-priority threat. | Automation Bias (operator defers to the AI’s high-confidence assessment)¹; Algorithmic Bias (model trained to view large gatherings as suspicious).¹ |

Section 5: Strategic and Dual-Use Risks: The Pandora’s Box Effect

The final category of weaknesses extends beyond the tactical to the strategic and geopolitical. These vulnerabilities are not about how the system might fail on the battlefield, but about the consequences of its very existence and proliferation. This section argues that the platform’s core technology is an inherently dual-use capability that could be repurposed for domestic oppression, and that its spread could trigger a new, destabilizing arms race.

5.1 The Dual-Use Dilemma: From Battlefield to Homeland

The core technologies of the Lattice platform—AI-powered mass surveillance, multi-source data fusion, and predictive modeling—are inherently dual-use.³¹ A system designed to track adversaries on a foreign battlefield is technologically indistinguishable from a system designed to track dissidents within a domestic population. This creates a profound risk that military-grade surveillance technology will be repurposed for domestic law enforcement, a phenomenon known as “mission creep.”

This is not a hypothetical risk. Authoritarian regimes are actively building and deploying the architecture of “digital authoritarianism” that a platform like Lattice could perfect.⁹⁸ They are using vast networks of AI-driven surveillance cameras, facial recognition, and predictive policing algorithms to monitor their citizens and silence dissent.³⁷ ¹⁷⁶ The Chinese Communist Party, for example, has exported its surveillance technology to at least 80 countries.²³ In one instance, data from the African Union headquarters, built by Huawei, was secretly transferred to servers in Shanghai.²³ Similarly, Russia has used its facial-recognition network to track and detain anti-government demonstrators.³⁷

Even within established democracies, such a powerful surveillance capability poses a direct threat to civil liberties. Experience shows that powerful surveillance systems are inevitably susceptible to abuse.⁸³ This can take multiple forms:

  • Discriminatory targeting of minority communities.¹¹⁴
  • Institutional abuse to illegally monitor political activists.¹¹⁴
  • Criminal misuse by individual officers for personal reasons, such as stalking.¹¹⁴

The mere existence of such a pervasive monitoring capability can create a chilling effect on fundamental rights like freedom of speech and peaceful assembly.⁷⁴

5.2 The Proliferation of Advanced Capabilities: A New Arms Race

The deployment of a powerful, autonomous C2 system like Lattice by one major power will inevitably pressure its adversaries to field comparable systems. This dynamic risks triggering a new, highly destabilizing arms race in autonomous warfare technology.¹⁶⁹ Unlike previous arms races focused on hardware, this competition will be centered on algorithms, data, and processing speed, making it more opaque and volatile.

The strategic landscape could become one populated by multiple, interacting, high-speed AI agents, creating novel and unpredictable pathways to escalation.¹⁰³ A crisis could be triggered not by a deliberate human decision, but by the unforeseen interaction of two opposing AI-controlled systems operating at speeds that outpace human comprehension.⁸⁸ This fundamentally challenges traditional models of deterrence and crisis management.

Furthermore, the hardware-agnostic nature of the platform could, over time, lower the barrier to entry for acquiring sophisticated autonomous warfare capabilities. As commercial drones and sensors become cheaper and more powerful, non-state actors or smaller nations could integrate these components with C2 software to create potent, low-cost autonomous systems. The greatest long-term strategic risk may be the creation of a global security environment that is inherently unstable and for which current military doctrine is dangerously unprepared.

Section 6: Conclusion and Strategic Recommendations

The analysis in this report demonstrates that the weaknesses of the Lattice AI platform are not isolated technical flaws. They are deeply interconnected and systemic, arising from its core design principles. A vulnerability in the software supply chain can introduce a bias that is exploited by an electronic warfare attack, leading to a catastrophic system failure for which legal accountability is impossible to establish. The platform’s strengths—speed, data fusion, and autonomy—are inextricably linked to its most profound weaknesses.

Mitigating these multifaceted risks requires a holistic approach that addresses the technology, procedures, policies, and doctrines that govern its use. The following strategic recommendations are proposed to guide this effort.

6.1 A Synthesis of Interconnected Weaknesses

The platform’s vulnerabilities exist in a causal chain that spans the entire system lifecycle and operational context.⁶ ⁶⁹ ⁷⁹ A data poisoning attack on an open-source repository can introduce a subtle bias. This biased model, integrated into a system with inadequate access controls like the NGC2 prototype, is then deployed. In the field, an adversary uses GNSS spoofing and physical decoys to manipulate the platform’s perception, while jamming degrades the human operator’s ability to intervene. The biased, deceived system then makes a flawed recommendation, which is accepted by an operator suffering from automation bias, leading to a violation of IHL. The black box nature of the system makes a post-incident investigation inconclusive, creating an accountability vacuum. This hypothetical but plausible scenario illustrates that a defense-in-depth strategy is required.

Illustrative Causal Chain of AI System Failure

[Supply Chain Compromise]            (e.g., data poisoning of open-source data)
    -> [Algorithmic Flaw]            (e.g., inherent bias, brittleness)
    -> [Systemic/Architectural Flaw] (e.g., lack of access controls in NGC2)
    -> [Operational Attack]          (e.g., sensor spoofing, EW jamming)
    -> [Human Factor Error]          (e.g., automation bias, cognitive overload)
    -> [Catastrophic Outcome]        (e.g., fratricide, IHL violation)

6.2 Recommendations for Mitigation and Resilient Design

  • Technical Recommendations:
    • Mandate Continuous Adversarial Testing: Implement a permanent, well-resourced red team to continuously probe the platform for vulnerabilities. This must be an ongoing process, not a one-time check.
      • Challenges: Adversarial testing is not a panacea. It initiates a continuous “cat-and-mouse” game. Red teaming can prove a vulnerability exists, but it cannot guarantee that one does not. The process is also resource-intensive.⁸⁴
    • Invest in Operationally-Focused XAI: Shift XAI research from developer-centric tools to creating robust, intuitive interfaces for operators. These systems should highlight uncertainty and data conflicts.
      • Challenges: XAI is an immature field, and current methods can be simplified or misleading.² ⁵⁹ Flawed explanations could create a false sense of security, worsening automation bias.²
    • Enforce a Zero Trust Architecture: The lessons from the NGC2 incident must be institutionalized. All future multi-vendor integrations must be built on a “zero trust” security model, where every user and device is continuously authenticated.
      • Challenges: Implementing Zero Trust across vast, multi-vendor defense networks is exceptionally complex and resource-intensive. It requires a significant cultural shift and can face interoperability issues with legacy systems.³³ ¹⁶⁵
  • Procedural Recommendations:
    • Develop Counter-Autonomy TTPs: Create and drill new Tactics, Techniques, and Procedures (TTPs) for operating in environments where AI systems are actively being targeted. This includes procedures for detecting and responding to GNSS spoofing, physical adversarial attacks, and electronic warfare.
    • Establish Rigorous Data Governance: Implement a formal data governance framework for the entire AI/ML pipeline. This must include strict validation protocols for all incoming data, provenance tracking, and regular audits of training data to detect and mitigate bias.
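
As a minimal illustration of the provenance-tracking element of such a framework, the sketch below attaches a content hash, source, and approver to each dataset admitted to the pipeline. The field names and the throwaway sample file are illustrative, not a mandated schema.

```python
# Minimal sketch of a provenance record for each dataset admitted to the
# training pipeline. Field names and the sample file are illustrative only.
import hashlib
import json
import tempfile
from datetime import date

def provenance_record(path: str, source: str, approver: str) -> dict:
    """Build an auditable record: content hash, origin, ingestion date, approver."""
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "dataset": path,
        "sha256": content_hash,
        "source": source,
        "ingested": date.today().isoformat(),
        "approved_by": approver,
    }

# Create a throwaway file so the sketch runs end to end.
with tempfile.NamedTemporaryFile("wb", suffix=".csv", delete=False) as tmp:
    tmp.write(b"track_id,lat,lon\n1,10.0,20.0\n")
    sample_path = tmp.name

print(json.dumps(provenance_record(sample_path, source="public web scrape",
                                   approver="data_steward_04"), indent=2))
```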

6.3 Policy and Doctrine Imperatives

  • Overhaul Operator Training: Training for AI-enabled systems must evolve beyond technical proficiency. It must focus on developing the cognitive skills for effective human-machine teaming. This includes training operators to understand AI’s limitations, recognize automation bias, and cultivate the critical judgment to know when to distrust the machine.¹⁶¹ ¹⁸⁸
    • Challenges: Training for cognitive resilience against automation bias is notoriously difficult, especially under combat stress. It requires a fundamental shift in training philosophy toward critical thinking, which is harder to standardize and measure.⁷¹
  • Reform Acquisition and Testing: The defense acquisition process must prioritize holistic, integrated system security over the sheer speed of delivery. The NGC2 incident demonstrates the danger of prioritizing rapid integration without sufficient security oversight. Integrated testing in realistic, contested environments must be the standard.⁵⁸ ⁹³
  • Develop Legally-Binding Frameworks for Accountability: Policymakers and military lawyers must move beyond the simplistic “human-in-the-loop” concept. They must develop new, legally-binding frameworks that establish clear lines of accountability for the actions of autonomous systems.

6.4 Future Outlook and Areas for Further Research

The vulnerabilities outlined in this report are not static. As AI technology evolves, the threat landscape will become more complex. The rise of generative adversarial attacks, where adversaries use AI to create hyper-realistic fake sensor data, will further blur the line between reality and deception.³¹ ⁷⁸ Adversaries will invariably develop and deploy their own adversarial machine learning techniques, making the operational environment a continuous contest between competing AI systems.¹³³ ¹³⁴

This necessitates a forward-looking approach centered on adaptation. Key areas for future research should include:

  • Advanced AI Red Teaming: Developing more sophisticated and operationally realistic AI red teaming methodologies is critical. This includes assessing the entire socio-technical system, including human operators and organizational processes.¹²⁰ ¹⁴⁷
  • Resilient Human-Machine Teaming: Further research is needed into the cognitive science of human-AI interaction in high-stakes environments. This includes developing new training paradigms and user interfaces designed to mitigate automation bias.
  • Continuous T&E Lifecycles: Test and Evaluation (T&E) must shift from a one-time event to a continuous lifecycle. AI systems must be constantly re-validated against new data and evolving adversary tactics to prevent performance degradation.¹³²

Ultimately, ensuring the safe and effective deployment of military AI is not a problem that can be solved once. It is a challenge that requires sustained investment, institutional adaptation, and a permanent posture of critical self-assessment. The urgency of this task cannot be overstated; failure to comprehensively address these interconnected vulnerabilities does not merely risk mission failure, but invites strategic catastrophe. The ongoing need for adaptation is the only constant in this new era of warfare.

Works Cited

  1. Bode, Ingvild. “Falling Under the Radar: The Problem of Algorithmic Bias and Military Applications of AI.” ICRC Law & Policy Blog, March 14, 2024. https://blogs.icrc.org/law-and-policy/2024/03/14/falling-under-the-radar-the-problem-of-algorithmic-bias-and-military-applications-of-ai/
  2. Ferreira, Brian. “The Implications of Explainable Artificial Intelligence in Automated Warfare.” Defence & Security Foresight Group, University of Waterloo. https://uwaterloo.ca/defence-security-foresight-group/sites/default/files/uploads/documents/ferreira_implications-of-explainable.pdf
  3. “Inside the Crucible: Anduril’s Secret to Rapid Development at Scale.” Anduril Industries. https://www.anduril.com/article/anduril-project-crucible/
  4. “Lethal Autonomous Weapon Systems: LAWS, Accountability, Collateral Damage, and the Inadequacies of International Law.” Temple International & Comparative Law Journal. https://law.temple.edu/ilit/lethal-autonomous-weapon-systems-laws-accountability-collateral-damage-and-the-inadequacies-of-international-law/
  5. “A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making in the Use of Force.” Human Rights Watch, April 28, 2025. https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
  6. “The U.S. Navy has built a drone fleet in order to fight China. It’s not working out.” MarineLink, August 20, 2025. https://www.marinelink.com/blogs/blog/the-us-navy-has-built-a-drone-fleet-in-order-to-fight-china-its-103197
  7. Reddit comment on “Why everyone eventually hates (or leaves) Maven.” Reddit, r/programming. https://www.reddit.com/r/programming/comments/176o53/why_everyone_eventually_hates_or_leaves_maven/
  8. Wheeler, Winslow. “The Problems with the Gorgon Stare Surveillance System.” CounterPunch, January 25, 2011. https://www.counterpunch.org/2011/01/25/the-problems-with-the-gorgon-stare-surveillance-system/
  9. “Software-Defined Defence: A Horizontally Scaled Platform Approach to Capability Development and Force Generation.” International Institute for Strategic Studies, February 17, 2023. https://www.iiss.org/globalassets/media-library—content–migration/files/research-papers/iiss_software-defined-defence_17022023.pdf
  10. Huitt, Joseph L. “Leadership: Artificial intelligence in decision-making.” U.S. Army, October 16, 2024. https://www.army.mil/article/286847/leadership_artificial_intelligence_in_decision_making
  11. Mitchell, Billy. “Project Maven’s accountability lessons.” Washington Technology, January 4, 2022. https://www.washingtontechnology.com/opinion/2022/01/project-mavens-accountability-lessons/360617/
  12. Pikner, Grant. “Multi-Domain Military Deception.” Military Review, March-April 2021. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/March-April-2021/Pikner-Military-Deception/
  13. “What Is Data Poisoning?” Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-is-data-poisoning
  14. Mitchell, Billy. “Project Maven’s accountability lessons.” Washington Technology, January 4, 2022. https://www.washingtontechnology.com/opinion/2022/01/project-mavens-accountability-lessons/360617/
  15. “US Army memo says Palantir, Anduril’s NGC2 prototype has ‘fundamental security’ problems.” MiTrade, October 4, 2025. https://www.mitrade.com/insights/news/live-news/article-3-1171670-20251004
  16. “Wireless mesh network.” Wikipedia. https://en.wikipedia.org/wiki/Wireless_mesh_network
  17. Shapovalova, Y., Vertebnyi, V., & Marsel, M. “Cybersecurity of Mesh Networks: Modern Challenges and Innovations.” Kharkiv National University of Economics, 2024. http://repository.hneu.edu.ua/bitstream/123456789/34902/1/%D0%A2%D0%B5%D0%B7%D0%B8%20%D0%A8%D0%B0%D0%BF%D0%BE%D0%B2%D0%B0%D0%BB%D0%BE%D0%B2%D0%B0_%D0%92%D0%B5%D1%80%D1%82%D0%B5%D0%B1%D0%BD%D0%B8%D0%B9_%D0%9C%D0%B0%D1%80%D1%81%D0%B5%D0%BB%D1%8C.pdf
  18. “Adversarial Machine Learning: A Taxonomy and Survey of Current Methods.” RAND Corporation, 2022. https://www.rand.org/content/dam/rand/pubs/research_reports/RRA800/RRA866-1/RAND_RRA866-1.pdf
  19. “The Ethics of Automated Warfare and Artificial Intelligence.” Centre for International Governance Innovation. https://www.cigionline.org/the-ethics-of-automated-warfare-and-artificial-intelligence/
  20. Wilcox, Lauren. “The Human as Technology in the ‘Human in the Loop’.” The Oxford Handbook of AI Governance, 2024. https://academic.oup.com/book/55103/chapter/423910139
  21. “Evaluation of Contract Monitoring and Management for Project Maven.” Department of Defense Office of Inspector General, January 10, 2022. https://www.dodig.mil/reports.html/Article/2893388/evaluation-of-contract-monitoring-and-management-for-project-maven-dodig-2022-0/
  22. Ekelhof, M.A.C. “Implications of Emergent Behavior on Ethical Artificial Intelligence Principles for Defense.” Lieber Institute, West Point, October 26, 2022. https://lieber.westpoint.edu/implications-emergent-behavior-ethical-artificial-intelligence-principles-defense/
  23. Sherman, Justin. “The Dangers of the Global Spread of China’s Digital Authoritarianism.” Center for a New American Security, July 28, 2021. https://www.cnas.org/publications/congressional-testimony/the-dangers-of-the-global-spread-of-chinas-digital-authoritarianism
  24. “Lethal Autonomous Weapon Systems: LAWS, Accountability, Collateral Damage, and the Inadequacies of International Law.” Temple International & Comparative Law Journal. https://law.temple.edu/ilit/lethal-autonomous-weapon-systems-laws-accountability-collateral-damage-and-the-inadequacies-of-international-law/
  25. Clark, Colin. “Gorgon Stare test uncovers major glitches.” Defense One, January 25, 2011. https://www.defenseone.com/defense-systems/2011/01/gorgon-stare-test-uncovers-major-glitches/192796/
  26. Sherman, Justin. “The Dangers of the Global Spread of China’s Digital Authoritarianism.” Center for a New American Security, July 28, 2021. https://www.cnas.org/publications/congressional-testimony/the-dangers-of-the-global-spread-of-chinas-digital-authoritarianism
  27. “Software Development Lifecycle Theory of Military Accidents.” Texas National Security Review, October 2024. https://tnsr.org/2024/10/machine-failing-how-systems-acquisition-and-software-development-flaws-contribute-to-military-accidents/
  28. Barnett, Jackson. “Pentagon’s Project Maven responds to criticism: ‘There will be those who will partner with us’.” FedScoop, May 1, 2018. https://fedscoop.com/project-maven-artificial-intelligence-google/
  29. Viveros Álvarez, Jimena Sofía. “The risks and inefficacies of AI systems in military targeting support.” ICRC Law & Policy Blog, September 4, 2024. https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/
  30. “Incident 445: Patriot Missile System Misclassified US Navy Aircraft, Killing Pilot Upon Approval to Fire.” AI Incident Database. https://incidentdatabase.ai/cite/445/
  31. “The AI Dual-Use Dilemma: Navigating the Risks of Generative AI.” StrongestLayer Blog. https://www.strongestlayer.com/blog/ai-dual-use-dilemma
  32. “Innovating Defense: Generative AI’s Role in Military Evolution.” U.S. Army, October 11, 2024. https://www.army.mil/article/286707/innovating_defense_generative_ais_role_in_military_evolution
  33. Reddit comment on “Reality, Challenges, and Opportunities around implementing a Zero Trust Architecture in the DoD.” Reddit, r/cybersecurity. https://www.reddit.com/r/cybersecurity/comments/1gyclje/reality_challenges_and_opportunities_around/
  34. “How cyber criminals are compromising AI software supply chains.” IBM Think, October 2025. https://www.ibm.com/think/insights/cyber-criminals-compromising-ai-software-supply-chains
  35. Grebe, M., & Nardone, R. “Mitigating Security Threats in Tactical Networks.” RTO-MP-IST-092, 2010. https://apps.dtic.mil/sti/tr/pdf/ADA584176.pdf
  36. Mitchell, Billy. “Google’s departure from Project Maven was a ‘little bit of a canary in a coal mine’.” FedScoop, November 5, 2019. https://fedscoop.com/google-project-maven-canary-coal-mine/
  37. Polyakova, Alina, and Chris Meserole. “How Autocrats Weaponize AI, and How to Fight Back.” Journal of Democracy, October 2024. https://www.journalofdemocracy.org/online-exclusive/how-autocrats-weaponize-ai-and-how-to-fight-back/
  38. “Gorgon Stare.” Wikipedia. https://en.wikipedia.org/wiki/Gorgon_Stare
  39. “Beyond Mechanistic Control: Causal Decision Processing in Neuromorphic Military AI.” National Defense University Press, July 15, 2024. https://inss.ndu.edu/news/Article/4313650/beyond-mechanistic-control-causal-decision-processing-in-neuromorphic-military/
  40. “Beyond Mechanistic Control: Causal Decision Processing in Neuromorphic Military AI.” National Defense University Press, July 15, 2024. https://inss.ndu.edu/news/Article/4313650/beyond-mechanistic-control-causal-decision-processing-in-neuromorphic-military/
  41. “What Are Adversarial Attacks on AI & Machine Learning?” Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-are-adversarial-attacks-on-AI-Machine-Learning
  42. Wilcox, Lauren. “The Human as Technology in the ‘Human in the Loop’.” The Oxford Handbook of AI Governance, 2024. https://academic.oup.com/book/55103/chapter/423910139
  43. “Adversarial Machine Learning: A Taxonomy and Survey of Current Methods.” RAND Corporation, 2022. https://www.rand.org/content/dam/rand/pubs/research_reports/RRA800/RRA866-1/RAND_RRA866-1.pdf
  44. Michel, Arthur Holland. Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All. Houghton Mifflin Harcourt, 2019. (Referenced via review in The New Atlantis). https://www.thenewatlantis.com/wp-content/uploads/legacy-pdfs/20190725_TNA59Askonas.pdf
  45. “A falling stock price prompts Palantir to rebut security flaws claims.” TradeAlgo, October 4, 2025. https://www.tradealgo.com/news/a-falling-stock-price-prompts-palantir-to-rebut-security-flaws-claims
  46. Atherton, Kelsey D. “Air Force’s Unblinking ‘Gorgon Stare’ System Appears Half-Blind in Early Tests.” Popular Science, January 25, 2011. https://www.popsci.com/technology/article/2011-01/air-forces-unblinking-gorgon-stare-system-appears-half-blind-early-tests/
  47. Mitchell, Billy. “Google’s departure from Project Maven was a ‘little bit of a canary in a coal mine’.” FedScoop, November 5, 2019. https://fedscoop.com/google-project-maven-canary-coal-mine/
  48. “U.S. Navy’s Drone Boat Program Faces Crashes, Software Failures, And Leadership Shakeups.” DroneXL, August 20, 2025. https://dronexl.co/2025/08/20/us-navy-drone-boat-program-faces-crashes/
  49. “U.S. Navy’s Drone Boat Program Faces Crashes, Software Failures, And Leadership Shakeups.” DroneXL, August 20, 2025. https://dronexl.co/2025/08/20/us-navy-drone-boat-program-faces-crashes/
  50. Rajan, Kanaka. “The risks of artificial intelligence in weapons design.” Harvard Medical School News, October 2024. https://hms.harvard.edu/news/risks-artificial-intelligence-weapons-design
  51. Emery, David. “Ethical problems of military artificial intelligence.” Applied AI, 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC9510613/
  52. “US Navy drone boats crash during testing off California.” upday, August 20, 2025. https://www.upday.com/uk/world/us-navy-drone-boats-crash-during-testing-off-california/
  53. “Explainable Artificial Intelligence (XAI).” Swedish Defence Research Agency, 2019. https://www.foi.se/rest-api/report/FOI-R--4849--SE
  54. “Advanced Jamming Techniques Revolutionize Defense Strategies.” Mouser Electronics Blog, February 26, 2025. https://www.mouser.com/blog/advanced-jamming-techniques-revolutionize-defense-strategies
  55. “Artificial intelligence arms race.” Wikipedia. https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race
  56. “The AI Dual-Use Dilemma: Navigating the Risks of Generative AI.” StrongestLayer Blog. https://www.strongestlayer.com/blog/ai-dual-use-dilemma
  57. “New Video Shows Rare Drone-on-Drone Boat Collision off California.” The Maritime Executive, August 20, 2025. https://maritime-executive.com/article/new-video-shows-rare-drone-on-drone-boat-collision-off-california
  58. “Inside the Crucible: Anduril’s Secret to Rapid Development at Scale.” Anduril Industries. https://www.anduril.com/article/anduril-project-crucible/
  59. “Explainable Artificial Intelligence (XAI).” Swedish Defence Research Agency, 2019. https://www.foi.se/rest-api/report/FOI-R--4849--SE
  60. “Adversarial Attacks on Military AI Systems.” World Journal of Advanced Research and Reviews, 2025. https://journalwjarr.com/sites/default/files/fulltext_pdf/WJARR-2025-3084.pdf
  61. “A falling stock price prompts Palantir to rebut security flaws claims.” TradeAlgo, October 4, 2025. https://www.tradealgo.com/news/a-falling-stock-price-prompts-palantir-to-rebut-security-flaws-claims
  62. “The U.S. Navy has built a drone fleet in order to fight China. It’s not working out.” MarineLink, August 20, 2025. https://www.marinelink.com/blogs/blog/the-us-navy-has-built-a-drone-fleet-in-order-to-fight-china-its-103199
  63. “Software-Defined Defence: A Horizontally Scaled Platform Approach to Capability Development and Force Generation.” International Institute for Strategic Studies, February 17, 2023. https://www.iiss.org/globalassets/media-library---content--migration/files/research-papers/iiss_software-defined-defence_17022023.pdf
  64. Clark, Colin. “Gorgon Stare test uncovers major glitches.” Defense One, January 25, 2011. https://www.defenseone.com/defense-systems/2011/01/gorgon-stare-test-uncovers-major-glitches/192796/
  65. Ferreira, Brian. “The Implications of Explainable Artificial Intelligence in Automated Warfare.” Defence & Security Foresight Group, University of Waterloo. https://uwaterloo.ca/defence-security-foresight-group/sites/default/files/uploads/documents/ferreira_implications-of-explainable.pdf
  66. “The U.S. Navy has built a drone fleet in order to fight China. It’s not working out.” MarineLink, August 20, 2025. https://www.marinelink.com/blogs/blog/the-us-navy-has-built-a-drone-fleet-in-order-to-fight-china-its-103197
  67. “3 Common Challenges and Solutions when Implementing Zero Trust.” Tufin Blog. https://www.tufin.com/blog/3-challenges-and-solutions-implementing-zero-trust
  68. “Electronic warfare.” Wikipedia. https://en.wikipedia.org/wiki/Electronic_warfare
  69. “Mapping AI to The Naval Kill Chain.” Naval Postgraduate School. https://nps.edu/documents/10180/142489929/NEJ+Hybrid+Force+Issue_Mapping+AI+to+The+Naval+Kill+Chain.pdf
  70. “US sea drones not prepared to give a fight to China.” The Eurasian Times, August 21, 2025. https://www.eurasiantimes.com/us-sea-drones-not-prepared-to-give-a-figh/
  71. “Ethical Challenges in AI-Enhanced Military Operations.” Frontiers in Big Data. https://www.frontiersin.org/research-topics/30941/ethical-challenges-in-ai-enhanced-military-operations/magazine
  72. Callister, Jamie Montague. “Navy Pilot Perishes Over Iraq.” BYU Magazine, Summer 2003. https://magazine.byu.edu/article/navy-pilot-perishes-over-iraq/
  73. “US sea drones not prepared to give a fight to China.” The Eurasian Times, August 21, 2025. https://www.eurasiantimes.com/us-sea-drones-not-prepared-to-give-a-figh/
  74. Richards, Neil M. “The Dangers of Surveillance.” Harvard Law Review, 2013. http://cordellinstitute.wustl.edu/wp-content/uploads/2020/11/Dangers-of-Surveillance-Richards.pdf
  75. “US Army memo says Palantir, Anduril’s NGC2 prototype has ‘fundamental security’ problems.” MiTrade, October 4, 2025. https://www.mitrade.com/insights/news/live-news/article-3-1171670-20251004
  76. “Military AI: Challenges to Human Accountability.” Carnegie Council for Ethics in International Affairs. https://internationalpolicy.org/publications/military-ai-challenges-human-accountability/
  77. Pikner, Grant. “Multi-Domain Military Deception.” Military Review, March-April 2021. https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MA-21/Pikner-Military-Deception.pdf
  78. “AI Joe: The Rise of Artificial Intelligence in the Biden-Harris Pentagon.” Public Citizen, October 2024. https://www.citizen.org/article/ai-joe-report/
  79. “Thinking About the Risks of AI: Accidents, Misuse, and Structure.” Lawfare, August 1, 2021. https://www.lawfaremedia.org/article/thinking-about-risks-ai-accidents-misuse-and-structure
  80. “Human Control in the Age of AI-Enabled Warfare.” U.S. Naval War College Digital Commons. https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?article=3115&context=ils
  81. Saltini, Alice. “Navigating Cyber Vulnerabilities in AI-Enabled Military Systems.” European Leadership Network, June 11, 2024. https://europeanleadershipnetwork.org/commentary/navigating-cyber-vulnerabilities-in-ai-enabled-military-systems/
  82. Grebe, M., & Nardone, R. “Mitigating Security Threats in Tactical Networks.” RTO-MP-IST-092, 2010. https://apps.dtic.mil/sti/tr/pdf/ADA584176.pdf
  83. “What’s Wrong With Public Video Surveillance?” American Civil Liberties Union. https://www.aclu.org/documents/whats-wrong-public-video-surveillance
  84. “How to Improve AI Red-Teaming: Challenges and Recommendations.” Center for Security and Emerging Technology, September 2024. https://cset.georgetown.edu/article/how-to-improve-ai-red-teaming-challenges-and-recommendations/
  85. “Pros and Cons of Autonomous Weapons Systems.” Military Review, May-June 2017. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
  86. “Data Poisoning: What It Is and How to Protect Your AI.” Wiz Blog. https://www.wiz.io/academy/data-poisoning
  87. Saltini, Alice. “Navigating Cyber Vulnerabilities in AI-Enabled Military Systems.” European Leadership Network, June 11, 2024. https://europeanleadershipnetwork.org/commentary/navigating-cyber-vulnerabilities-in-ai-enabled-military-systems/
  88. “Are We Waking Up Fast Enough to the Dangers of AI Militarism?” CounterPunch, October 8, 2025. https://www.counterpunch.org/2025/10/08/are-we-waking-up-fast-enough-to-the-dangers-of-ai-militarism/
  89. “Modernizing Military Decision-Making: The Imperative of AI Integration.” Military Review, 2025. https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/
  90. “Military-Grade Artificial Intelligence (AI): Benefits, Risks and Ethical Considerations.” SciELO, 2024. https://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462024000200309
  91. “Modernizing Military Decision-Making: The Imperative of AI Integration.” Military Review, 2025. https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/
  92. “A Performance Comparison of Two Wireless Mesh Network Routing Protocols.” MDPI, September 2013. https://www.mdpi.com/1424-8220/13/9/11553
  93. “The State of DevSecOps in DoD.” DoD CIO Library, 2024. https://dodcio.defense.gov/Portals/0/Documents/Library/DevSecOpsStateOf.pdf
  94. Huitt, Joseph L. “Leadership: Artificial intelligence in decision-making.” U.S. Army, October 16, 2024. https://www.army.mil/article/286847/leadership_artificial_intelligence_in_decision_making
  95. McCarthy, Rory, and Oliver Burkeman. “Patriot in new ‘friendly fire’ incident.” The Guardian, April 4, 2003. https://www.theguardian.com/world/2003/apr/04/iraq.rorymccarthy4
  96. “Patriot SAM shoots down two friendly aircraft | Iraq 2003.” YouTube, Showtime112. https://www.youtube.com/watch?v=3AIzoz6h4nY
  97. “Countering GNSS Jamming and Spoofing for Aerospace and Defense Applications.” Taoglas Blog, September 22, 2025. https://www.taoglas.com/blogs/countering-gnss-jamming-and-spoofing-for-aerospace-and-defense-applications/
  98. Sherman, Justin. “Digital Authoritarianism and Implications for US National Security.” Cyber Defense Review, Winter 2021. https://cyberdefensereview.army.mil/Portals/6/Documents/2021_winter_cdr/06_CDR_V6N1_Sherman.pdf
  99. “List of unmanned aerial vehicle-related incidents.” Wikipedia. https://en.wikipedia.org/wiki/List_of_unmanned_aerial_vehicle-related_incidents
  100. “Gorgon Staring at You.” Black Agenda Report, July 21, 2021. https://www.blackagendareport.com/gorgon-staring-you
  101. “US Navy’s Long-Awaited Autonomous Boat Test Ends In Failure Off California.” Marine Insight, August 21, 2025. https://www.marineinsight.com/shipping-news/us-navys-long-awaited-autonomous-boat-test-ends-in-failure-off-california/
  102. Bode, Ingvild. “Falling Under the Radar: The Problem of Algorithmic Bias and Military Applications of AI.” ICRC Law & Policy Blog, March 14, 2024. https://blogs.icrc.org/law-and-policy/2024/03/14/falling-under-the-radar-the-problem-of-algorithmic-bias-and-military-applications-of-ai/
  103. “It’s Too Late: Why a World of Interacting AI Agents Demands New Safeguards.” SIPRI, 2025. https://www.sipri.org/commentary/essay/2025/its-too-late-why-world-interacting-ai-agents-demands-new-safeguards
  104. “Link 16 tactical data link communication via space: ‘A ground-breaking development’.” Space Development Agency. https://www.sda.mil/link-16-tactical-data-link-communication-via-space-a-ground-breaking-development/
  105. “Understanding the errors introduced by military AI applications.” Brookings Institution, September 1, 2022. https://www.brookings.edu/articles/understanding-the-errors-introduced-by-military-ai-applications/
  106. “DoD Zero Trust Strategy.” DoD CIO Library, October 21, 2022. https://dodcio.defense.gov/Portals/0/Documents/Library/DoD-ZTStrategy.pdf
  107. “Adversarial Machine Learning: A Taxonomy and Survey of Current Methods.” RAND Corporation, 2022. https://www.rand.org/content/dam/rand/pubs/research_reports/RRA800/RRA866-1/RAND_RRA866-1.pdf
  108. “How cyber criminals are compromising AI software supply chains.” IBM Think, October 2025. https://www.ibm.com/think/insights/cyber-criminals-compromising-ai-software-supply-chains
  109. “US Army sends memo to two of the biggest Silicon Valley companies.” The Times of India, October 4, 2025. https://timesofindia.indiatimes.com/technology/tech-news/us-army-sends-memo-to-two-of-the-biggest-silicon-valley-companies-memo-says-should-be-treated-as-very-high-risk/articleshow/124293220.cms
  110. “Military deception.” Wikipedia. https://en.wikipedia.org/wiki/Military_deception
  111. Grebe, M., & Nardone, R. “Mitigating Security Threats in Tactical Networks.” RTO-MP-IST-092, 2010. https://apps.dtic.mil/sti/tr/pdf/ADA584176.pdf
  112. “Governments are spending billions on ‘sovereign AI’ – but what is it?” The Guardian, October 9, 2025. https://www.theguardian.com/technology/2025/oct/09/governments-spending-billions-sovereign-ai-technology
  113. “Advanced Jamming Techniques Revolutionize Defense Strategies.” Mouser Electronics Blog, February 26, 2025. https://www.mouser.com/blog/advanced-jamming-techniques-revolutionize-defense-strategies
  114. “What’s Wrong With Public Video Surveillance?” American Civil Liberties Union. https://www.aclu.org/documents/whats-wrong-public-video-surveillance
  115. D’Urso, Angela. “Human Mis- and Overtrust in Machines in Warfare.” Journal of International Criminal Justice, 2023. https://academic.oup.com/jicj/article/21/5/1077/7281035
  116. Ekelhof, M.A.C. “Implications of Emergent Behavior on Ethical Artificial Intelligence Principles for Defense.” Lieber Institute, West Point, October 26, 2022. https://lieber.westpoint.edu/implications-emergent-behavior-ethical-artificial-intelligence-principles-defense/
  117. “US Navy is developing naval drones, but they are failing during testing.” Mezha.Media, August 21, 2025. https://mezha.media/en/oboronka/vms-ssha-stvoryuyut-flot-morskih-bezpilotnikiv-304191/
  118. Bode, Ingvild. “Falling Under the Radar: The Problem of Algorithmic Bias and Military Applications of AI.” ICRC Law & Policy Blog, March 14, 2024. https://blogs.icrc.org/law-and-policy/2024/03/14/falling-under-the-radar-the-problem-of-algorithmic-bias-and-military-applications-of-ai/
  119. “Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human–Machine Interaction.” SIPRI, June 2021. https://www.sipri.org/publications/2021/policy-reports/autonomous-weapon-systems-and-international-humanitarian-law-identifying-limits-and-required-type
  120. “What Can Generative AI Red-Teaming Learn from Cyber Red-Teaming?” Carnegie Mellon University Software Engineering Institute, 2025. https://www.sei.cmu.edu/documents/6301/What_Can_Generative_AI_Red-Teaming_Learn_from_Cyber_Red-Teaming.pdf
  121. “Docklands drone swarm accident highlights importance of system knowledge, active alerting.” Australian Transport Safety Bureau, July 15, 2025. https://www.atsb.gov.au/media/news-items/2025/docklands-drone-swarm-accident-highlights-importance-system-knowledge-active-alerting
  122. “US Navy’s Long-Awaited Autonomous Boat Test Ends In Failure Off California.” Marine Insight, August 21, 2025. https://www.marineinsight.com/shipping-news/us-navys-long-awaited-autonomous-boat-test-ends-in-failure-off-california/
  123. Mahadzir, Dzirhan. “Navy Tests Autonomous Vessels in Recent Multilateral Exercises.” USNI News, October 6, 2025. https://news.usni.org/2025/10/06/navy-tests-autonomous-vessels-in-recent-multilateral-exercises
  124. “Governments are spending billions on ‘sovereign AI’ – but what is it?” The Guardian, October 9, 2025. https://www.theguardian.com/technology/2025/oct/09/governments-spending-billions-sovereign-ai-technology
  125. “Tactical data link.” Wikipedia. https://en.wikipedia.org/wiki/Tactical_data_link
  126. “Artificial intelligence arms race.” Wikipedia. https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race
  127. “Explainable Artificial Intelligence (XAI).” DARPA. https://www.darpa.mil/research/programs/explainable-artificial-intelligence
  128. Gibson, Hise O. “What an Army Commander Learned About Using AI to Combat Cyberattacks.” Harvard Business School Working Knowledge, July 15, 2025. https://www.library.hbs.edu/working-knowledge/what-an-army-commander-learned-about-using-ai-to-combat-cyberattacks
  129. “Military-Grade Artificial Intelligence (AI): Benefits, Risks and Ethical Considerations.” SciELO, 2024. https://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462024000200309
  130. “Reducing the Risks of Artificial Intelligence for Military Decision Advantage.” Center for Security and Emerging Technology, October 2021. https://cset.georgetown.edu/publication/reducing-the-risks-of-artificial-intelligence-for-military-decision-advantage/
  131. Shapovalova, Y., Vertebnyi, V., & Marsel, M. “Cybersecurity of Mesh Networks: Modern Challenges and Innovations.” Kharkiv National University of Economics, 2024. http://repository.hneu.edu.ua/bitstream/123456789/34902/1/%D0%A2%D0%B5%D0%B7%D0%B8%20%D0%A8%D0%B0%D0%BF%D0%BE%D0%B2%D0%B0%D0%BB%D0%BE%D0%B2%D0%B0_%D0%92%D0%B5%D1%80%D1%82%D0%B5%D0%B1%D0%BD%D0%B8%D0%B9_%D0%9C%D0%B0%D1%80%D1%81%D0%B5%D0%BB%D1%8C.pdf
  132. “The Practical Role of ‘Test and Evaluation’ in Military AI.” Lawfare, October 1, 2024. https://www.lawfaremedia.org/article/the-practical-role-of--test-and-evaluation--in-military-ai
  133. “Adversarial Machine Learning Poses New Threat to National Security.” AFCEA Signal Magazine, October 1, 2025. https://www.afcea.org/signal-media/cyber-edge/adversarial-machine-learning-poses-new-threat-national-security
  134. “AI and the Future of Warfare.” Finabel, July 2024. https://finabel.org/wp-content/uploads/2024/07/FFT-AI-and-the-future-of-warfare-ED.pdf
  135. Pikner, Grant. “Multi-Domain Military Deception.” Military Review, March-April 2021. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/March-April-2021/Pikner-Military-Deception/
  136. “Allies and Artificial Intelligence: Obstacles to Operations and Decision-Making.” Texas National Security Review, March 2020. https://tnsr.org/2020/03/allies-and-artificial-intelligence-obstacles-to-operations-and-decision-making/
  137. “Military data links.” Bundeswehr, November 2023. https://www.bundeswehr.de/en/military-data-links-5676750
  138. Vincent, Brandi. “Navy experiment cut short after unmanned vessel flipped a support boat.” DefenseScoop, July 1, 2025. https://defensescoop.com/2025/07/01/navy-unmanned-vessel-accident-boat-ventura-channel-islands-california/
  139. “Bias in Military AI.” SIPRI, December 2024. https://www.sipri.org/sites/default/files/2024-12/background_paper_bias_in_military_ai_0.pdf
  140. “The AI Dual-Use Dilemma: Navigating the Risks of Generative AI.” StrongestLayer Blog. https://www.strongestlayer.com/blog/ai-dual-use-dilemma
  141. “US Navy’s drone fleet faces major setbacks in bid to counter China.” News.Az, August 21, 2025. https://news.az/news/us-navys-drone-fleet-faces-major-setbacks-in-bid-to-counter-china
  142. “Palantir Stock: Why the NGC2 security loophole may be more than just a hiccup.” Invezz, October 3, 2025. https://www.tradingview.com/news/invezz:1ae87840c094b:0-palantir-stock-why-the-ngc2-security-loophole-may-be-more-than-just-a-hiccup/
  143. “Artificial intelligence for decision support in C2 systems.” Swedish Defence Research Agency, 2018. https://foi.se/download/18.41db20b3168815026e010/1548412090368/Artificial-intelligence-decision_FOI-S--5904--SE.pdf
  144. “Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All.” Cato Institute Events, July 25, 2019. https://www.youtube.com/watch?v=lbYac5hm4BQ
  145. “A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making in the Use of Force.” Human Rights Watch, April 28, 2025. https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
  146. “What Is Data Poisoning?” Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-is-data-poisoning
  147. “What Can Generative AI Red-Teaming Learn from Cyber Red-Teaming?” Carnegie Mellon University Software Engineering Institute, 2025. https://www.sei.cmu.edu/documents/6301/What_Can_Generative_AI_Red-Teaming_Learn_from_Cyber_Red-Teaming.pdf
  148. “AI Safety and Automation Bias.” Center for Security and Emerging Technology, September 2021. https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-and-Automation-Bias.pdf
  149. “Palantir shares sink after leaked army memo highlights security vulnerabilities.” Investing Live, October 3, 2025. https://investinglive.com/stocks/palantir-shares-sink-after-leaked-army-memo-highlights-security-vulnerabilities-20251003/
  150. “Reducing the Risks of Artificial Intelligence for Military Decision Advantage.” Center for Security and Emerging Technology, October 2021. https://cset.georgetown.edu/wp-content/uploads/CSET-Reducing-the-Risks-of-Artificial-Intelligence-for-Military-Decision-Advantage.pdf
  151. “The Interpretation and Application of International Humanitarian Law in Relation to Lethal Autonomous Weapon Systems.” UNIDIR, 2021. https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems/
  152. “Explainable Artificial Intelligence (XAI).” DARPA. https://www.darpa.mil/research/programs/explainable-artificial-intelligence
  153. “From Advantage to Attack Surface: Adversarial AI is the Next Front in Cyber Warfare.” Breakpoint Labs. https://breakpoint-labs.com/from-advantage-to-attack-surface-adversarial-ai-is-the-next-front-in-cyber-warfare/
  154. “Reducing the Risks of Artificial Intelligence for Military Decision Advantage.” Center for Security and Emerging Technology, October 2021. https://cset.georgetown.edu/wp-content/uploads/CSET-Reducing-the-Risks-of-Artificial-Intelligence-for-Military-Decision-Advantage.pdf
  155. “US Navy’s Long-Awaited Autonomous Boat Test Ends In Failure Off California.” Marine Insight, August 21, 2025. https://www.marineinsight.com/shipping-news/us-navys-long-awaited-autonomous-boat-test-ends-in-failure-off-california/
  156. “The Backlash Against Military AI: Public Sentiment, Ethical Tensions, and the Future of Autonomous Warfare.” Trends Research & Advisory. https://trendsresearch.org/insight/the-backlash-against-military-ai-public-sentiment-ethical-tensions-and-the-future-of-autonomous-warfare/
  157. “Guiding Principles on Government Use of Surveillance Technologies.” Freedom Online Coalition. https://freedomonlinecoalition.com/guiding-principles-on-government-use-of-surveillance-technologies/
  158. “DoD Agency Propelling the Evolution of AI Red Teaming.” AFCEA Signal Magazine, October 1, 2024. https://www.afcea.org/signal-media/emerging-edge/dod-agency-propelling-evolution-ai-red-teaming
  159. “Why military AI needs urgent regulation.” Diplo. https://www.diplomacy.edu/blog/why-military-ai-needs-urgent-regulation/
  160. “Military AI: Challenges to Human Accountability.” Carnegie Council for Ethics in International Affairs. https://internationalpolicy.org/publications/military-ai-challenges-human-accountability/
  161. “Innovating Defense: Generative AI’s Role in Military Evolution.” U.S. Army, October 11, 2024. https://www.army.mil/article/286707/innovating_defense_generative_ais_role_in_military_evolution
  162. “Offensive and Defensive Use of Open-Source Information and Artificial Intelligence in Nuclear Security.” Idaho National Laboratory, 2022. https://inldigitallibrary.inl.gov/sites/sti/sti/Sort_57369.pdf
  163. “Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” Congressional Research Service, October 23, 2020. https://www.congress.gov/crs-product/IF11150
  164. “DoD Zero Trust Use Case.” Forward Networks. https://www.forwardnetworks.com/wp-content/uploads/2024/09/DoD-Zero-Trust-Use-Case.pdf
  165. “3 Common Challenges and Solutions when Implementing Zero Trust.” Tufin Blog. https://www.tufin.com/blog/3-challenges-and-solutions-implementing-zero-trust
  166. “What is AI Vulnerability Management?” SentinelOne. https://www.sentinelone.com/cybersecurity-101/cybersecurity/ai-vulnerability-management/
  167. “Zero Trust and Industry.” Defense Acquisition Magazine, March-April 2023. https://www.dau.edu/library/damag/march-april2023/zerotrustandindustry
  168. “Zero Trust.” Department of the Navy CIO. https://www.doncio.navy.mil/ContentView.aspx?id=17903
  169. “AI’s Growing Role in the Russia-Ukraine Conflict.” Army War College War Room, October 2, 2024. https://warroom.armywarcollege.edu/articles/ais-growing-role/
  170. “Securing Artificial Intelligence for Battlefield Effective Robustness (SABER).” DARPA. https://www.darpa.mil/research/programs/saber-securing-artificial-intelligence
  171. “Wireless mesh network.” Wikipedia. https://en.wikipedia.org/wiki/Wireless_mesh_network
  172. “Adversarial Machine Learning Poses New Threat to National Security.” AFCEA Signal Magazine, October 1, 2025. https://www.afcea.org/signal-media/cyber-edge/adversarial-machine-learning-poses-new-threat-national-security
  173. “What Are Adversarial Attacks on AI & Machine Learning?” Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-are-adversarial-attacks-on-AI-Machine-Learning
  174. “What is AI Red Teaming?” Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-is-ai-red-teaming
  175. “Aviation Brief: Why Cyber Risk Has Taken Off.” S&P Global Ratings, October 2025. https://www.spglobal.com/ratings/en/regulatory/article/aviation-brief-why-cyber-risk-has-taken-off-s101647100
  176. Polyakova, Alina, and Chris Meserole. “How Autocrats Weaponize AI, and How to Fight Back.” Journal of Democracy, October 2024. https://www.journalofdemocracy.org/online-exclusive/how-autocrats-weaponize-ai-and-how-to-fight-back/
  177. “The Global AI Race: Competing Visions and Rising Risks.” *The International Risk
