
Unpacking the offensive playbook: How adversaries could exploit AI vulnerabilities through data poisoning in CCA operations

By: Dave Bosko

Perspective: Offensive Cyber Doctrine

Introduction

As the U.S. Air Force accelerates deployment of its Collaborative Combat Aircraft (CCA) program, the promise of autonomous swarming drones teaming seamlessly with crewed fighters is closer to reality than ever before. Yet with every leap in AI-enabled autonomy comes a new cyber threat: the prospect of adversarial actors sabotaging these systems not with missiles or malware, but by subverting the AI's very intelligence through tactics known as data poisoning and adversarial AI.

The Offensive Playbook: Understanding the Threat

Consider a mock scenario: a mission somewhere in INDOPACOM, in which a CCA misclassifies a friendly F-35 as a hostile drone because one of its sensors was spoofed from Guam. How might that happen?

Data poisoning is the deliberate insertion of malicious, misleading, or subtly corrupted data into the training sets used to “teach” AI systems. In the context of CCA, this could mean anything from corrupted sensor feeds during simulation exercises to tampered datasets slipped into software supply chains. Once compromised, an AI can make catastrophic errors: misclassifying targets, ignoring threats, or even failing to return safely.
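A toy sketch can make the mechanism concrete. The example below is entirely hypothetical (made-up "sensor readings" and a deliberately simple nearest-centroid classifier, nothing resembling an actual CCA model): a handful of mislabeled training points dragged into the dataset is enough to shift the learned class boundary and flip a later classification.

```python
# Hypothetical illustration of training-set poisoning via label flipping.
# A nearest-centroid classifier is "trained" on 1-D sensor readings,
# where low readings indicate "friendly" and high readings "hostile".

def train_centroids(data):
    """Compute the mean reading per class label."""
    sums, counts = {}, {}
    for reading, label in data:
        sums[label] = sums.get(label, 0.0) + reading
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, reading):
    """Assign the label whose centroid lies closest to the reading."""
    return min(centroids, key=lambda label: abs(centroids[label] - reading))

clean = [(1.0, "friendly"), (2.0, "friendly"), (3.0, "friendly"),
         (7.0, "hostile"), (8.0, "hostile"), (9.0, "hostile")]

# Poisoned copy: an attacker slips in high readings relabeled "friendly",
# dragging the "friendly" centroid toward hostile territory.
poisoned = clean + [(8.5, "friendly"), (9.5, "friendly"), (10.0, "friendly")]

probe = 6.5  # a reading that should clearly read as "hostile"
print(classify(train_centroids(clean), probe))     # hostile on clean data
print(classify(train_centroids(poisoned), probe))  # friendly after poisoning
```

Real attacks against deep models are far subtler (the corrupted samples are crafted to look plausible), but the failure mode is the same: the model faithfully learns whatever the data teaches it, poisoned or not.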

Adversarial AI, meanwhile, involves crafting inputs, such as deceptive radar or image signals, that fool trained models in real time. For example, an enemy could generate spoofed sensor data that causes CCAs to misidentify enemy assets as friendly, or to ignore genuine threats entirely. Such techniques have been demonstrated against commercial self-driving cars and voice assistants, but military systems like CCA present a far more consequential target.
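The intuition behind such evasion attacks can be sketched with a toy linear "threat score" model (all weights, features, and thresholds below are invented for illustration). For a linear model, nudging each input feature against the sign of its weight lowers the score most efficiently; gradient-sign attacks on deep networks generalize this same idea.

```python
# Hypothetical evasion attack against a toy linear threat classifier.
WEIGHTS = [0.9, -0.4, 0.7]   # made-up sensor-feature weights
THRESHOLD = 1.0              # score above this => classify "hostile"

def threat_score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def classify(features):
    return "hostile" if threat_score(features) > THRESHOLD else "friendly"

def adversarial_perturb(features, epsilon):
    """Shift each feature by epsilon against its weight's sign,
    lowering the threat score with minimal per-feature change."""
    return [f - epsilon * (1 if w > 0 else -1)
            for w, f in zip(WEIGHTS, features)]

hostile_target = [1.2, 0.5, 0.8]   # scores 1.44 -> correctly "hostile"
spoofed = adversarial_perturb(hostile_target, epsilon=0.3)

print(classify(hostile_target))  # hostile
print(classify(spoofed))         # friendly: a small shift crosses the threshold
```

The perturbation here is small relative to each reading, which is the defining danger of adversarial inputs: to a human operator, the spoofed signal looks nearly identical to the original.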

Attack vectors in the CCA environment may include:

· Supply chain infiltration: Corrupting open-source or third-party data used in LLM training.

· Sensor spoofing: Feeding CCAs with adversarial sensor data during operational missions.

· Insider threats: Malicious actors within allied or contractor organizations introducing poisoned data or LLM updates.

· Simulation compromise: Manipulating digital twins or virtual training environments to introduce vulnerabilities.

Real-World Impacts: What’s at Stake

The consequences of successful data poisoning or adversarial attacks on CCA could be disastrous for the U.S.:

· Loss of mission autonomy: CCAs could act unpredictably or become unresponsive in critical moments.

· Fratricide or collateral damage: Misclassification may result in friendly-fire incidents or unintended civilian harm.

· Operational paralysis: Eroded trust in AI forces commanders to revert to slower, manual modes of operation.

Historical cyber incidents, such as the 2016 “Tay” chatbot poisoning or manipulated image datasets in facial recognition, demonstrate that even sophisticated AI systems are not immune to subtle, hard-to-detect corruption. In a military context, these vulnerabilities are potential force multipliers for adversaries.

Conclusion

As the Air Force’s CCA program leads the charge into next-generation air combat, understanding and anticipating the offensive tactics of data poisoning and adversarial AI is mission-critical. Cyber defense can no longer rely solely on traditional IT hardening; it must anticipate how adversaries may weaponize the data and algorithms themselves. The U.S. must invest in proactive countermeasures, including robust data verification, adversarial training, and continuous LLM auditing, if it hopes to stay ahead in the age of AI-enabled warfare.
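One concrete flavor of the data verification mentioned above can be sketched as a pinned-digest check (a minimal illustration, not an official CCA control; file names and contents below are invented): record a SHA-256 digest for every vetted training artifact, and refuse to train if any delivered artifact has drifted from its manifest entry.

```python
# Sketch: verify training artifacts against a trusted digest manifest.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_manifest(artifacts: dict, manifest: dict) -> list:
    """Return names of artifacts whose contents no longer match the
    digests recorded in the trusted manifest."""
    return [name for name, data in artifacts.items()
            if manifest.get(name) != digest(data)]

# Manifest recorded when the (hypothetical) dataset was vetted.
trusted = {"radar_tracks.csv": digest(b"track,label\n42,hostile\n")}

# Later, the supply chain delivers a silently altered file.
delivered = {"radar_tracks.csv": b"track,label\n42,friendly\n"}

tampered = verify_manifest(delivered, trusted)
print(tampered)  # training should halt when this list is non-empty
```

Digest pinning only catches tampering after vetting; it does nothing against data that was poisoned before the manifest was created, which is why it must be paired with the adversarial training and continuous auditing noted above.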
