A series of live air combat simulations carried out by the US Air Force, pitting an autonomously controlled F-16 against experienced pilots in similar fighters, reportedly showed the AI-flown machine could hold its own, and would probably have outperformed less expert pilots. At this significant milestone in military robotics, it is worth taking a look at the types of AI being applied to the discipline of air combat.

Naturally, the three main flavours of machine learning are prominent. Supervised learning is used to train models on labelled data, which is essential for pattern identification and decision-making in combat, such as recognising enemy aircraft and predicting their movements so that the system can counter them. Unsupervised learning is valuable for clustering and identifying unknown patterns in data that may indicate new enemy tactics or unexpected environmental conditions. Both are supported by reinforcement learning, which is used to train AI agents to make decisions through trial and error, optimising air-to-air combat strategies by simulating many scenarios and learning from each encounter.

Deep learning is applied in the form of convolutional neural networks (CNNs), recurrent neural networks (RNNs) and long short-term memory (LSTM) networks. CNNs are used for image and video analysis, supporting target recognition and tracking tasks. Meanwhile, RNNs and LSTMs carry out sequential processing of historical and real-time data that can help predict a foe's manoeuvres.

Autonomous decision-making algorithms that take advantage of expert systems and Bayesian networks are also employed.
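The trial-and-error loop described above can be sketched with tabular Q-learning on a toy engagement model. Everything here is an illustrative assumption: the three abstract states, the three manoeuvres and the hand-coded reward scheme stand in for a real combat simulator, and bear no relation to any actual training environment.

```python
import random

random.seed(0)

# Toy dogfight abstraction (illustrative assumptions, not a real simulator):
# states are relative geometries, actions are manoeuvres.
STATES = ["neutral", "offensive", "defensive"]
ACTIONS = ["break_turn", "barrel_roll", "pursue"]

def step(state, action):
    """Hand-coded stand-in for a combat simulator: returns (next_state, reward)."""
    if state == "offensive":
        # Pursuing from advantage keeps the advantage and scores.
        return ("offensive", 1.0) if action == "pursue" else ("neutral", 0.0)
    if state == "defensive":
        # A break turn escapes; anything else stays pinned and takes fire.
        return ("neutral", 0.0) if action == "break_turn" else ("defensive", -1.0)
    # Neutral: pursuing converts to an offensive position.
    return ("offensive", 0.0) if action == "pursue" else ("neutral", 0.0)

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)
        for _ in range(20):  # bounded engagement length
            # Epsilon-greedy: mostly exploit, sometimes explore.
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda act: q[(s, act)]))
            s2, r = step(s, a)
            # Standard Q-learning update from each simulated encounter.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
best = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
```

After training, the learned policy pursues when it holds the advantage and breaks away when defensive, a behaviour recovered purely from simulated encounters rather than hand-written rules.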
Expert systems are built to encapsulate human expertise in sets of rules that provide a foundational layer for decision-making in well-defined scenarios, while the strength of Bayesian networks lies in probabilistic reasoning and decision-making under uncertainty, adding value in assessing the risks and potential outcomes associated with different actions. Robotic process automation takes on routine tasks and system checks, freeing computational resources for complex decision-making in combat.

AI developers have also turned to nature for inspiration. Swarm intelligence uses a small number of simple rules to enable large groups of autonomous robotic systems, such as aircraft, to coordinate manoeuvres without the need for centralised control or large amounts of information sharing. Multi-agent systems complement swarm intelligence, enabling groups of AI agents to share information and coordinate combat actions to achieve high-level objectives.

Computer vision is the type of AI exploited in the commercially derived and dedicated military drones used by both sides in the Ukraine war, and it is also a component of AI-based autonomy in fighters. Real-time processing of camera imagery enables navigation, and the detection, recognition and engagement of targets. Natural language processing is designed to facilitate communication between human pilots and AI systems, as well as between multiple autonomous systems.

Key US programmes in this area include: DARPA's Air Combat Evolution (ACE), focused on developing algorithms capable of executing complex manoeuvres in dogfighting, defined as combat within visual range; AlphaDogfight, which pitted AI against human pilots in virtual simulations; and the US Air Force's Skyborg programme, centred on AI-driven Loyal Wingman combat drones designed to support crewed combat aircraft.
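The swarm-intelligence idea of a few local rules producing coordinated group behaviour can be sketched in a few lines. This is a minimal boids-style illustration under stated assumptions: the two rules (cohesion toward neighbours, separation to avoid crowding), their weights and the 2D point-mass agents are all hypothetical, chosen only to show that a scattered group self-organises into a compact formation with no central controller.

```python
import math
import random

def swarm_step(positions, cohesion=0.05, separation=0.15, min_dist=1.0):
    """Advance every agent one step using only two local rules."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        # Rule 1 (cohesion): drift toward the centre of the other agents.
        cx = sum(p[0] for p in others) / len(others)
        cy = sum(p[1] for p in others) / len(others)
        vx, vy = cohesion * (cx - x), cohesion * (cy - y)
        # Rule 2 (separation): push away from any agent that is too close.
        for ox, oy in others:
            d = math.hypot(ox - x, oy - y)
            if 0 < d < min_dist:
                vx -= separation * (ox - x) / d
                vy -= separation * (oy - y) / d
        new_positions.append((x + vx, y + vy))
    return new_positions

def spread(positions):
    """Mean distance of agents from the flock centroid."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / len(positions)

random.seed(1)
flock = [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(12)]
spread_before = spread(flock)
for _ in range(100):
    flock = swarm_step(flock)
spread_after = spread(flock)
```

Each agent reads only the positions of its fellows and applies the same two rules, yet the group contracts into a loose formation held apart by the separation rule, which is the essence of coordination without centralised control.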
Other nations, notably Russia and China, have similar programmes, which bring the long-anticipated possibility of robot-on-robot combat closer to reality.

August/September 2024 | Uncrewed Systems Technology | PS | AI for fighters