interface, camera serial interface (MIPI CSI) or one of the many other protocols for which autopilot manufacturers must account. As the uncrewed industry demands more powerful hardware in higher quantities, autopilot makers are expanding their facilities: many produced thousands of autopilots in 2023, and plan to triple or quadruple that number this year.

With SAILs unfurled

In addition to key regulations such as DO-178C and DO-254, which are critical standards for autopilot software and hardware, new specific assurance and integrity level (SAIL) standards have been published by the European Union Aviation Safety Agency (EASA), with the Federal Aviation Administration (FAA) expected to publish close equivalents of its own. These lay out the degree of risk to be undertaken by an uncrewed system in operation – and hence the hardware and software robustness required – in tiers from SAIL-1 to SAIL-6, indicating the lowest to highest levels of risk certifiable under a Specific Operations Risk Assessment (SORA) flight application.

The higher categories, SAIL-5 and SAIL-6 (which might pertain to very large UAVs flying over populated areas), require particularly comprehensive validation and verification of UAV subsystems, potentially via Type Certification. Autopilots for such applications will be scrutinised at every step of how they have been designed, programmed, manufactured, tested and integrated. Other SAIL standards are still being written, with SAIL-3 closely influenced by the DAL D level of DO-254, and SAIL-4 yet to be defined.

Key functional requirements of autopilots going forward will include proving at least two key parameters. The first is the rarity of failures in such systems: it is likely that SAIL-3 will require the probability of failure to be below 1 in 10⁴, while SAIL-4 or SAIL-5 may specify 1 in 10⁹.
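The gap between a 1-in-10⁴ and a 1-in-10⁹ target is what drives designers towards redundancy. As a minimal sketch of that arithmetic (the per-channel figures below are hypothetical, chosen only to illustrate the effect; real values come from reliability analyses under DO-178C/DO-254, and redundant channels are never perfectly independent in practice):

```python
# Hypothetical illustration of why tighter SAIL-style failure targets
# effectively force redundant hardware. Not drawn from any real autopilot.

def independent_failure_prob(per_channel: float, channels: int) -> float:
    """Probability that every redundant channel fails, assuming independence."""
    return per_channel ** channels

single = 1e-5                                  # one hypothetical autopilot channel
duplex = independent_failure_prob(single, 2)   # roughly 1e-10 for two channels

assert single < 1e-4   # a single good channel might meet a 1-in-10^4 target
assert duplex < 1e-9   # a 1-in-10^9 target is out of reach without redundancy
```

The caveat about independence matters: common-mode failures (shared power, shared software defects) mean real systems fall short of this idealised multiplication, which is one reason dissimilar redundancy is sometimes required.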
The second proof point will be how reliably autopilots geofence their platforms against accidental breaches of permitted airspace. The software and hardware must robustly minimise the probability of an accidental slip outside geofenced boundaries, and then stop the vehicle from going much further in the highly unlikely event of a breach. One way to achieve that is to write processes into an autopilot for opening a parachute if the breach has been caused by severe damage that forces the aircraft into an uncontrolled descent. Alternatively, if a UAV is too large to be stopped or slowed by parachute, machine-vision software could analyse the terrain below and calculate the best control outputs (using whatever flight-control surfaces remain intact) for a safe emergency landing. Embedded maps from local authorities could also delineate acceptable locations for emergency landings.

Either way, these requirements will have a considerable impact on how UAV autopilots must be designed and built, particularly regarding the kinds of components, functionalities and tests that will need to go into them and their ancillaries.

His ARMs wide

At the veritable ‘core’ of every autopilot is a CPU or microcontroller, nowadays likely to be built around one of the latest ARM processors and responsible for processing flight-critical information inputs into control-surface commands. But as integrity requirements increasingly equate hardware’s airworthiness level with its level of redundancy, it is becoming more common for autopilots to come with two, three or more such cores.
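The geofencing-plus-escalation logic described above can be sketched in a few lines. This is a minimal illustration, not any manufacturer's implementation: the circular fence, the 50 m buffer and the `Failsafe` states are all assumptions made for the example, and real autopilots use polygonal volumes, altitude limits and far richer vehicle-health inputs.

```python
import math
from dataclasses import dataclass
from enum import Enum, auto

class Failsafe(Enum):
    """Escalating responses to a geofence check (illustrative states only)."""
    NONE = auto()            # well inside the permitted volume
    RETURN_INSIDE = auto()   # at or past the buffer; steer back while control remains
    PARACHUTE = auto()       # breach with loss of control; terminate the flight

@dataclass
class CircularGeofence:
    centre_lat: float        # degrees
    centre_lon: float        # degrees
    radius_m: float          # permitted radius around the centre
    buffer_m: float = 50.0   # hypothetical margin that triggers action early

    def distance_m(self, lat: float, lon: float) -> float:
        # Equirectangular approximation: adequate over the few km a fence spans
        earth_r = 6371000.0
        x = math.radians(lon - self.centre_lon) * math.cos(math.radians(self.centre_lat))
        y = math.radians(lat - self.centre_lat)
        return earth_r * math.hypot(x, y)

    def check(self, lat: float, lon: float, control_ok: bool) -> Failsafe:
        if self.distance_m(lat, lon) <= self.radius_m - self.buffer_m:
            return Failsafe.NONE
        # Past the buffer (or the fence itself): escalate by vehicle state
        return Failsafe.RETURN_INSIDE if control_ok else Failsafe.PARACHUTE
```

For example, with a 1,000 m fence centred on the origin, a position about 111 m out returns `NONE`, while one about 1,000 m out triggers `RETURN_INSIDE` if the aircraft still answers to its controls, and `PARACHUTE` if it does not.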
This requirement spirals outwards, as UAVs cannot claim real redundancy

“There is a growing need for AI, along with the processing power to enable it, in today’s autopilot systems, as well as copious connectivity”

[Figure: EASA’s specific assurance and integrity levels (SAILs) detail targets in the rarity of failures and reliability of geofencing for autopilot certifiability (Image courtesy of UAV Navigation)]

Uncrewed Systems Technology | April/May 2024
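When an autopilot does carry three cores, their outputs have to be reconciled before reaching an actuator. One classic pattern, sketched here from first principles rather than from any particular autopilot, is mid-value selection, where the median of the three lane outputs is passed on so that a single faulty lane is always outvoted:

```python
def mid_value_select(a: float, b: float, c: float) -> float:
    """Pass the median of three redundant lane outputs to the actuator.

    Illustrative triplex voting sketch: one lane stuck at an extreme value
    can never drive the control surface, because the median ignores it.
    """
    return sorted((a, b, c))[1]

# Two healthy lanes agree closely; the third has failed hard-over
assert mid_value_select(1500, 1501, 9999) == 1501
```

A persistent disagreement between one lane and the other two can additionally be logged to flag that lane as failed, which is part of why redundancy requirements spiral outwards into monitoring logic, cross-channel data links and duplicated sensors.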