Issue 40 | Unmanned Systems Technology | October/November 2021
The system used hundreds of thousands of hours of data in a 19 Tbyte database. The data was collected from systems operating in a number of taxis and aircraft, with offline post-processing to achieve higher accuracy for training the framework.

AI in air traffic control

Reinforcement learning and rule-based AI are also being used to control fleets of UAVs from different suppliers and operators. This is essentially a large-scale, heterogeneous swarm implementation that combines reinforcement learning with a decision tree and real-time risk assessments. The goal is to launch as many missions as possible in a given area in a given period of time.

The AI framework starts with path planning, where a set of paths for the UAVs takes into account problems that are known to occur. There is extensive static data on the performance limitations of each UAV, which comes from both the model specification and the operator's history of previous flights, combined with environmental data such as wind speed, cloud cover and other factors that might influence the operation of the craft. This also includes environmental signal maps showing where GPS signal strength varies and where 4G or 5G comms links are stronger or weaker.

Each UAV can also be a data collector itself, for example by providing information about the wind and the quality of RF links. This data can be used to modify current missions in real time and to plan new flights. More than 5000 data points are collected every 500 ms per UAV on a mission, which with up to 5000 UAVs in the air amounts to a lot of data.

This data is fed into an offline data analyser database that builds a detailed real-time model of the environment, which is used to plan the paths. It is kept for a year or so in a database that is more than 1 Tbyte in size, not including the video feeds sent from the aircraft.
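As a rough check of the telemetry figures quoted above (5000 data points per UAV every 500 ms, up to 5000 UAVs airborne), the aggregate ingest rate can be sketched as follows. The per-point byte size is an assumption for illustration; the article does not state it.

```python
# Back-of-the-envelope telemetry load for the fleet described above.
# Rates come from the article; BYTES_PER_POINT is an assumed value.

POINTS_PER_UAV_PER_INTERVAL = 5_000   # data points per UAV per interval
INTERVAL_S = 0.5                      # collection interval: 500 ms
MAX_UAVS = 5_000                      # UAVs simultaneously in the air
BYTES_PER_POINT = 8                   # assumption: one 64-bit value per point

points_per_second = POINTS_PER_UAV_PER_INTERVAL / INTERVAL_S * MAX_UAVS
bytes_per_second = points_per_second * BYTES_PER_POINT

print(f"{points_per_second:,.0f} points/s")       # 50,000,000 points/s
print(f"{bytes_per_second / 1e6:,.0f} MB/s")      # 400 MB/s before compression
```

Even under this conservative assumption the fleet generates hundreds of megabytes per second, which explains why the detailed modelling is done in an offline analyser rather than on the link.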
The data analyser looks at the speed, separation and density of aircraft and any potential risks, and creates flight plans that take all these factors into account for thousands of aircraft, achieving the best density of operation while remaining safe. Because the path planning takes into account things that might happen during a mission, such as dangerous path junctions with multiple UAVs in the same airspace, those areas can be given more space and buffer zones to prevent potential accidents.

For real-time operation, the AI is used to predict which UAV has the highest chance of violating its flight path. For example, if a planned stop is missed, there is a higher risk that the next turn might also be missed, so neighbouring UAVs will be told to increase their separation, particularly around a change in direction.

However, 5000 UAVs in the same area is a challenge for CPU performance. The system therefore focuses on the 15 aircraft that are most likely to violate the rules, monitoring them more closely using the risk analysis, path planning and environmental conditions to establish whether there might be additional risk from weather or from loss of the satellite signal or cellular link.

A UAV can also lose contact with the air traffic control system altogether. In those circumstances, the AI on board can fly the aircraft autonomously even if it loses GPS or other navigation sensors. For example, the AI can use a vision camera to identify a position it can fly to safely, so that other sensors can be used. This onboard AI can be 40% more accurate than a Kalman filter, which equates to 7 or 8 m over 1 km, allowing the UAV to safely reach an area of GPS coverage or an emergency landing zone while other aircraft are kept clear by the air traffic control system.

This combination of cloud and local AI is key, as there is a trade-off between the decision time of the central AI and that of the local AI, depending on the real-time nature of the data.
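One way to implement the "focus on the 15 riskiest aircraft" step described above is a top-k selection over per-UAV risk scores. The sketch below is illustrative only: the field names and the weighting in `risk_score` are assumptions, not the article's actual risk model.

```python
import heapq
from dataclasses import dataclass

@dataclass
class UavState:
    uav_id: str
    missed_waypoints: int   # recent deviations from the planned path
    weather_risk: float     # 0..1, from the environmental model
    link_quality: float     # 0..1, satellite/cellular link health

def risk_score(u: UavState) -> float:
    # Placeholder weighting: a missed stop raises the chance that the
    # next turn is also missed, so path deviations dominate the score.
    return 2.0 * u.missed_waypoints + u.weather_risk + (1.0 - u.link_quality)

def highest_risk(fleet: list[UavState], k: int = 15) -> list[UavState]:
    # Select the k most likely rule violators for closer monitoring,
    # so the CPU budget is spent on them rather than the whole fleet.
    return heapq.nlargest(k, fleet, key=risk_score)

fleet = [
    UavState("A1", missed_waypoints=1, weather_risk=0.2, link_quality=0.9),
    UavState("B2", missed_waypoints=0, weather_risk=0.8, link_quality=0.4),
    UavState("C3", missed_waypoints=2, weather_risk=0.1, link_quality=0.95),
]
watchlist = highest_risk(fleet, k=2)
print([u.uav_id for u in watchlist])  # C3 first: most missed waypoints
```

Scoring and ranking thousands of aircraft this way is cheap; the expensive part the article alludes to is the detailed monitoring applied only to the shortlisted aircraft.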
For example, if a UAV is landing in a given area, the analysis of ground images needed to determine the landing position to within 10 cm has to be handled on the UAV's AI processor. After the UAV lands, that raw data can be added to the analysis model for offline processing.

The comms with the UAVs also helps the system determine the latency of the network. It typically takes 1-1.5 seconds from the decision engine recommending a course of action to the system delivering it to the UAV, with a further 4-5 seconds to implement it. The aim is to predict collision events up to 20 seconds ahead, as that correlates with the 30 m separation for UAVs. Beyond that time frame there are repercussions, and the paths of all the UAVs would need to be recalculated, as there would be too many knock-on effects.

As the system knows the flight path of every UAV in the airspace, it can take advantage of every empty spot for missions. In those cases the system calculates more than 20 seconds ahead, and alters the course of UAVs that are 5 or 10 minutes away to avoid the area, but the main focus is on interactions that are likely to occur in the 5-20 second time frame. At less than 5 seconds, it is the task of the local processor to calculate any evasive manoeuvre. The local processing typically uses 5-10% of the compute capability on the UAV, although this can rise to 90% for real-time navigation, so there is a trade-off.

Machine learning is being used to improve the quality of signals from a digital optical gyroscope (Courtesy of Advanced Navigation)
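The split in decision authority described in this section, where the cloud AI handles the 5-20 second window and anything closer must be handled onboard, can be sketched as a simple dispatcher. The thresholds come from the article; the function itself and its return labels are illustrative.

```python
def assign_decision_maker(time_to_event_s: float) -> str:
    """Route a predicted conflict to the appropriate decision layer.

    The cloud round trip is roughly 1-1.5 s to deliver a recommendation
    plus 4-5 s to implement it, so any event under about 5 s away can
    only be handled by the UAV's local processor.
    """
    if time_to_event_s < 5:
        return "local"    # onboard processor calculates an evasive manoeuvre
    if time_to_event_s <= 20:
        return "cloud"    # central planner's main focus window
    return "replan"       # alter the courses of distant UAVs early

print(assign_decision_maker(3))    # local
print(assign_decision_maker(12))   # cloud
print(assign_decision_maker(300))  # replan
```

The 20-second upper bound matches the article's point that predicting further ahead forces a full recalculation of every path, which is better handled as early course changes for distant aircraft than as real-time conflict resolution.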