
AI | Focus

Developers can use pre-trained AI building blocks, including models for detecting and avoiding obstacles and for executing precision landings, as well as simulations of weather, physics and the sensors used on a UAV.

CNN frameworks are also being used for visual navigation on a UAV during a mission. A camera on the UAV captures images of the ground, which are matched against a framework running on a microcontroller. This identifies structures on the ground that can be matched to the structures on a map to provide information about the UAV's location. That can be combined with data from the IMU and GNSS navigation and fed into the autopilot. Using the image data and AI inference during the flight provides more accurate positioning if there are issues with the GNSS (a sketch of the matching step appears below).

Radar

AI is also used to improve the performance of radar sensors in autonomous vehicles. It can reduce the noise in the point cloud, providing more reliable data on the position and speed of objects.

Radar and lidar pose different challenges for an AI framework than visual images do. Because the input is a point cloud rather than a denser array of pixels, even the smallest changes to the incoming data during inference can be enough for the output to collapse. Objects are then not detected, or are detected incorrectly, which would be devastating in the autonomous driving use case.

The framework is therefore trained using noisy data with the desired output values, testing different models to identify the smallest and fastest ones by analysing the memory space and the number of computing operations required per denoising pass. The most efficient models are then compressed by quantisation to reduce the bit widths, that is, the number of bits used to store the model parameters. This produces a framework with an accuracy of 89% using just 8 bits, equivalent to a framework using 32 bits but occupying only 218 kbytes of memory, a reduction in storage space of 75%.

Hardware

AI accelerators for image processing are now being added to low-power microcontrollers to handle inference with low power consumption for UAVs and AGVs. The latest dynamically reconfigurable processor (DRP-AI) accelerator adds a compiler framework and an open-source deep-learning compiler, with a translator tool that converts the framework into code that can run on the DRP-AI.

Framework development tools such as the open-source ONNX tool, PyTorch and TensorFlow are used to build up the frameworks, although there are other proprietary tools that are optimised for certain applications, particularly for driverless cars.

One challenge for embedded systems developers who want to implement machine learning is keeping up with constantly evolving AI models. Additional tools for the DRP-AI engine allow engineers to expand AI frameworks and models that can be converted into executable formats, allowing them to bring the latest image recognition capabilities to embedded devices using new AI models.
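To make the matching step concrete: the article describes CNN-based matching, but the same align-camera-frame-to-map operation can be sketched with classical ORB feature matching, which keeps the example small and dependency-light. The file names, map format and use of OpenCV here are illustrative assumptions, not details from the article.

```python
import cv2
import numpy as np

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)   # downward-facing camera image
map_tile = cv2.imread("map_tile.png", cv2.IMREAD_GRAYSCALE)    # georeferenced map excerpt

# Detect and describe structures in both images
orb = cv2.ORB_create(nfeatures=1000)
kp_f, des_f = orb.detectAndCompute(frame, None)
kp_m, des_m = orb.detectAndCompute(map_tile, None)

# Match camera features to map features, keeping only confident matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_f, des_m, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

src = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_m[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the image centre into map pixel coordinates. Converting map
# pixels to latitude/longitude (for fusion with the IMU and GNSS data
# in the autopilot) depends on the tile's georeferencing, assumed here.
h, w = frame.shape
centre = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
print("UAV ground position in map pixels:", centre.ravel())
```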
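The train-on-noisy-data approach for radar point clouds might look like the following PyTorch sketch: a deliberately small per-point network is trained to map noisy points back to clean targets, and its parameter count and memory footprint are printed so that candidate models can be compared on size, as the article describes. The architecture, point format and synthetic data are assumptions for illustration, not the model from the article.

```python
import torch
import torch.nn as nn

# Each training sample: N radar points with (x, y, z, doppler) channels
N_POINTS, CHANNELS = 128, 4

# Deliberately small per-point MLP: the selection criteria are memory
# footprint and operations per denoising pass, so layer widths stay tiny
model = nn.Sequential(
    nn.Linear(CHANNELS, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, CHANNELS),           # predicts the denoised point
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    clean = torch.randn(64, N_POINTS, CHANNELS)       # placeholder clean targets
    noisy = clean + 0.1 * torch.randn_like(clean)     # synthetic sensor noise
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)               # desired output = clean points
    loss.backward()
    opt.step()

# Selection metric: parameter memory at 32-bit float precision
params = sum(p.numel() for p in model.parameters())
print(f"{params} parameters, {params * 4 / 1024:.1f} KB at fp32")
```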
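The quantisation step maps onto standard tooling. A minimal sketch using PyTorch's post-training dynamic quantisation: storing each parameter in 8 bits instead of 32 is what yields the 75% storage reduction quoted above. The model here is a stand-in, not the framework from the article.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 4),
)

# Post-training dynamic quantisation: weights are stored as 8-bit
# integers plus per-tensor scale/zero-point instead of 32-bit floats
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

params = sum(p.numel() for p in model.parameters())
print(f"fp32 storage: {params * 4} bytes")
print(f"int8 storage: ~{params} bytes  (8 bits vs 32 bits = 75% smaller)")
```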
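The toolflow described above, building a model in PyTorch or TensorFlow and then translating it for an accelerator, typically passes through ONNX as the interchange format. The DRP-AI translator itself is a vendor tool, so this sketch stops at the ONNX export it would consume; the model, file name and input shape are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

# Export to ONNX, the open interchange format that accelerator-specific
# translator tools (such as the one for the DRP-AI engine) take as input
dummy = torch.randn(1, 3, 224, 224)    # 224-pixel input, as discussed below
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["image"], output_names=["logits"],
                  opset_version=13)
```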
Image sensor resolution illustrates the challenge: sensors 1080 pixels wide are becoming more popular, but some of the most common frameworks have been trained on images that are only 224 pixels wide. While it is still possible to use these frameworks, more processing has to be performed for scaling.

[Image: Using AI to identify features on the ground for UAV navigation (Courtesy of UAV Navigation)]

[Image: The Atlan processor for driverless cars aims to jump a generation to boost the performance of the AI to more than 1000 TOPS (Courtesy of Nvidia)]
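A minimal sketch of that extra scaling work, assuming a 1920 x 1080 capture and a network trained on 224 x 224 inputs:

```python
import cv2

frame = cv2.imread("frame_1080.png")            # e.g. a 1920 x 1080 capture

# Extra work per frame: crop to a square region of interest, then
# downsample to the 224 x 224 input the network was trained on
h, w = frame.shape[:2]
side = min(h, w)
crop = frame[(h - side) // 2:(h + side) // 2,
             (w - side) // 2:(w + side) // 2]
small = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_AREA)
```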
