This SNN approach consumes about one-hundredth of the power of a GPU running a standard framework, and a fifth of that of an optimised chip. It also scales well: a network five times larger uses only 30% more power.

Power consumption has been a challenge for machine learning-based image recognition in aerial systems, as a GPU performing image recognition can draw currents of up to 10 A, limiting its use to larger UAVs whose internal combustion engine can generate enough electrical power.

However, smaller system-on-chip (SoC) devices have been used to implement a visual navigation system on a micro-UAV weighing just 27 g. This uses a fixed-point, closed-loop adaptation of the DroNet CNN, which is already optimised for autonomous navigation on standard-sized UAVs. The system operates on a power budget of just 64 mW to deliver a throughput of 6 frames/s, rising to 18 frames/s within a 284 mW power budget.

Neural networks are being used for a visual navigation system that uses a feed from a video camera (Courtesy of UAV Navigation)

The pattern recognition capability is also used on the ground in different ways. For example, it can be used to calibrate autopilot systems, with the neural network trained on data generated from sensors and other autopilots. This enables automated test pattern procedures across all autopilot systems, and it can also find hidden patterns in the test data that point to potential problems later on.

One interesting area of development for larger UAVs is to use image recognition on the ground, for example to steer a UAV around an airport instead of relying on a human remote operator. This requires a relatively small set of images, based around the visual signs and lights provided for human pilots. Demonstrating this capability could be a major step in allowing UAVs to operate in civilian airspace.

Having machine learning on board a UAV for movement around an airport opens up another intriguing capability. A DNN is well-suited to identifying the arm movements of a marshal on the apron to control a UAV directly. The network can be trained on the set of arm movements, seen from different angles and with many repetitions, to ensure accurate recognition. This capability is currently being tested, and highlights the breadth of capabilities that machine learning can provide.

Having machine learning on a UAV opens up the capability for a marshal on an airport apron to control the vehicle directly

Neural network technology is moving from large, general-purpose GPUs running generic frameworks to more heterogeneous SoC designs that combine CPU and GPU cores with neural network accelerators. The accelerators are tuned to specific frameworks, combining hardware designs, software tools and simulation. This will bring a broader range of AI chips optimised for different unmanned systems, along with software tools and APIs to simplify the development, training and deployment of the neural networks. That in turn will open up even more applications.

Acknowledgements

The author would like to thank Miguel Angel de Frutos Carro at UAV Navigation, Andrew Grant and Russell Jones at Imagination Technologies, Dinakar Munagala at Blaize, Nick Ni at Xilinx, Ilja Ocket at imec and Tony King-Smith at AImotive for their help with researching this article.
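The marshalling example above is described only at the level of the approach; the article names no framework or network design. As a rough illustration of what such a gesture classifier might look like, the sketch below uses PyTorch (an assumption, not something named in the article), with an invented MarshalSignalNet model, placeholder layer sizes and random tensors standing in for labelled camera frames of apron signals.

```python
import torch
import torch.nn as nn

class MarshalSignalNet(nn.Module):
    """Small CNN mapping a camera frame to one of N marshalling signals.
    Layer sizes and class count are illustrative, not from any cited system."""
    def __init__(self, num_signals: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_signals)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))  # (B, num_signals)

# One training step on a placeholder batch: in practice the frames would show
# marshals giving each signal from many angles, with many repetitions.
model = MarshalSignalNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

frames = torch.randn(4, 3, 96, 96)       # stand-in for labelled camera frames
labels = torch.randint(0, 8, (4,))       # stand-in for signal labels

optimiser.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimiser.step()
```

As the article notes, accurate recognition depends on covering each signal from different viewpoints with many repetitions; for deployment on a low-power SoC, such a network would also likely be quantised to fixed point, much as the DroNet adaptation described above.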