
PS | Visual navigation

Along with many other animals, we humans are visual creatures who rely heavily on the sense of sight to find our way through complex, often hazardous environments – so much so that visual impairments force people to make major adaptations in order to get around safely (writes Peter Donaldson).

As it comes so naturally to us, the difficulty of providing uncrewed vehicles with a reliable autonomous visual navigation capability comes as a shock, but it is a deep problem that is exercising scientists and engineers involved in sensing, computer vision and AI.

Visual navigation is often treated as something to fall back on if GNSS cannot be relied on, and it is valuable in that context, but its usefulness goes beyond that. Even when available, GNSS has to be used in conjunction with a map to be useful for navigation, along with some reliable means of avoiding obstacles.

While better sensors with higher resolution and the ability to provide range information for each pixel are very important, building the picture and making sense of it happens in the computer that processes the sensor inputs, just as human vision is processed in the brain. Computer vision underpins visual navigation, and is in turn underpinned by AI, including a branch of machine learning known as deep learning (DL) and by convolutional neural networks (CNNs).

According to IBM's description, DL uses algorithms that process unstructured data such as images, and automates feature extraction. For example, in a DL system tasked with distinguishing between animals, the algorithms can create a hierarchy of features to determine which are most important in telling one species from another, adjusting themselves so that they make more accurate judgements about new images.

The types of learning that DL is capable of are generally referred to as supervised, unsupervised and reinforcement learning. In supervised learning, the algorithm relies on data sets labelled by human experts, while in unsupervised learning the algorithm detects patterns in the image data, organising it into clusters with distinguishing characteristics. In reinforcement learning, the algorithm takes actions in a simulated environment and is rewarded when those actions bring it closer to a goal set by the programmer, the algorithm being programmed to maximise its own rewards.

In recent years, CNNs have brought major improvements in computer vision's ability to recognise objects, aided by increases in computing power generally and the use of GPUs in particular. CNNs use matrix multiplication, a principle from linear algebra, to identify patterns in images, along with other mathematical techniques to minimise errors in network outputs (a minimal sketch of the matrix-multiplication step appears below).

An academic review ("Deep Learning for Visual Navigation of Unmanned Ground Vehicles", O'Mahony, Murphy et al, 2018) says there are three remaining 'hard challenges' for AI and computer vision: dynamic camera calibration, 3D vision and sensor data fusion.

Changing environmental conditions often mean that cameras have to be adjusted in real time to compensate for vibration, subject motion, lighting and range, for example, and work on using AI for that is ongoing. Meanwhile, newly affordable 3D vision brings problems of its own, such as stereoscopic image matching for depth perception in low-contrast scenes.
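To make that stereo-matching problem concrete, here is a minimal sketch using OpenCV's classical block matcher; the file names and parameter values are illustrative assumptions, not anything from the review cited above. Block matching struggles in exactly the low-contrast regions the article mentions, returning invalid disparities there.

```python
# Minimal stereo depth sketch using OpenCV's classical block matcher.
# File names and parameter values are illustrative assumptions.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

# numDisparities must be a multiple of 16; blockSize must be odd
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# compute() returns fixed-point disparities scaled by 16; low-contrast
# patches match ambiguously and come back as invalid (negative) values
disparity = stereo.compute(left, right) / 16.0

# with a calibrated rig, per-pixel range then follows from the usual
# relation: depth = focal_length_px * baseline_m / disparity_px
```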
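Returning to the matrix-multiplication point made earlier, the sketch below unrolls the image patches under a filter into the rows of a matrix (the "im2col" trick) so that an entire convolution collapses into a single matrix product – the formulation that lets GPUs accelerate CNN layers. All names and array sizes here are illustrative; note too that, as in most DL frameworks, this computes cross-correlation under the name convolution.

```python
# A minimal sketch of reducing a CNN layer's 2-D convolution to one
# matrix multiplication via "im2col". Shapes and names are illustrative.
import numpy as np

def im2col(image, k):
    """Unroll every k x k patch of a 2-D image into the rows of a matrix."""
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.array(rows)                      # shape (num_patches, k*k)

def conv2d(image, kernel):
    """'Valid' convolution expressed as a single matrix-vector product."""
    k = kernel.shape[0]
    patches = im2col(image, k)                 # each row is one receptive field
    out = patches @ kernel.ravel()             # the matrix-multiplication step
    h, w = image.shape
    return out.reshape(h - k + 1, w - k + 1)

image = np.random.rand(8, 8)
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)  # Sobel-style edge filter
print(conv2d(image, edge_kernel).shape)        # (6, 6) feature map
```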
Lastly, there is still much to be done to usefully fuse data from different sensors, such as cameras and Lidars, so that ground vehicles, for example, can navigate unknown environments autonomously and steer reliably around static and dynamic obstacles in changing conditions. And that is a challenge even for humans.
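As a footnote on that last challenge, the geometric core of camera-Lidar fusion is simply projecting each Lidar return into the image so that pixels can be tagged with a range; the hard part the researchers point to is doing this robustly under miscalibration, motion and changing conditions. In the sketch below, the intrinsics K and extrinsics R, t are assumed calibration values and the point cloud is synthetic.

```python
# Hedged sketch of the geometric core of camera-Lidar fusion: projecting
# Lidar points into the image plane. All calibration values are assumed.
import numpy as np

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsics (assumed)
R = np.eye(3)                           # rotation, Lidar frame to camera frame
t = np.array([0.0, -0.1, 0.0])          # translation in metres (assumed)

def project(points_lidar):
    """Project Lidar points into the image; return pixel coords and ranges."""
    cam = points_lidar @ R.T + t        # express points in the camera frame
    cam = cam[cam[:, 2] > 0.1]          # keep only points in front of the camera
    uv = cam @ K.T                      # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]         # perspective divide
    return uv, np.linalg.norm(cam, axis=1)

# synthetic point cloud roughly 10 m ahead of the sensor, for illustration
points = np.random.uniform(-5.0, 5.0, (1000, 3)) + np.array([0.0, 0.0, 10.0])
pixels, ranges = project(points)        # each row of `pixels` indexes the image
```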
