Unmanned Systems Technology 005 | Selex ES Falco UAV | Sense and avoid systems | RCV Engines DF70 | DSEI show report | Fuel cells | CUAV Expo, InterDrone and CUAV Show reports | SLAM

Insight | Simultaneous localisation and mapping

…of the vehicle's location. This means that even though the absolute positions of the landmarks may be uncertain, their positions relative to each other are known quite accurately. And the more these correlations grow with new observations from different locations, the more accurate the solution becomes. Then, by effectively turning the problem on its head and assuming it knows with certainty where the landmarks are, the SLAM algorithm can produce increasingly accurate estimates of where the vehicle is in relation to them.

This is another aspect of Bayesian logic, in that it can predict a system state, such as a vehicle position, from an observation, and predict that it will make a particular observation based on a 'known' system state. For example, "If I can see that tree 10 m that way then I must be here", or "If I am here then I should be able to see that tree 10 m that way". This is a prime example of an ability that comes naturally and easily to humans but not to machines.

In 2006, robotics scientists Hugh Durrant-Whyte and Tim Bailey wrote, "The most important insight into SLAM was to realise that the correlations between landmark estimates increase monotonically as more and more observations are made. Practically, this means that knowledge of the relative location of landmarks always improves and never diverges, regardless of robot motion."

Refining the Kalman filter

An algorithm such as a Kalman filter-based SLAM algorithm can be thought of as a mathematical model of how some portion of the real world works and, like all models, it has to make some simplifying assumptions, which are embodied in its main components. In a Kalman filter, these include a matrix that describes how the autonomous system's pose changes over time if it is not deliberately altered by a control input or affected by 'noise'. (For example, a UGV sitting on a flat, stable surface won't go anywhere, but a helicopter might be blown by the wind.)
A second matrix – the motion model – describes how control inputs change the system state, while a third – the observation model – describes how to map the state to an observation, or predict an observation from a known state. Also included are random variables representing likely process and measurement errors, or noise. Crucially, Kalman filters assume that the measurement errors they must account for, and the distribution of that noise, are Gaussian.

[Image: High-resolution imaging sonars enable underwater vehicles to build very detailed 3D maps of their surroundings (Courtesy of Teledyne BlueView)]

Dec 2015/Jan 2016 | Unmanned Systems Technology
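The two directions of Bayesian reasoning described earlier — predicting an observation from a 'known' state, and inferring a state from an observation — can be sketched in a few lines. This is a minimal 1-D illustration with hypothetical numbers (the landmark position and function names are not from the article):

```python
# A landmark ('that tree') at a known map position, and a vehicle whose
# position we reason about, all along a single axis in metres.

LANDMARK = 10.0  # known map position of the tree

def predict_observation(vehicle_pos):
    """'If I am here, I should see the tree this far that way.'"""
    return LANDMARK - vehicle_pos

def infer_state(measured_range):
    """'If I can see the tree that far that way, I must be here.'"""
    return LANDMARK - measured_range

# Forward prediction: standing at 0 m, the tree should appear 10 m away.
print(predict_observation(0.0))   # 10.0
# Inverse inference: seeing the tree 10 m away places the vehicle at 0 m.
print(infer_state(10.0))          # 0.0
```

In a real filter both directions carry uncertainty, so the inference updates a probability distribution over the vehicle's position rather than returning a single number.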
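The components named above — the state-transition matrix, the motion (control) model, the observation model, and the Gaussian process and measurement noise — can be sketched as a minimal linear Kalman filter. This is an illustrative 1-D position/velocity example with made-up matrices and noise values, not the implementation of any system described in the article:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: pose evolution with no input
B = np.array([[0.5 * dt**2], [dt]])     # motion model: effect of an acceleration command
H = np.array([[1.0, 0.0]])              # observation model: we measure position only
Q = np.eye(2) * 0.01                    # process noise covariance (assumed Gaussian)
R = np.array([[0.25]])                  # measurement noise covariance (assumed Gaussian)

def predict(x, P, u):
    x = F @ x + B @ u                   # propagate the state with the control input
    P = F @ P @ F.T + Q                 # uncertainty grows during prediction
    return x, P

def update(x, P, z):
    y = z - H @ x                       # innovation: measured minus predicted observation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain: how much to trust the measurement
    x = x + K @ y                       # correct the state estimate
    P = (np.eye(2) - K @ H) @ P         # uncertainty shrinks after the update
    return x, P

x = np.array([[0.0], [1.0]])            # start at 0 m, moving at 1 m/s
P = np.eye(2)                           # initial uncertainty
x, P = predict(x, P, np.array([[0.0]])) # no control input this step
x, P = update(x, P, np.array([[1.1]]))  # noisy position measurement
```

After the update, the position estimate sits between the prediction (1.0 m) and the measurement (1.1 m), weighted by their relative uncertainties, and the position variance in P is smaller than it was after prediction — the 'improves and never diverges' behaviour the quote describes, in miniature.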
