
Digest | Vtrus Autonomy Brain Implant

For its inputs, four pairs of global shutter colour and depth-sensing cameras are installed on the front, rear, top and bottom of the ABI, each providing an obstacle detection range of 0.6 to 5 m. Visual measurements are generally accurate to 1-3 mm on objects 1 m away.

The ABI’s SLAM

At the time of writing, the team at Vtrus had spent about two-and-a-half years developing advanced computer vision localisation systems, culminating in the SLAM system used for the ABI’s perception.

While interfacing with the onboard cameras, the ABI’s SLAM software identifies features that fall into their fields of view, frame by frame, then tracks their movement relative to the vehicle by generating small 3D maps that become bigger and more detailed the more motion there is.

There are several steps in this map generation. First, to interpolate changes in pose between camera frames, an onboard IMU outputs acceleration and orientation data at 100 Hz. This is fused with the depth and colour information generated by the cameras, at 30 Hz, for final outputs of the ABI’s velocity, position and orientation (also at 30 Hz).

CEO Renato Salas-Moreno says, “What is also critical to all this data fusion is proper time synchronisation, to orders of milliseconds between each camera exposure and IMU reading.

“To achieve that, image signal processing was developed using custom hardware as well as software modules, to ensure the timestamps of images and IMU readings are both measured down to the microsecond.”

After that, the perception and SLAM systems analyse and map the local space around the ABI head in 3D. “We segment all volumes into small virtual cubes, each roughly 10 cm²,” Sanchez explains. “Then we classify each cube in the space around the robot as either occupied or unoccupied, giving the vehicle critical information about the obstacle landscape.”

The resulting 3D map is fused with the velocity and timestamping data, then a collision-free path is plotted using a technique called MPPI (Model Predictive Path Integral) control, developed by Vtrus’ lead robotic scientist Grady Williams. This simulates thousands of feasible paths, then evaluates them according to criteria such as the distance to the goal and proximity to obstacles. The information resulting from this simulation (and the subsequent evaluation) is used to compute an optimal obstacle-free path. The process is repeated at a rate of about 50 Hz, enabling constant updates as obstacles change their positions relative to the vehicle.

The evaluation criteria are encoded in a series of ‘cost functions’, which give scores or penalties to paths according to factors such as the probability of collisions occurring, the presence of unexplored spaces and the incidence of traffic.

“What is quite interesting about using cost functions is that it offers future expandability to include custom costs that might be beneficial for particular customer applications,” Williams notes. “For example, if a particular UGV consumed a lot of energy while turning or pivoting, to the point that it was a concern, an end-user could insert a harsh cost for paths that involve pivoting,” he says.
“That would force the MPPI software to find all the other optimal paths that limit turning and pivoting, to save battery power.”

Also running in the background…

Caption: Four pairs of cameras enable the colour and depth of the ABI’s surroundings to be measured and recorded

Caption: Colour and depth information are timestamped to ensure accurate 3D and velocity measurements
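The article describes the SLAM front end identifying visual features frame by frame and tracking their motion relative to the vehicle. Vtrus’ own pipeline is not published, but the general idea can be illustrated with a minimal sketch using OpenCV’s corner detector and pyramidal Lucas-Kanade optical flow; the function name and every parameter value below are illustrative assumptions, not the ABI’s actual front end or settings.

```python
import cv2

def track_features(prev_gray, gray, prev_pts=None):
    """Detect corners in the previous frame and track them into the current
    frame with pyramidal Lucas-Kanade optical flow (a generic illustration,
    not Vtrus' proprietary SLAM front end)."""
    if prev_pts is None or len(prev_pts) < 50:
        # Re-detect features whenever the tracked set gets too thin
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=7)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    ok = status.flatten() == 1
    return prev_pts[ok], next_pts[ok]   # matched feature pairs across the two frames
```

Matched feature pairs like these, combined with the cameras’ depth measurements, are what let a SLAM system estimate how far the vehicle has moved between frames and grow its local 3D map.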
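Salas-Moreno stresses that both camera exposures and IMU readings carry timestamps measured down to the microsecond. The custom hardware and software that achieve this are not described in detail, so the sketch below only shows, under assumed data structures, how precisely timestamped 100 Hz IMU samples might be grouped against 30 Hz camera frames before fusion; `ImuSample` and `imu_window_for_frame` are hypothetical names.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class ImuSample:
    t_us: int          # timestamp in microseconds
    accel: tuple       # (ax, ay, az) acceleration, m/s^2
    gyro: tuple        # (gx, gy, gz) angular rate, rad/s

def imu_window_for_frame(imu_samples, prev_frame_t_us, frame_t_us):
    """Return the IMU samples timestamped between two camera exposures.

    With a 100 Hz IMU and 30 Hz frames there are typically three or four
    samples per window; these are integrated to predict the pose change
    between exposures, which the visual measurements then refine.
    """
    times = [s.t_us for s in imu_samples]          # assumed sorted by time
    lo = bisect_left(times, prev_frame_t_us)
    hi = bisect_left(times, frame_t_us)
    return imu_samples[lo:hi]

# Hypothetical usage: one second of 100 Hz IMU data, frames ~33.3 ms apart
imu = [ImuSample(t_us=k * 10_000, accel=(0.0, 0.0, 9.81), gyro=(0.0, 0.0, 0.01))
       for k in range(100)]
window = imu_window_for_frame(imu, prev_frame_t_us=33_333, frame_t_us=66_667)
print(len(window))   # -> 3 samples fall between these two exposures
```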
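Sanchez describes segmenting the space around the robot into roughly 10 cm cubes that are classified as occupied or unoccupied. A minimal sketch of that kind of voxelisation, assuming depth points already expressed in metres in the robot frame, could look like this; the function name and the 0.1 m cell size are illustrative, not Vtrus’ implementation.

```python
def occupied_voxels(points_m, voxel_size_m=0.1):
    """Quantise 3D points (metres, robot frame) into ~10 cm voxels.

    Returns the set of integer (i, j, k) voxel indices that contain at least
    one depth measurement; every other voxel in the local map is treated as
    unoccupied (or unknown) for planning purposes.
    """
    occupied = set()
    for x, y, z in points_m:
        occupied.add((int(x // voxel_size_m),
                      int(y // voxel_size_m),
                      int(z // voxel_size_m)))
    return occupied

# Hypothetical usage with a few depth points
pts = [(1.23, -0.04, 0.52), (1.25, -0.02, 0.55), (3.10, 0.90, 1.40)]
grid = occupied_voxels(pts)
print((12, -1, 5) in grid)   # True: two of the points fall inside that 10 cm cube
```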
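MPPI itself is well documented in the robotics literature: sample many perturbed control sequences, roll each one out through a simple motion model, score every rollout with the cost functions, then take a cost-weighted average to update the plan. The NumPy sketch below shows that core loop under assumed dynamics and cost interfaces; it is a generic illustration of the technique, not Vtrus’ planner, and `mppi_step`, its parameters and the toy 2D example are all assumptions.

```python
import numpy as np

def mppi_step(state, nominal_controls, dynamics, cost_fn,
              n_samples=2000, noise_std=0.5, temperature=1.0, seed=0):
    """One MPPI planning cycle: sample perturbed control sequences, roll them
    out through the dynamics model, score every rollout with the cost
    function, then blend them into an updated plan (lower cost = more weight).
    """
    rng = np.random.default_rng(seed)
    horizon, control_dim = nominal_controls.shape
    noise = rng.normal(0.0, noise_std, size=(n_samples, horizon, control_dim))
    costs = np.zeros(n_samples)

    for k in range(n_samples):
        s = np.asarray(state, dtype=float)
        ctrls = nominal_controls + noise[k]
        traj = [s]
        for t in range(horizon):
            s = dynamics(s, ctrls[t])
            traj.append(s)
        costs[k] = cost_fn(np.array(traj), ctrls)

    # Path-integral weighting: exponentially favour the low-cost rollouts
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    return nominal_controls + np.einsum('k,kij->ij', weights, noise)

# Hypothetical usage: a 2D point robot steering towards a goal past one obstacle
goal = np.array([3.0, 0.0])
obstacle = np.array([1.5, 0.1])

def dynamics(s, u, dt=0.02):                     # s = [x, y], u = [vx, vy]
    return s + dt * np.clip(u, -1.0, 1.0)

def cost_fn(traj, ctrls):
    goal_cost = np.linalg.norm(traj[:, :2] - goal, axis=1).sum()
    collision_cost = 100.0 * (np.linalg.norm(traj[:, :2] - obstacle, axis=1) < 0.3).sum()
    return goal_cost + collision_cost

plan = np.zeros((30, 2))                         # previous nominal plan
plan = mppi_step([0.0, 0.0], plan, dynamics, cost_fn)
# Execute plan[0], shift the plan forward one step, and re-run at roughly 50 Hz.
```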
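Williams’ point about expandability is that new cost terms can be bolted on without touching the planner itself. Continuing the sketch above, a hypothetical pivot-penalty term for a UGV that burns energy while turning might be composed with the existing costs as follows; all names here are assumptions, not Vtrus’ API.

```python
import numpy as np

def composite_cost(*terms):
    """Combine independently defined cost terms into one MPPI cost function."""
    def cost_fn(traj, ctrls):
        return sum(term(traj, ctrls) for term in terms)
    return cost_fn

def pivot_penalty(weight=50.0, yaw_rate_index=1):
    """Hypothetical custom term: penalise large changes in the commanded yaw
    rate, discouraging paths that rely on pivoting in place."""
    def term(traj, ctrls):
        return weight * np.abs(np.diff(ctrls[:, yaw_rate_index])).sum()
    return term

def goal_distance(goal, weight=1.0):
    """Baseline term: cumulative distance of the rollout from the goal."""
    def term(traj, ctrls):
        return weight * np.linalg.norm(traj[:, :2] - goal, axis=1).sum()
    return term

# cost_fn = composite_cost(goal_distance(np.array([3.0, 0.0])), pivot_penalty())
# plan = mppi_step(state, plan, dynamics, cost_fn)   # as in the sketch above
```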
