Unmanned Systems Technology | October/November 2021
Platform one | Sensors

Plug-and-play Lidar unit

Outsight has developed a pre-processing system to allow developers of autonomous systems to add any type of Lidar sensor (writes Nick Flaherty).

The Augmented Lidar Box (ALB) is a real-time software engine that overcomes the complexity of using raw 3D data, so any application developer or integrator can use Lidar efficiently in their own solutions without needing to become a 3D Lidar expert.

It is a plug-and-play unit that provides a set of fundamental features required in almost every application, such as localisation and mapping, 3D SLAM, object identification and tracking, and segmentation and classification.

It is based on an ARM processor, and although it uses machine learning it does not rely on training or annotation, nor does it need a machine learning accelerator. The software has several modules for processing the data for functions such as localisation and SLAM, object detection and tracking, and positioning.

The edge computing layer takes care of correctly powering the Lidar. Different Lidar models require different voltages, and these can be set by an API command, through the software tool, or configured by default for certain Lidars.

The way the ALB processes the data from Lidar sensors is based on the concept of a frame, in a similar way to a camera sensor. As the Lidar sensor moves, each frame contains a different perspective on the scene, and different information about it, from the previous one. Processing is typically done at 20 frames per second, and the frame structure is kept for the ALB output even when using Lidars that have no repeatable scan patterns.

The SLAM-on-chip processing layer associates consecutive frames from the Lidar with each other and computes the sensor's orientation. Ego-Motion is the main output of the SLAM feature, and defines the position and orientation of the Lidar relative to the previous frame. As a Lidar typically provides data at a constant frame rate, this can be seen as a velocity vector, essentially the metres travelled per frame interval.

The Ego-Motion output provides the relative position between frames, so the trajectory followed by the Lidar over N frames can be obtained by plotting the N positions from the first selected frame. This is useful, for example, for feeding the output into software that can only use positions as an input. Ego-Motion information is typically used in vehicle control loops, where the relevant information is the instantaneous movement or velocity, but it can also be used for mapping.

From the Ego-Motion feature, the ALB knows how the sensor has moved (in both translation and orientation) from the previous frame. That allows it to accumulate the points from frame to frame into a consistent point cloud over time. A super-resolution frame is then created, which includes all the points from the frame where the feature was triggered up to the current one. This increased resolution can reveal details of the scene that could be missed when looking at it frame by frame.

The localisation feature provides the position and orientation of the Lidar relative to the origin of a selected reference coordinate system: not only the x, y and z position but also the orientation about each axis. This is typically used in mobile robotics applications, where an absolute orientation is of interest.
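The trajectory and super-resolution behaviour described above comes down to chaining the per-frame relative transforms and re-expressing each frame's points in a common coordinate system. The short Python sketch below illustrates that idea, assuming the Ego-Motion output can be read as a rotation matrix and translation vector per frame; the function names, data layout and numbers are illustrative, not the ALB's actual API or output format.

```python
# Minimal sketch: composing per-frame Ego-Motion (relative pose) outputs into an
# absolute trajectory and an accumulated point cloud, as described in the article.
# The data structures and names are illustrative, not Outsight's actual ALB API.
import numpy as np

FRAME_RATE_HZ = 20.0  # the article's typical processing rate

def compose(pose, rel_rotation, rel_translation):
    """Apply one frame's relative motion (rotation + translation) to an absolute pose."""
    R, t = pose
    R_new = R @ rel_rotation               # accumulate orientation
    t_new = t + R @ rel_translation        # relative translation rotated into the start frame
    return R_new, t_new

def trajectory(ego_motions):
    """Return the N absolute positions obtained by chaining N relative Ego-Motion steps."""
    pose = (np.eye(3), np.zeros(3))        # start at the first selected frame
    positions = [pose[1].copy()]
    for rel_R, rel_t in ego_motions:
        pose = compose(pose, rel_R, rel_t)
        positions.append(pose[1].copy())
    return np.array(positions)

def accumulate_points(ego_motions, frames):
    """Build a 'super-resolution' cloud by expressing every frame's points in the
    coordinates of the first frame, using the accumulated Ego-Motion pose."""
    pose = (np.eye(3), np.zeros(3))
    clouds = [np.asarray(frames[0], dtype=float)]
    for (rel_R, rel_t), points in zip(ego_motions, frames[1:]):
        pose = compose(pose, rel_R, rel_t)
        R, t = pose
        clouds.append(np.asarray(points, dtype=float) @ R.T + t)
    return np.vstack(clouds)

# Example: five frames of pure forward motion at 0.1 m per frame
# (2 m/s at 20 frames/s), with no rotation.
steps = [(np.eye(3), np.array([0.1, 0.0, 0.0]))] * 5
print(trajectory(steps))                    # positions along x: 0.0, 0.1, ... 0.5
print(0.1 * FRAME_RATE_HZ, "m/s")           # the 'velocity vector' interpretation

frames = [[[1.0, 0.0, 0.0]]] * 6            # one point per frame, in that frame's local coordinates
print(accumulate_points(steps, frames))     # the same points expressed in the first frame's coordinates
```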
Then there is the relocalisation module. This uses a reference map, a specific point-cloud map that contains only the meaningful information required for relocalisation. In a security mobile robot application, for example, a reference map would cover all the locations the robot might be authorised to go to, and will typically contain fewer than 5% of the points of the equivalent full point-cloud map. Starting from an initial pose, a specific internal algorithm fine-tunes the position and orientation to help with mobile robot localisation and smart mapping.

Object detection and tracking delivers the moving objects detected in the Lidar's field of view and assigns each a persistent ID over multiple frames, for as long as the objects are considered to still be present in the viewed scene.

(Photo: The Thalamus autonomous security robot uses the Outsight Augmented Lidar Box to perform obstacle avoidance with trajectory planning)
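The persistent-ID aspect of the tracking feature described above can be pictured as a simple frame-to-frame association loop. The sketch below is a minimal illustration assuming a greedy nearest-centroid match within a distance gate; the Tracker class, its parameters (gate_m, max_missed) and the matching rule are illustrative assumptions, not Outsight's actual tracking algorithm.

```python
# Minimal sketch of persistent-ID tracking across Lidar frames. All names and the
# matching rule are illustrative assumptions, not the ALB's actual algorithm or API.
import numpy as np

class Tracker:
    def __init__(self, gate_m=1.0, max_missed=5):
        self.gate_m = gate_m          # max distance to associate a detection with a track
        self.max_missed = max_missed  # frames an object may be unseen before its ID is dropped
        self.tracks = {}              # id -> (centroid, missed_count)
        self.next_id = 0

    def update(self, detections):
        """detections: list of 3D centroids for the current frame. Returns {id: centroid}."""
        assigned = {}
        unmatched = list(detections)
        for tid, (centroid, missed) in list(self.tracks.items()):
            if unmatched:
                dists = [np.linalg.norm(np.asarray(d) - centroid) for d in unmatched]
                best = int(np.argmin(dists))
                if dists[best] < self.gate_m:
                    centroid = np.asarray(unmatched.pop(best), dtype=float)
                    self.tracks[tid] = (centroid, 0)
                    assigned[tid] = centroid
                    continue
            # no detection matched: keep the ID alive for a few frames, then drop it
            if missed + 1 > self.max_missed:
                del self.tracks[tid]
            else:
                self.tracks[tid] = (centroid, missed + 1)
        for d in unmatched:           # new objects get new persistent IDs
            centroid = np.asarray(d, dtype=float)
            self.tracks[self.next_id] = (centroid, 0)
            assigned[self.next_id] = centroid
            self.next_id += 1
        return assigned

tracker = Tracker()
print(tracker.update([(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]))  # IDs 0 and 1 created
print(tracker.update([(0.2, 0.0, 0.0), (5.1, 0.0, 0.0)]))  # the same IDs persist across frames
```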