Focus | Lidar sensors

…than milliseconds. They are controlled by typical liquid crystal voltages with commercial off-the-shelf LCD drivers. This also defines the pixel size, which allows a field of view of up to 170º. The size of the array, typically 1 sq in, aligns with the optical aperture and lens of the Lidar to determine the amount of light energy that emerges from the sensor. The fact that the metamaterial is reflective rather than transmissive allows all the laser light to be channelled out of the aperture. The higher the energy, the longer the range, and this approach can support a range of up to 300 m.

A reference design for a Lidar sensor with a VCSEL and a ToF image sensor uses the metamaterial beam steering to provide a field of view of 120 x 90º with a range of 10 m, in a sensor measuring 1 cm³, for applications such as UGVs and mobile robotics that need close-range sensing. The target price for this Lidar sensor in volume production is $10, allowing multiple sensors to be placed around an uncrewed system.

However, the field of view is software-programmable, allowing a range of more than 200 m with a 20-30º field of view, or a 50-100 m range with a 120º field of view in a larger unit that allows for larger optics. This software-defined capability lets the Lidar scan the angles of interest more frequently: the beam can hop around and focus where necessary. The processing is via an API that defines the angle and frame rate, while a scan controller takes care of all the details.

The 905, 940 and 1550 nm lasers need slightly different structures in the metamaterial to tune the laser signal, as well as different types of sensor.
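The article does not name the scan-control API, so the following Python sketch is purely illustrative: the ScanRegion and ScanController names, their fields and the scheduling policy are all assumptions. It shows how a software-defined field of view might expose angle and frame rate per region, so that a narrow long-range corridor is revisited more often than a wide low-rate sweep.

```python
# Hypothetical sketch of a software-defined scan configuration, based on the
# description of an API that sets the scan angle and frame rate. The class
# names, fields and scheduling policy are assumptions, not a real SDK.
from dataclasses import dataclass

@dataclass
class ScanRegion:
    """One software-defined region of interest within the field of view."""
    azimuth_deg: tuple[float, float]    # horizontal angular extent
    elevation_deg: tuple[float, float]  # vertical angular extent
    frame_rate_hz: float                # how often this region is revisited

class ScanController:
    """Toy stand-in for the scan controller that 'takes care of the details'."""
    def __init__(self) -> None:
        self.regions: list[ScanRegion] = []

    def add_region(self, region: ScanRegion) -> None:
        self.regions.append(region)

    def schedule(self) -> list[ScanRegion]:
        # Revisit high-rate regions first, so the beam 'hops around' to the
        # angles of interest more frequently than the background sweep.
        return sorted(self.regions, key=lambda r: -r.frame_rate_hz)

# Mirror the trade-off in the text: a narrow 25-degree long-range corridor
# scanned often, plus a wide 120-degree sweep at a lower rate.
ctrl = ScanController()
ctrl.add_region(ScanRegion((-12.5, 12.5), (-5.0, 5.0), frame_rate_hz=30.0))
ctrl.add_region(ScanRegion((-60.0, 60.0), (-45.0, 45.0), frame_rate_hz=5.0))
for region in ctrl.schedule():
    print(region)
```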
Combining Lidar sensors

One growing trend is to combine different Lidar sensors in one system. This can provide different levels of performance, for example with sensors optimised for long range and a narrow field of view combined with others offering a wide field of view. It can also be used to avoid problems in the supply chain, as being able to swap easily between different Lidar sensors in the same system can avoid production problems if some are in short supply.

The Lidar sensors can be combined with a pre-processing engine that connects to the output of each sensor to analyse the point cloud it produces. The engine provides a standard data format for the central processor, allowing multiple sensors to work together, or reducing the bandwidth of the data feed. The bandwidth of a wireless network link from a UAV can be reduced by a factor of 50 just by sending the relevant data rather than the entire point cloud, as shown in the sketch below. All the data is stored on the Lidar sensors in the UAV's payload, but providing real-time data allows an operator to assess the quality of the data capture as it comes in and adjust the flight path of the UAV if necessary. These decisions could also be made autonomously by the UAV's central processor.

Determining the relevant data and ensuring a common data format requires sophisticated algorithms. These have been run on ARM processor cores in a dedicated hardware unit, but are now being integrated into other hardware. This requires software virtualisation technology to provide real-time processing on ARM, x86 and RISC-V processors.
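The article does not describe the actual pre-processing algorithms, so the following is a minimal sketch of one common way to send only the relevant data: crop the cloud to a region of interest, then thin it by voxel downsampling. The (N, 3) NumPy array layout, the crop bounds and the voxel size are all assumptions for illustration.

```python
# Minimal sketch of point cloud pre-processing to cut the data rate. The crop
# box and voxel size are invented parameters; real engines apply far richer
# relevance tests before sending data over the wireless link.
import numpy as np

def preprocess(cloud: np.ndarray, bounds: float = 50.0, voxel: float = 0.2) -> np.ndarray:
    """Keep points inside a region of interest, then thin to one per voxel."""
    # 1. Region of interest: discard points outside a +/- bounds metre box.
    roi = cloud[np.all(np.abs(cloud) <= bounds, axis=1)]
    # 2. Voxel downsampling: keep one representative point per voxel cell.
    keys = np.floor(roi / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return roi[idx]

rng = np.random.default_rng(0)
cloud = rng.uniform(-100, 100, size=(200_000, 3))  # synthetic 200k-point scan
reduced = preprocess(cloud)
print(f"{cloud.shape[0]} points in, {reduced.shape[0]} out "
      f"({cloud.shape[0] / reduced.shape[0]:.0f}x reduction)")
```

The achievable reduction depends entirely on how aggressively the region of interest and voxel size are chosen; the factor of 50 quoted above comes from selecting only the data relevant to the task.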
Combining mobile and static Lidar

The pre-processing capability and common data format also allow data from mobile sensors to be combined with that from static Lidar sensors. Using a synchronisation clock for all the data from multiple Lidar sensors is considered too complex with this approach, so the physical space is used to coordinate the data instead. The feeds from each sensor are analysed to identify areas that overlap and, along with position data, these allow a broad point cloud to be constructed.

(Image: Using a pre-processing engine allows data from different types of Lidar sensor to be combined. Courtesy of Outsight)
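As a rough illustration of coordinating sensors through physical space rather than a shared clock, the sketch below maps each sensor's points into a common world frame using its known pose, checks that the two views cover overlapping space, and merges them. The poses, the coarse grid-cell overlap test and the threshold are assumptions for the example, not the actual method.

```python
# Illustrative sketch of space-based (rather than clock-based) coordination
# of two Lidar feeds. Poses and the overlap test are invented for the example.
import numpy as np

def to_world(points: np.ndarray, rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Transform (N, 3) sensor-frame points into the shared world frame."""
    return points @ rotation.T + translation

def overlap_fraction(a: np.ndarray, b: np.ndarray, cell: float = 1.0) -> float:
    """Coarse overlap check: share of a's occupied grid cells also seen by b."""
    cells_a = {tuple(c) for c in np.floor(a / cell).astype(int)}
    cells_b = {tuple(c) for c in np.floor(b / cell).astype(int)}
    return len(cells_a & cells_b) / max(len(cells_a), 1)

# Two sensors observing the same scene from different, known positions.
rng = np.random.default_rng(1)
scene = rng.uniform(0, 20, size=(5_000, 3))        # shared world geometry
identity = np.eye(3)
static_pose = np.array([0.0, 0.0, 2.0])            # static roadside sensor
uav_pose = np.array([5.0, 5.0, 30.0])              # mobile sensor on a UAV
static_view = to_world(scene - static_pose, identity, static_pose)
uav_view = to_world(scene - uav_pose, identity, uav_pose)

if overlap_fraction(static_view, uav_view) > 0.5:  # same physical space?
    broad_cloud = np.vstack([static_view, uav_view])
    print(f"Merged cloud: {broad_cloud.shape[0]} points")
```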
Conclusion

Lidar sensors continue to balance the demands of range, size and power consumption. Designs are reducing the power consumption of the modulator and increasing its reliability, whether through vibration control or the use of metamaterials, and are achieving ranges of up to 300 m for driverless cars. New techniques from the medical world are also improving sensor performance, while new manufacturing techniques are helping to boost reliability and reduce costs.

With the wide range of different Lidar technologies being developed for uncrewed systems, pre-processing algorithms can combine the data from different sensors, giving system designers the ability to use the best sensor for a particular application.

Acknowledgements

The author would like to thank Raul Bravo at Outsight, Gleb Akselrod at Lumotive and Han Woong Yoo at TU Wien for their help with researching this article.