Unmanned Systems Technology 018 | CES show report | ASV Global C-Cat 3 USV | Test centres | UUVs insight | Limbach L 275 EF | Lidar systems | Heliceo DroneBox | Composites

the frequency of the laser, similar to a 2D digital camera sensor but with the added capability of sensing 3D depth and intensity.

Each pixel records the time a laser pulse takes to travel to the scene and bounce back to the camera. The reflected photons are collected by the array of pixel sensors, and each pixel samples the incoming photons to provide 3D depth and 2D location data as well as the strength of the reflection. Typically 20 or 44 analogue samples are captured for each pixel per laser pulse, allowing accurate pulse profiling, and the 16,384 data points per frame allow high-rate dynamic scene capture.

The technology has a number of advantages over conventional single-pixel point-scanning sensors, as it captures a full frame in a single flash. The design is also more rugged, as it has no moving mirrors, and the data can be combined with that from 2D camera sensors, including infrared sensors.

Another major advantage is that it allows 3D movies to be acquired at the laser's pulse repetition frequency, enabling real-time machine vision. The high frame rates mean maps can be obtained more rapidly than with point-scan technology, cutting the amount of flight time needed to scan and capture an area. These single-pulse Flash 3D images are also generally immune to platform motion, platform vibration and object motion, owing to the speed-of-light capture of each data frame.

Back-end processing

Some of the complex array processing is common to micro-mirror, laser array and 3D Flash architectures, but providing back-end processing that can handle them all is a major challenge. It can be done, though, by using time-of-flight measurements with pulses from an infrared laser to provide continuous, rapid and accurate detection and ranging, covering narrow as well as wide beams. The back-end processor can be integrated into modules that use different laser sources, receivers and optics but still be driven by the same processor system.
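The per-pixel time-of-flight measurement described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline: it assumes a hypothetical 128 x 128 pixel flash frame (giving the 16,384 data points per frame mentioned earlier) with 44 analogue samples per pixel, takes the strongest sample as the return pulse, and converts its round-trip time to range.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def flash_frame_to_depth(samples, sample_period_s):
    """Convert one flash-lidar frame to per-pixel depth and intensity.

    samples: (128, 128, n_samples) array of analogue samples per pixel
    sample_period_s: time between successive samples (seconds)
    Returns (depth_m, intensity), each a (128, 128) array.
    """
    peak_idx = samples.argmax(axis=-1)          # time bin of the strongest return
    intensity = samples.max(axis=-1)            # strength of the reflection
    round_trip_t = peak_idx * sample_period_s   # time of flight, out and back
    depth_m = C * round_trip_t / 2.0            # halve for the one-way distance
    return depth_m, intensity

# Hypothetical frame: 128 x 128 pixels, 44 samples per pulse, 1 ns sampling
rng = np.random.default_rng(0)
frame = rng.random((128, 128, 44))
depth, strength = flash_frame_to_depth(frame, sample_period_s=1e-9)
```

In practice, pulse profiling would fit the full sampled waveform rather than pick a single peak bin, which is why tens of samples per pulse are captured.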
This allows system designers to use modules with different sensors for different applications. For example, a scanning beam can provide long-range detection of objects and a flash array short-range detection of pedestrians, while still using the same configuration software and back-end processing, dramatically simplifying the development process. This approach also allows the sensing elements to be easily and rapidly integrated into existing systems, such as intelligent LED headlights, without having to add a new module.

AEye's Idar sensor is mounted on a vehicle for testing. It combines a micro-machined mirror with a low-light camera and embedded AI with software-definable hardware (Courtesy of AEye)
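The shared back end can be pictured as a common interface that every front-end module, whether scanning or flash, feeds with raw time-of-flight returns. The class and method names below are illustrative assumptions, not a real product API; the point is that one range-computation path serves both module types.

```python
from dataclasses import dataclass
from typing import Protocol

C = 299_792_458.0  # speed of light, m/s

@dataclass
class Detection:
    range_m: float
    azimuth_deg: float
    elevation_deg: float
    intensity: float

class LidarModule(Protocol):
    """Any front end need only yield (tof_s, az_deg, el_deg, intensity) tuples."""
    def read_returns(self) -> list[tuple[float, float, float, float]]:
        ...

def process(module: LidarModule) -> list[Detection]:
    """Shared back end: the same time-of-flight ranging serves every module."""
    return [
        Detection(C * tof / 2.0, az, el, inten)
        for tof, az, el, inten in module.read_returns()
    ]

class FlashModule:
    """Hypothetical short-range flash-array front end."""
    def read_returns(self):
        return [(1e-7, 0.0, 0.0, 0.8)]   # one return at roughly 15 m

class ScanModule:
    """Hypothetical long-range scanning front end."""
    def read_returns(self):
        return [(1e-6, 12.5, -1.0, 0.3)]  # one return at roughly 150 m

detections = [d for m in (FlashModule(), ScanModule()) for d in process(m)]
```

Swapping in a different laser source, receiver or optic only changes the module class; the back-end `process` function and everything downstream of it stay the same.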
