Unmanned Systems Technology 018 | CES show report | ASV Global C-Cat 3 USV | Test centres | UUVs insight | Limbach L 275 EF | Lidar systems | Heliceo DroneBox | Composites

Focus | Lidar

This reduces the Lidar data produced by the sensor to around 5% of that for a traditional point cloud. In a traditional point cloud, providing a longer range requires a trade-off against frame rate, resolution, or both. For example, a 64-line Lidar system can hit an object once per frame (every 100 ms), but the localised scan allows the sensor to selectively revisit any chosen object twice within 30 µs. The mirror can also provide a wide-area scan at the same time to make sure no other objects of interest are missed. This provides an output in the same format used by existing Lidar systems.

The choice of laser can be a key point here. Many commercial laser diodes operate in the 405-850 nm bands with a constant power output for each pulse, but one system uses a laser operating in the 1550 nm band, which is widely used for telecoms fibre networks. Commercial fibre lasers have an advantage over the diode alternative: the longer the time between pulses, the higher the maximum pulse power can be. As the Lidar scan is selective, and therefore not uniform, the time between pulses varies, and a longer gap allows a longer range. The software-defined control architecture allows the time between shots to be modified to illuminate difficult targets at longer range with a higher energy per pulse, giving a range of up to 300 m.

Using a camera also provides additional data for the Lidar scan, allowing colour to be overlaid on top of the 3D point cloud. That could be used to identify street signs more accurately, for example, which Lidar alone cannot do. The Lidar and camera data is combined with an image-identification neural network linked to software-definable hardware, enabling on-the-fly trade-offs between frame rate, range and resolution for each scene. There are distinct modes for different applications, such as city driving versus highway driving, allowing the sensor to emphasise range over resolution on the highway, with reduced range for city driving.
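The relationship between inter-pulse gap, pulse energy and range described above can be sketched as a simple model. This is purely illustrative, not vendor code: the energy-storage ceiling, pump rate and reference range are assumed values chosen only to match the article's headline figure of roughly 300 m.

```python
# Illustrative model (assumed numbers, not vendor data): a fibre laser
# stores pump energy between pulses, so a longer inter-pulse gap allows
# a higher-energy shot, which in turn extends the detection range.
E_MAX_UJ = 9.0         # assumed energy-storage ceiling, microjoules
PUMP_UJ_PER_US = 0.5   # assumed pump rate, microjoules per microsecond
REF_RANGE_M = 100.0    # assumed range at 1 uJ pulse energy

def pulse_energy_uj(gap_us: float) -> float:
    """Energy available after waiting gap_us since the last pulse."""
    return min(E_MAX_UJ, PUMP_UJ_PER_US * gap_us)

def max_range_m(energy_uj: float) -> float:
    """Range grows roughly with the square root of pulse energy
    (two-way 1/R^2 signal loss against an extended target)."""
    return REF_RANGE_M * (energy_uj ** 0.5)

# A longer gap between shots buys a longer reach on difficult targets.
for gap in (2.0, 8.0, 32.0):
    e = pulse_energy_uj(gap)
    print(f"gap {gap:5.1f} us -> {e:3.1f} uJ -> ~{max_range_m(e):5.0f} m")
```

With these assumed constants, a 32 µs gap saturates the stored energy and reaches the article's quoted 300 m figure, while shorter gaps fall back to shorter ranges.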
This is implemented by look-up tables of configurable parameters in a field programmable gate array.

Arrays

One supplier uses a patented, high-density array of vertical-cavity surface-emitting lasers, or VCSELs, that operate at 940 nm to provide the illumination for a wide range of sensor architectures, from time-of-flight and Flash to structured-light applications. Structured light uses a scanning beam with a particular structure to improve the detection of objects.

The VCSELs are built in a silicon chip process, with the lasers grown vertically so that the processing can sit on the opposite side of the wafer. Silicon lenses are grown over the top of them to reduce the cost of the optics and increase reliability. That allows a 'flip-chip' architecture, where the control of the pixels is on the opposite side of the silicon chip from the laser output, enabling each VCSEL, or group of VCSELs, to be controlled independently. That in turn means the strength and phase of each group can be used to steer the beam and, depending on the field of view, no external lens is needed. A sensor with an array of 256 channels has been demonstrated using this approach.

3D Flash

3D Flash Lidar sensors operate differently from scanning systems, and are more similar to 2D digital cameras. They use a single burst of laser light and an array of highly sensitive pixel sensors tuned to

Velodyne's Velarray Lidar sensor uses proprietary ASICs and a micro-mirror for a sensor that can be embedded into the front, sides and corners of vehicles (Courtesy of Velodyne)
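The per-mode parameter tables described earlier, which the FPGA uses to trade frame rate, range and resolution against each other, can be sketched as follows. The mode names, field names and all numeric values are assumptions for illustration only, not the supplier's actual configuration.

```python
# Hypothetical sketch of the per-mode parameter tables the article
# describes being held in an FPGA. All values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScanParams:
    frame_rate_hz: int   # frames per second
    max_range_m: int     # longest range the pulse budget targets
    lines: int           # vertical resolution (scan lines)

# Assumed trade-offs: highway mode emphasises range over resolution,
# city mode does the reverse.
MODE_TABLE = {
    "city":    ScanParams(frame_rate_hz=20, max_range_m=100, lines=128),
    "highway": ScanParams(frame_rate_hz=10, max_range_m=300, lines=64),
}

def configure(mode: str) -> ScanParams:
    """Look up the parameter set for a driving mode, as a hardware
    look-up table would."""
    return MODE_TABLE[mode]

print(configure("highway"))  # range emphasised over resolution
```

Holding the parameters in a table rather than computing them on the fly matches the FPGA approach the article describes: switching modes is just selecting a different row.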
