Unmanned Systems Technology 021 | Robot Aviation FX450 l Imaging Sensors focus l UAVs Insight l Liquid-Piston X-Mini l Riptide l Eurosatory 2018 show report l Zipline l Electric Motors focus l ASTS show report
produce more data, leading to diminishing returns as more circuitry is needed to handle that extra data, using more power and creating more complexity.

Most automotive designs also use a ‘rolling shutter’, where the current from each pixel is read out sequentially. Because the read speed can only be increased to around 30 frames/second (fps), the difference in read time between the first pixel and the last produces the ‘rolling’ effect: a moving object in the scene may have shifted some distance between the moment the first pixel is read out and the moment the last one is. The resulting warping of the image can be compensated for by the image processing subsystem, but that means the subsystem has to be tuned to a particular type of sensor, and the distortion has a major impact on the machine learning algorithms in autonomous systems.

In contrast, a ‘global shutter’ reads the charge from all the pixels at the same time, which is better suited to the video payload applications in UAVs. It requires more sophisticated processing on the sensor chip though, and uses more power. The current debate is whether global shutter technology will have to be used for fully autonomous (Level 5) systems, or whether the algorithms can work effectively with rolling shutter sensors.

Another theme is the need for vision at night, either through more sensitive CMOS sensors or thermal sensors. Tweaking the pixel structure of a CMOS sensor and coupling that with improved image processing algorithms means the camera’s parameters can be set differently at night to account for the lower light levels. Thermal sensors using microbolometers to measure thermal energy are also being adopted. These are currently separate modules but are expected to be fitted alongside the electro-optic (EO) CMOS sensors; part of the challenge there is the heat generated by the EO cameras and the image subsystem.
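The rolling shutter geometry described above can be sketched numerically. The function below is an illustrative back-of-the-envelope model, not the transfer function of any particular sensor; the row count and object speed are assumed values chosen for the example.

```python
# Sketch: estimate the skew a rolling shutter introduces for a moving
# object. Illustrative assumptions only; no specific sensor is modelled.

def rolling_shutter_skew(frame_rate_fps: float,
                         num_rows: int,
                         object_speed_px_per_s: float) -> float:
    """Horizontal displacement (pixels) between the first and last row
    of an object that spans the full frame height."""
    # Rows are read out sequentially, so the last row is sampled almost
    # one full frame period after the first.
    frame_period_s = 1.0 / frame_rate_fps
    row_readout_s = frame_period_s / num_rows
    readout_span_s = row_readout_s * (num_rows - 1)
    return object_speed_px_per_s * readout_span_s

# At the 30 fps readout ceiling mentioned above, with an assumed
# 1024-row sensor and an object crossing the frame at 600 px/s:
skew = rolling_shutter_skew(30.0, 1024, 600.0)
print(f"apparent skew: {skew:.1f} px")  # roughly 20 px of slant
```

With a global shutter all rows are sampled together, so the same calculation gives zero skew; this is the distortion the image processing subsystem otherwise has to model and remove.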
One module that is already in production for driver assistance combines a windscreen-mounted camera with radar; it was placed there because the processing subsystem can generate 3-4 W of heat.

One way to address the heat issue is through the control circuits on the sensor. A 4 MP sensor, for example, will typically consume 120 mW at 30 fps. Power management in the additional circuitry around the sensor can cut this to 8 mW at 1 fps in a monitoring mode, in which the sensor watches the scene and ramps up to 30 fps only when it sees a relevant object. This, however, needs tight coupling with the image identification and tracking algorithms in the processing subsystem.

A bigger problem for the car companies though is the wiring harness. It is the first part of the vehicle to be laid out, and car makers are currently working on the harness designs for 2025 vehicles. That will force the sensor and subsystem makers to adapt to the harness.

The biggest challenge for an image sensor in a car is reversing: when the car is close to a wall, the LED rear lamps confuse the sensor. Optimising the pixel array is also not easy, as LED headlights and LED road signs operate at different flicker frequencies. The sensor subsystem has to work with cars from all the suppliers, and this is still a challenge for designers.

One sensor maker is producing an HDR global shutter CMOS sensor with a new light-flicker mitigation mode and a 12-bit digital output specifically for automotive designs. Its 1280 x 1024 resolution comes from pixels on a 6.8 µm pitch, with the analogue-to-digital converter integrated on the chip. A dynamic range of 140 dB comes from a proprietary pixel design that operates in the voltage domain rather than the current domain used by traditional sensors. It can be used to design and build a variety of imaging sensors operating at various wavelengths: visible, IR and UV.
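The two-mode power scheme described above can be sketched as a simple duty-cycle calculation. The 120 mW at 30 fps and 8 mW at 1 fps figures come from the text; the mode-switching logic and the detection signal are placeholders, since the real decision comes from the image identification and tracking algorithms in the processing subsystem.

```python
# Sketch of the monitoring/tracking power scheme: the sensor idles at a
# low frame rate and ramps up when a relevant object appears.
# Power figures are from the article; the control logic is illustrative.

MONITOR = {"fps": 1,  "power_mw": 8}    # low-rate scene monitoring
TRACK   = {"fps": 30, "power_mw": 120}  # full-rate capture

def average_power_mw(track_fraction: float) -> float:
    """Mean sensor power for an assumed fraction of time spent tracking."""
    return (track_fraction * TRACK["power_mw"]
            + (1.0 - track_fraction) * MONITOR["power_mw"])

class SensorController:
    """Minimal mode switch; `object_relevant` stands in for the output
    of the image identification algorithms."""
    def __init__(self):
        self.mode = MONITOR

    def update(self, object_relevant: bool) -> dict:
        self.mode = TRACK if object_relevant else MONITOR
        return self.mode

# If something of interest is in view 10% of the time, the sensor
# averages about 19.2 mW instead of a constant 120 mW.
print(f"{average_power_mw(0.1):.1f} mW")
```

The tight coupling the article mentions shows up here as latency: the sensor only sees a relevant object at 1 fps while monitoring, so the detection algorithms must tolerate that slow first glimpse before the ramp-up.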
A logarithmic sensor allows a single exposure to provide the 140 dB without saturation. This gives a robust and modular design, as image sensing and processing can be completely separated. Logarithmic sensing is contrast-based, so it produces a near-constant contrast whatever the level and variation in illumination. That enables tracking algorithms to run reliably through illumination changes, for example in and out of tunnels, and with or without direct sunlight, and it helps reduce the number of road trials needed to capture all the possible conditions that might confuse the sensor and the processing. Every pixel of the sensor operates as a single solar cell, with the voltage generated by the photons sensed by a low-noise readout circuit.

A three-layer stacked sensor combines a backside-illuminated pixel array with DRAM and logic (Courtesy of Sony Semiconductor)

August/September 2018 | Unmanned Systems Technology
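The near-constant contrast property described above follows directly from the mathematics of a logarithmic response: a fixed illumination ratio maps to a fixed output difference, regardless of absolute level. The 140 dB figure is from the text; the transfer function below is an assumed, simplified model with an arbitrary scale, not the proprietary pixel design itself.

```python
import math

# Sketch: why a logarithmic pixel gives near-constant contrast.
# Output is proportional to log(illuminance), so a fixed illumination
# RATIO becomes a fixed output DIFFERENCE at any brightness level.
# Assumed simplified model; arbitrary units.

def log_pixel_response(lux: float, scale: float = 1.0) -> float:
    """Assumed logarithmic pixel transfer function."""
    return scale * math.log10(lux)

# 140 dB of dynamic range corresponds to a 10**7 : 1 illumination ratio,
# since dynamic range in dB = 20 * log10(ratio):
ratio = 10 ** (140 / 20)
print(f"illumination ratio: {ratio:.0e}")  # 1e+07

# A 2:1 local contrast produces the same output step deep in a tunnel
# as in direct sunlight, which is what keeps tracking stable:
step_dark   = log_pixel_response(2.0) - log_pixel_response(1.0)
step_bright = log_pixel_response(2e6) - log_pixel_response(1e6)
print(abs(step_dark - step_bright) < 1e-9)  # True
```

A conventional linear pixel would need the exposure, or the downstream algorithms, to adapt across that 10^7:1 range; the logarithmic design hands the algorithms the same contrast signal at every level, which is why fewer road trials are needed to cover the corner cases.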