Achieving a higher HDR in a rolling shutter design without saturating the pixels is also a challenge. An HDR of 140 dB can be achieved using multiple exposures of each line, performed in a staged manner with three rolling shutters. A row is first exposed for time T1 and then read out, with its values stored in a T1 delay buffer. Immediately afterwards the row is shuttered and exposed for a second time, T2, then read out, with these values stored in a T2 delay buffer. It is then exposed similarly for T3 and read out. The T1 and T2 values from the buffers and the T3 pixel values are then processed together.

The trade-off for the shutter design is between size, weight and power. Image sensors do not need to operate at a frame rate of 30 or 60 fps, as those rates are determined by the flicker perceived by the human eye. Instead, the rate for reading out the data is determined by the post-processing logic, the other sensors and the algorithms used, as well as the frequency of changes present in the environment being imaged. For example, a 100 Hz event needs a capture rate of more than 100 Hz if it is to be recognised and correctly tracked using this method.

If a radar system is also used to provide long-range object detection, the detection range of the image sensor can be lower. This is then balanced against the sensor's frame rate, as there is more time to identify an object when it is closer. With a rolling shutter this reduces the power requirement, and so reduces the size and weight of the sensor system.

Sensor makers are now also looking at integrating image signal processors (ISPs) into the sensor stack to provide more data processing, rather than sending the raw data stream to an ECU for processing. This would allow just the key points of the image to be sent to the ECU, reducing the amount of data that has to be transferred and the power consumed.
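The staged T1/T2/T3 readout described above can be sketched as a simple exposure-bracketing merge: for each pixel, take the longest exposure that has not saturated and scale it to a common exposure scale. This is an illustrative reconstruction, not any vendor's actual pipeline; the 12-bit saturation level, the function name `merge_row` and the rule of normalising every sample to the T1 exposure are all assumptions. (For reference, 140 dB of dynamic range corresponds to a brightest-to-darkest ratio of 10^(140/20) = 10^7.)

```python
import numpy as np

# Assumed 12-bit readout: pixel values saturate at 4095
FULL_WELL = 4095

def merge_row(row_t1, row_t2, row_t3, t1, t2, t3):
    """Merge three exposures of one sensor row (t1 > t2 > t3, in any units).

    For each pixel, the longest unsaturated exposure is selected and
    scaled by the exposure ratio so all outputs share the T1 scale.
    """
    rows = [np.asarray(row_t1, dtype=float),
            np.asarray(row_t2, dtype=float),
            np.asarray(row_t3, dtype=float)]
    times = [t1, t2, t3]
    out = np.empty_like(rows[0])
    for i in range(out.size):
        # Walk from longest to shortest exposure; stop at the first
        # sample below the saturation level
        for row, t in zip(rows, times):
            if row[i] < FULL_WELL:
                out[i] = row[i] * (t1 / t)   # normalise to the T1 scale
                break
        else:
            # All three exposures saturated: clip at the maximum
            out[i] = FULL_WELL * (t1 / t3)
    return out
```

With exposure times of 100, 10 and 1 (arbitrary units), a pixel that saturates in T1 but reads 400 in T2 is reported as 400 × (100/10) = 4000 on the common scale, extending the usable range well beyond the 12-bit readout.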
However, the raw data would have to be stored in a 'black box' in case there was a problem, although this could be done at a slower rate to reduce power consumption.

This also has an impact on the computer vision algorithms used to process the image. Machine learning frameworks, for example, are trained using data in a particular format. One framework that is popular for machine vision, YOLOv3, uses a rectangular image that assumes a global shutter; other frameworks can use the row output of a rolling shutter.

Temperature

As the temperature rises, the image sensor generates more noise in the form of dark current, and the more noise that is created, the harder it is to see in the dark. The key to using sensors at high temperatures is therefore how far the noise can be reduced.

This draws on experience from designs using an older technology, the CCD (charge-coupled device), to combine HDR and LFM in a 5.4 MP device with 120 dB single-exposure HDR, using additional processing on the second wafer in the stack. This means highlight oversaturation can be mitigated even when 100,000 lux of sunlight is reflecting directly off a light-coloured car in front, capturing the subject more accurately under road conditions with dramatic lighting contrast, such as when entering or exiting a tunnel. The sensor also operates down to a luminance of 0.1 lux, the equivalent of moonlight.

Safety

For designs built to the ISO 26262 safety standard, functional safety elements can be added to the image sensor array. This is equivalent to embedded memory with checksums: cells are added to make sure the information being stored is correct, and extra information is added to the rows and columns of the array.
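The row-and-column protection just described can be illustrated with a minimal parity scheme: one parity word per row and one per column lets a single corrupted pixel be localised at their intersection. This is a sketch in the spirit of the ISO 26262 measures mentioned above, not an actual sensor implementation; the function names and the choice of XOR parity are assumptions.

```python
import numpy as np

def add_parity(frame):
    """Compute one XOR parity word per row and per column of a pixel array."""
    frame = np.asarray(frame, dtype=np.uint16)
    row_par = np.bitwise_xor.reduce(frame, axis=1)   # one word per row
    col_par = np.bitwise_xor.reduce(frame, axis=0)   # one word per column
    return row_par, col_par

def check_parity(frame, row_par, col_par):
    """Return the indices of rows and columns whose parity no longer matches.

    A single corrupted pixel appears as exactly one bad row and one bad
    column, so its position is the intersection of the two.
    """
    r, c = add_parity(frame)
    bad_rows = np.nonzero(r != row_par)[0]
    bad_cols = np.nonzero(c != col_par)[0]
    return bad_rows, bad_cols
```

For example, storing the parity words at capture time and flipping a few bits in one pixel afterwards makes `check_parity` report exactly one mismatched row and one mismatched column, pinpointing the fault.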
This can also include adding extra rows of pixels

(Image caption: Adding security such as encryption is increasingly important for sensors in unmanned systems. Courtesy of Omnivision)

December/January 2021 | Unmanned Systems Technology | Focus | Image sensors