
Bonded wafers

Bonded wafers allow two or more wafers to be joined together to form a stack. The stack can be used to separate parts of the pixel to improve the HDR, or to add more logic for processing. This is being used for image processing, to add redundancy and checking, and as an encryption engine to protect the data.

All current sensors are two-layer stacks, mostly built on 12 in (300 mm) CMOS wafers in a well-established process at several global foundries. Sensor makers say this will move to three layers at some point if more integration is required to add further functions and logic. That is set to become more important for adding machine learning processing closer to the raw data.

Another approach is to change the fundamental way the sensor works. Event-driven sensors, built on the same leading-edge stacked silicon chip technology, are emerging to improve the speed of sensing and reduce data rates. This applies as much to UAVs in airborne applications as to vehicles on the ground.

HDR

The demands for greater HDR are increasing. Four years ago, the target was a dynamic range of 120 dB; by the time those image sensors were in production, the demand was for 140 dB. These 140 dB devices are now available, but system developers are already requesting an even higher dynamic range.

Increasing the dynamic range is a balance between reducing a sensor’s dark current and saturation, and increasing the sensitivity of the pixels. The dark current is the current generated by the sensor in the absence of light – essentially the noise ‘floor’. It determines the sensor’s low-light performance, allowing it to detect objects using only a few photons. Increasing the size of a pixel increases the area available, and thus the sensitivity of detection, but it also increases the dark current. Larger pixels are also saturated more easily by bright lights. It is this difference between the dark current and the saturation current that determines the dynamic range.
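The relationship described above can be made concrete with the standard dynamic range formula: the ratio of the saturation (full-well) signal to the noise floor, expressed in decibels. A minimal sketch follows; the electron counts are illustrative assumptions, not figures from any particular sensor.

```python
import math

def dynamic_range_db(full_well_e: float, noise_floor_e: float) -> float:
    """Dynamic range in dB from the full-well capacity and the noise
    floor (set by dark current and read noise), both in electrons."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# A 140 dB range implies a 10^7 : 1 ratio between the saturation signal
# and the noise floor; the numbers below are hypothetical examples.
print(round(dynamic_range_db(200_000, 0.02)))  # → 140
print(round(dynamic_range_db(20_000, 0.2)))    # → 100
```

This makes the trade-off visible: enlarging the pixel raises the full-well capacity (numerator) but also raises the dark current (denominator), so the two effects partly cancel.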
Dark current also increases with temperature, so the HDR changes across the operating temperature range and requires compensation.

One way to achieve that is to use ‘deep well’ pixel technology, which was developed for sensors used in cinema cameras and adapted for automotive sensors. It can accommodate an overflow of electrons from the pixel in bright light, managing both low light and HDR in the same structure. Capacitance in the pixel stores the extra electrons generated by the photons to avoid saturating the pixel. The capacitance is created by the stack structure as part of the manufacturing process, allowing the pixel to be as large as possible for sensitivity.

Another way to increase the sensitivity of a pixel is to increase its quantum efficiency (QE), which varies with the frequency of the light used. Using the NIR bands at 850 nm with an optimised pixel design provides a QE of up to 40% – four times higher than traditional pixel designs with visible light. The technique, called Nyxel, was developed to provide higher image quality in surveillance applications, and it is combined with backside illumination. Part of the 2.2 µm pixel is on the top wafer, with the other part on the bottom. That shields the photodiodes, so there is no parasitic light leaking into the pixel. Each pixel is also isolated with a trench to further boost the QE. That means less illumination is needed, reducing the number of LEDs required.

Image sensors | Focus | Unmanned Systems Technology | December/January 2021

[Image: Stacking the pixel array and the processing gives benefits for high dynamic range (Courtesy of ON Semiconductor)]
[Image: Increasing the quantum efficiency of a pixel, which varies with the frequency of the light used, raises its sensitivity]
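The QE improvement translates directly into signal, since QE is the probability that an incident photon is converted into a collected electron. A minimal sketch, using the article’s cited 40% NIR QE against an assumed 10% baseline for a traditional pixel at the same wavelength:

```python
def signal_electrons(photons: float, qe: float) -> float:
    # Mean collected charge: each incident photon converts to an
    # electron with probability QE (0.0 to 1.0).
    return photons * qe

# Quadrupling QE from an assumed 10% to the cited 40% at 850 nm gives
# four times the signal for the same illumination — or the same signal
# from a quarter of the light, which is what reduces the LED count.
gain = signal_electrons(1000, 0.40) / signal_electrons(1000, 0.10)
print(gain)
```

The same ratio can be read the other way round: for a fixed target signal, the required photon flux (and hence the illuminator power) scales as 1/QE.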
