
deliver high-precision, real-time tracking of objects while recording video. Users can write the AI models of their choice to the embedded memory and then rewrite and update them according to their requirements or the conditions of the location where the system is being used, particularly for different types of UAV missions.

Event-driven sensors

Another approach to using machine learning for image processing on a chip is an event-driven architecture. Rather than reading out a row or an array on each clock signal, the event-driven architecture is asynchronous and only generates data when there is a change in the scene. Each pixel is independent and asynchronous; it monitors the scene and reacts to changes in contrast, very often caused by movement in the scene. Each change generates an event that carries the x and y position of the pixel, the time of the change and its magnitude. Analogue processing of the changes in the voltage from each pixel in the array is embedded underneath the pixel in a stack.

A fourth-generation sensor using this approach is planned, based on a 36 nm CMOS process for a 1 MP sensor with two stacked wafers, one carrying the pixels on a 4.86 µm pitch and the other housing the processing. That makes the die 10 times smaller than the previous generation and gives a fill factor of nearly 100%.

The power consumption is measured differently, as processing is only required when the data in the array changes, rather than when rows of data are read out many times a second. A fully static scene consumes less than 5 mW and high activity 10-20 mW, compared with 100-200 mW for a frame-based video detection system running at 30 fps.

However, this requires new types of machine learning models that use an event-based approach rather than a traditional rectangular image. That means different software libraries are needed to assist developers, and these have to be developed in parallel with the image sensor.

Hyperspectral sensors

Hyperspectral sensors are popular image sensors for UAVs, as they cover a broad spectrum of light. They use a linear sensor called a push broom, together with an imaging system with a narrow slit, to acquire the data on a focal plane array (FPA).

The push broom uses a single line of pixels arranged perpendicular to the direction of movement. An image is collected one line at a time, with all the pixels in a line measured simultaneously; essentially it is a reduced version of a shutter. The wavelength information is captured over a broad range, hence the name 'hyperspectral'. It covers the ultraviolet to visible spectrum, visible to near infrared (VNIR), NIR or short-wave infrared (SWIR), using FPA technologies such as CMOS, InGaAs or mercury cadmium telluride (MCT).

One drawback with push brooms, though, is that the pixels in the detector have varying sensitivity across the wide range of wavelengths being scanned. If they are not perfectly calibrated, this can result in stripes in the data that cause artefacts in the hyperspectral image.

To achieve the broad-spectrum sensing, the push broom sensor is combined with a holographic diffraction grating, which is used in all-reflective designs to spread the spectrum of the light over a 2D FPA. The light rays enter the lens, pass through the slit, spread out inside the spectrograph, and then land on the FPA.
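
The event stream from such a sensor can be pictured with a short sketch. The code below is a minimal, illustrative model of an event-driven pixel array, not any vendor's actual driver or API; the threshold, function names and log-intensity contrast model are assumptions made for the example.

```python
import numpy as np

# Minimal, illustrative model of an event-driven pixel array (an assumption,
# not a real sensor interface): each pixel compares the current log-intensity
# with the level stored at its last event and fires an (x, y, timestamp,
# polarity) event when the change exceeds a contrast threshold.
CONTRAST_THRESHOLD = 0.15  # log-intensity change needed to trigger an event

def generate_events(ref_log, frame, t_us, threshold=CONTRAST_THRESHOLD):
    """Return the events fired at time t_us and the updated per-pixel references."""
    log_frame = np.log(frame.astype(np.float64) + 1.0)
    delta = log_frame - ref_log
    ys, xs = np.nonzero(np.abs(delta) >= threshold)   # only changed pixels fire
    events = [(int(x), int(y), t_us, 1 if delta[y, x] > 0 else -1)
              for x, y in zip(xs, ys)]
    # Pixels that fired reset their reference level; a static scene therefore
    # produces no events and no readout activity at all.
    ref_log[ys, xs] = log_frame[ys, xs]
    return events, ref_log
```

Feeding this two identical frames returns an empty event list, which is the behaviour behind the sub-5 mW figure quoted above for a fully static scene.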
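Because an event stream is a sparse list of tuples rather than a rectangular image, the machine learning tooling has to change as well. One common bridge, sketched below as a generic technique rather than a description of any particular library, is to accumulate events over a short window into a two-channel count image that a conventional network can consume; fully event-native models skip this step and work on the raw stream.

```python
import numpy as np

def events_to_count_image(events, width, height):
    """Accumulate a window of (x, y, t, polarity) events into a two-channel
    count image (positive and negative polarity) for a frame-based model.
    Event-native networks would instead consume the raw, timestamped stream."""
    img = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _t, polarity in events:
        img[0 if polarity > 0 else 1, y, x] += 1.0
    return img
```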
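To make the push-broom geometry and the stripe problem concrete, the sketch below assembles successive FPA readouts into a hyperspectral cube and applies a simple per-pixel flat-field correction. The cube layout, function names and the dark/white reference model are assumptions for illustration, not any vendor's calibration routine.

```python
import numpy as np

def assemble_hypercube(line_frames):
    """Stack successive push-broom readouts, each of shape
    (spatial_pixels, bands), into a cube of shape (lines, spatial_pixels, bands).
    One axis of the FPA is the slit (spatial); the other is the spectrum
    spread out by the diffraction grating."""
    return np.stack(line_frames, axis=0)

def flat_field(cube, dark_ref, white_ref):
    """Per-pixel gain/offset correction. Each detector element has its own
    sensitivity, so uncorrected data shows stripes along the scan direction;
    normalising against dark and white reference frames evens this out."""
    gain = (white_ref - dark_ref).astype(np.float64)
    gain[gain == 0] = 1e-6            # guard against dead or saturated pixels
    return (cube - dark_ref) / gain
```

In practice the dark and white references come from calibration targets; imperfect references are what leave the residual stripe artefacts described above.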
There is a complex interplay here in the sensor design: bigger pixels can mean more light-gathering ability, but fewer of them fit on a given FPA, which limits spatial resolution.

(Image: Various linear sensors are used in hyperspectral camera designs. Courtesy of Headwall Photonics)
