Unmanned Systems Technology 021 | Robot Aviation FX450 l Imaging Sensors focus l UAVs Insight l Liquid-Piston X-Mini l Riptide l Eurosatory 2018 show report l Zipline l Electric Motors focus l ASTS show report

…the output. This provides a signal with a logarithmic response to the scene illumination. The response is highly accurate and does not saturate, unlike traditional sensors that integrate the current from the photodiode pixels. The logarithmic response also separates the illumination from the contrast in an object, so the contrast is always preserved and kept constant whatever the illumination conditions. Because this happens in a single exposure, no image analysis is required to drive the sensors, and no complex controls are needed.

The logarithmic sensor can also run in an in-pixel differential mode. This can provide helpful data for extracting in-scene movement, and is possible because there is no saturation or lag.

To further reduce the size and power consumption of image sensors, standard CMOS chip-making technology has been used to create stacked sensors with two and now three layers. A three-layer device that combines the sensor with memory and logic has reached 120 fps to overcome the challenges of capturing images quickly.

Conventional CMOS image sensor chips collect the signal data from the pixels and send it through the logic circuit and out through the interface serially. This inherently restricts the speed to the output speed of the interface, which in turn means the pixel readout is capped at that speed. One way around this problem is a global shutter design that reads out data from every pixel simultaneously. Another is to create faster serial links using 'through-silicon vias' (TSVs) to connect the chips directly.

One sensor using this approach consists of an array of 19.3 million BI pixels on a pitch of 1.22 x 1.22 μm, built in a 90 nm process, that can generate images at 120 fps without the problems of a rolling shutter. This is possible because the sensor array is linked to a 30 nm, 1 Gbit DRAM chip and a 40 nm logic chip.
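A rough calculation shows why buffering frames in the stacked DRAM relieves the serial output interface. The pixel count, frame rate and DRAM capacity come from the text; the 10-bit pixel depth is an assumption for illustration only.

```python
# Back-of-the-envelope data rates for the stacked sensor described above.
# The 10-bit ADC depth is an assumed figure; pixel count, frame rate and
# DRAM size are taken from the article.
pixels = 19.3e6          # 19.3 million BI pixels
fps = 120                # burst readout rate
bits_per_pixel = 10      # assumed ADC resolution

raw_rate = pixels * fps * bits_per_pixel / 1e9   # Gbit/s off the array
frame_bits = pixels * bits_per_pixel / 1e9       # Gbit per frame
frames_in_dram = 1.0 / frame_bits                # frames fitting in 1 Gbit

print(f"raw readout: {raw_rate:.1f} Gbit/s")               # ~23.2 Gbit/s
print(f"frames buffered in 1 Gbit DRAM: {frames_in_dram:.1f}")  # ~5.2
```

At roughly 23 Gbit/s off the array, the pixels can be read out in a fast burst into the DRAM and then streamed off-chip at whatever slower rate the output interface supports.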
The DRAM has three aluminium layers, while the BI pixels are built with five copper layers and one aluminium layer. Using different chips with different numbers of layers allows each design to be optimised. They are all linked by the TSVs, which are created by etching holes into the silicon and filling them with copper to form contacts. More than 35,000 TSVs, with a diameter of 2.5 μm and a pitch of 6.3 μm, connect the layers: about 15,000 join the pixel substrate to the DRAM substrate, and about 20,000 join the DRAM substrate to the logic substrate.

All the chips have been ground thin, so the total thickness of the combined sensor is 130 μm, the same as two-layer sensors. By placing the DRAM in the middle, thinning it and developing narrow-pitch TSVs, the sensor's developer was able to reduce the I/O area and power consumption.

Image processing

The latest image processors are being designed for 10 and 7 nm processes and with particular sensors in mind. Adding hardware support for neural networks alongside the sensor is allowing architectures to use localised processing. The higher performance that comes from these processes allows a chip to run multiple algorithms simultaneously, deliver higher perception accuracy and reduce the total number of chips required in a camera subsystem. These processors support 8 MP resolution for object recognition and perception over long distances and with high accuracy. Some also include stereo vision processing, which provides the ability to detect generic objects without training. They also have the performance to support the HDR output from the sensor.

Unmanned Systems Technology | August/September 2018

Caption: The 3000-OEM card handles image processing and object tracking inside the gimbal on a UAV (Courtesy of Sightline Applications)

Caption: A 'global shutter' reads the charge from all the pixels at the same time, which is more suited to UAV video payload applications
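Returning to the logarithmic sensors discussed at the start of this section, their contrast preservation can be sketched numerically. The response model output = k·log(I) is a generic assumption for illustration, not any specific sensor's transfer function.

```python
import math

# Sketch of a logarithmic pixel response (generic assumed model, not a
# specific sensor): output = k * log(I), where I is the illumination
# reaching the pixel.
def log_response(illuminance, k=1.0):
    return k * math.log(illuminance)

# Two scene points whose reflectances differ by a factor of 2
scene = [100.0, 200.0]
dim = [s * 0.01 for s in scene]   # the same scene at 1/100th the light level

contrast_bright = log_response(scene[1]) - log_response(scene[0])
contrast_dim = log_response(dim[1]) - log_response(dim[0])

# The pixel-to-pixel difference depends only on the reflectance ratio,
# so the measured contrast is identical at both light levels: log(2) ~ 0.693
print(contrast_bright, contrast_dim)
```

Because scaling the illumination only adds a constant offset in the log domain, the difference between any two pixels, i.e. the contrast, is unchanged, which is why no exposure-driven image analysis is needed.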
