
…control the gyroscopes on a gimbal to keep the gimbal aligned with an area of interest selected by the operator. That gives a huge improvement in the stability of the system, and the operator can use the payload in real time with a full online video stream.

For example, when the operator picks an interesting point on screen at the ground station, the processing on the UAV can look back in memory to find the object and extrapolate its position forward to the present (sketched below), avoiding the problem of the 1.2 s of latency over a satellite link.

One example is a surveillance mission to scan an area for a specific object such as a house or a person. A traditional video system in a UAV gimbal with good optics would provide a compressed video stream to the ground station, allowing the operator to look for the target. However, that can be complicated for the operator, as there is noise and distortion on the screen. Instead, the system sends a series of still images to the ground station every 200 or 300 ms (see the snapshot sketch below), and the operator can identify an object of interest to track. The target can even be specified before the mission starts, so that a UAV can fly to an area, identify it and send back its track.

This can be achieved using off-the-shelf software components that are then trained; simulation software can be used to train the networks. It moves the design challenge from the video compression and the data link to the onboard processing hardware. The current assessment is that the processing power required for this will be available in the next 12-18 months.

This also changes the design of the hardware system. Between the image sensor and the video processing board sits an FPGA that processes the raw data from the sensor to prepare the stream for video processing, with functions such as scaling and image stabilisation (modelled in software below). This pre-processing simplifies the work of the main board and improves the accuracy of the CNN algorithms.

Small and custom UAV manufacturers are split between using a closed, often proprietary video system and a radio transmitter/receiver with a relatively simple built-in video encoder. Unfortunately, these often have problems delivering a stable video link to a ground station.

A key trend is to use the extra […] stabilisation, among other applications.

CCI is a bidirectional, two-wire interface that host processors can use to configure and control cameras before, during or after image streaming over the high-speed MIPI D-PHY or MIPI C-PHY interfaces (a register-access sketch appears below). CCI implementations can use I2C Fast Mode+, which supports up to 1 Mbit/s. When used with MIPI I3C v1.0 in single data rate mode, the interface delivers 12.5 Mbit/s; in the I3C v1.0 high data rate, double data rate mode it delivers 25 Mbit/s. The Unified Serial Link (USL) avoids the need for additional signal lines by encapsulating the CCI control data within the CSI-2 transport.

MIPI has also developed a new physical layer specification to support speeds of 12 to 24 Gbit/s. MIPI A-PHY v1.0 is expected to be available to developers later this year. The specification will reduce wiring, cost and weight, as it allows high-speed data, control data and optional power to share the same physical wiring. The first vehicles using components with A-PHY technology are expected to be in production in 2024.

[Image: The MIPI CSI-2 specification provides a high-speed connection from multiple cameras to a video compression board (Courtesy of Thine)]
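To make the look-back step concrete, here is a minimal sketch in Python with OpenCV. Only the 1.2 s link delay and the idea of searching backwards through buffered frames come from the article; the frame rate, the function names (on_new_frame, on_operator_click) and the choice of the CSRT tracker are our own assumptions, not the vendor's method.

```python
import collections
import time

import cv2  # requires opencv-contrib-python for the CSRT tracker

LINK_DELAY_S = 1.2          # satellite link latency quoted in the article
FPS = 30                    # assumed onboard camera rate
BUFFER_LEN = int(LINK_DELAY_S * FPS) + FPS  # keep a little more than the delay

frame_buffer = collections.deque(maxlen=BUFFER_LEN)  # (timestamp, frame) pairs

def on_new_frame(frame):
    """Called for every frame from the gimbal camera; fills the look-back buffer."""
    frame_buffer.append((time.monotonic(), frame))

def on_operator_click(click_time, roi):
    """The operator selected a box (x, y, w, h) on a frame that is ~1.2 s old.

    Find the buffered frame closest to the click timestamp, start a tracker
    there, then run it forward through the newer frames to catch up to now.
    """
    frames = list(frame_buffer)
    if not frames:
        return None
    start = min(range(len(frames)), key=lambda i: abs(frames[i][0] - click_time))

    tracker = cv2.TrackerCSRT_create()   # any off-the-shelf tracker would do
    tracker.init(frames[start][1], roi)

    box = roi
    for _, frame in frames[start + 1:]:  # extrapolate forward to the present
        ok, box = tracker.update(frame)
        if not ok:
            break                        # a real system would re-detect here
    return box                           # current position, despite link latency
```

The corrected box can then steer the gimbal gyroscopes, so the payload stays on target even though the operator only ever sees a delayed picture.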
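The still-image downlink the article describes can be as simple as the loop below, a sketch assuming the camera behaves like cv2.VideoCapture and that send() is a placeholder for whatever datalink API the airframe exposes; the 250 ms period is the midpoint of the 200-300 ms cadence in the article.

```python
import time

import cv2

SNAPSHOT_PERIOD_S = 0.25  # article: one still every 200-300 ms

def snapshot_loop(camera, send):
    """Periodically compress the latest frame to JPEG and hand it to the downlink."""
    last = 0.0
    while True:
        ok, frame = camera.read()
        if ok and time.monotonic() - last >= SNAPSHOT_PERIOD_S:
            ok2, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            if ok2:
                send(jpeg.tobytes())  # send() stands in for the real datalink
            last = time.monotonic()
```

Sending sharp, uncompressed-at-capture stills sidesteps the noise and compression distortion that make a live video stream hard for the operator to search.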
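The article does not describe the FPGA pipeline itself, but the two functions it names, scaling and image stabilisation, can be modelled in software to show what the pre-processing stage does before the CNN sees a frame. Using phase correlation to estimate inter-frame shift is our assumption, not the article's method.

```python
import cv2
import numpy as np

def preprocess(raw_frame, prev_gray=None, out_size=(1280, 720)):
    """Software model of the FPGA stage between sensor and video board:
    scale the raw frame, then cancel inter-frame translation (stabilisation).
    Returns the prepared frame and the grayscale copy for the next call.
    """
    frame = cv2.resize(raw_frame, out_size, interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    if prev_gray is not None:
        # estimate the global shift between consecutive frames
        (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)
        # translate the frame back to cancel the jitter
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        frame = cv2.warpAffine(frame, m, out_size)

    return frame, gray
```

Doing this upstream means the main board's CNN always sees a stable, consistently sized image, which is the accuracy benefit the article attributes to the FPGA pre-processing.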
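Because CCI is electrically I2C-compatible, configuring a camera over it looks like ordinary I2C register traffic. The sketch below uses the Linux i2c-dev interface via the smbus2 package; the bus number, the 7-bit device address and the 16-bit register addresses are hypothetical placeholders, not values from any real sensor.

```python
from smbus2 import SMBus, i2c_msg

I2C_BUS = 1       # hypothetical Linux I2C bus number
CAM_ADDR = 0x36   # hypothetical 7-bit CCI address of the camera

def cci_read(reg, length=1):
    """Read `length` bytes starting at a 16-bit CCI register address."""
    with SMBus(I2C_BUS) as bus:
        write = i2c_msg.write(CAM_ADDR, [reg >> 8 & 0xFF, reg & 0xFF])
        read = i2c_msg.read(CAM_ADDR, length)  # repeated-start read
        bus.i2c_rdwr(write, read)              # set pointer, then read back
    return list(read)

def cci_write(reg, value):
    """Write one byte to a 16-bit CCI register address."""
    with SMBus(I2C_BUS) as bus:
        bus.i2c_rdwr(i2c_msg.write(CAM_ADDR,
                                   [reg >> 8 & 0xFF, reg & 0xFF, value]))
```

The data rates quoted above bound how fast this register traffic can run: about 1 Mbit/s over I2C Fast Mode+, rising to 12.5 or 25 Mbit/s when CCI rides on MIPI I3C.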
