Focus | Video systems

Video compression card makers are now adding high-speed MIPI interfaces to support video at 4K resolutions. This is driven by the need for more signal lines carrying video and metadata, so all the parallel signals are combined into a few serial data lines by a serialiser. MIPI defines a serialiser/deserialiser for camera systems.

The MIPI serial link requires careful design, with attention to the impact of electrical noise on the signal lines. Noise from other parts of the system, such as motors, can disrupt the signals and limit the data rate, in turn limiting the quality of the video that can be carried. Video from an infrared sensor is particularly sensitive to this noise, and this design requirement is relatively new to encoder board makers.

One of the problems facing designers is the complexity of such a link. A chipset simplifies the implementation of the high-speed serial link, MIPI CSI-2, to allow longer high-speed connections between a camera and an encoder, up from 30 cm to 15 m. The chip serialises up to four lanes of MIPI CSI-2 signals from a camera sensor and converts them into one or two lanes using a proprietary protocol that supports up to 4 Gbit/s per lane. That is sufficient to transmit 1080p60 2 MP uncompressed video over 15 m (a rough link-budget check is sketched below), allowing the encoder to be further away from the camera. A companion chip deserialises the lanes back into the original MIPI CSI-2 signal.

The USL sub-link aggregates bidirectional low-speed signals such as general-purpose I/O signals that might be carrying metadata from a navigation system. Separating the high-speed lanes from the low-speed general-purpose I/O makes the system easier to build and test.

A further chip allows a duplicate of the video signal to be sent to onboard storage, keeping the original feed while the video for transmission is manipulated.

Caption: The Harrier embedded processing board uses an Intel E39xx Atom processor with an integrated Intel HD Graphics 505 core that supports UHD 4K video with a 9 W power consumption (Courtesy of Versalogic)

Metadata

A key consideration for the video is the metadata. This can be as important as the video itself, providing the time and location of a frame by pulling the location data from the navigation system. For defence systems, the types of data required are defined by STANAG 4609, which allows the metadata to be used by a wide range of ground stations rather than tying a video encoder to a particular one (a simplified metadata packing sketch is shown below).

A lower bit rate for the video means there is more room for metadata, but it has to be carefully synchronised with the video feed. The SMPTE 292 standard defines the format of the uncompressed video signal from an image sensor in a turret, and specifies the position of the metadata in the vertical blanking of the video. This metadata is extracted, the video compressed, and the metadata re-synchronised with the compressed stream.

Caption: UAVOS has developed a gyro-stabilized three-axis gimbal with an integrated RGB camera, full HD resolution and 30x optical zoom, and is looking at how to use machine learning to reduce the output's bandwidth (Courtesy of UAVOS)
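The claim that one 4 Gbit/s lane is enough for uncompressed 1080p60 video can be sanity-checked with a rough link-budget calculation. The sketch below is illustrative only; the bit depth and the protocol-overhead figure are assumptions, not figures quoted by the chipset vendor.

```python
# Rough link-budget check: can one 4 Gbit/s lane carry uncompressed 1080p60?
# The bit depth and overhead figures are illustrative assumptions, not vendor data.

def video_payload_gbps(width, height, fps, bits_per_pixel):
    """Raw pixel data rate in Gbit/s, ignoring blanking and protocol framing."""
    return width * height * fps * bits_per_pixel / 1e9

def link_capacity_gbps(lane_gbps, lanes, overhead_fraction):
    """Usable capacity after an assumed protocol/line-coding overhead."""
    return lane_gbps * lanes * (1.0 - overhead_fraction)

if __name__ == "__main__":
    # 1080p60, assuming 16 bits per pixel (assumption)
    payload = video_payload_gbps(1920, 1080, 60, bits_per_pixel=16)
    # One 4 Gbit/s lane, assuming ~20% framing/coding overhead (assumption)
    capacity = link_capacity_gbps(lane_gbps=4.0, lanes=1, overhead_fraction=0.20)
    print(f"payload  ~ {payload:.2f} Gbit/s")   # ~1.99 Gbit/s
    print(f"capacity ~ {capacity:.2f} Gbit/s")  # ~3.20 Gbit/s
    print("fits" if payload <= capacity else "does not fit")
```

Under these assumptions the raw 1080p60 payload comes to roughly 2 Gbit/s, comfortably inside a single 4 Gbit/s lane even after generous overhead, which is consistent with the vendor claim above.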
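STANAG 4609 systems typically carry per-frame metadata as KLV (key-length-value) packets alongside the video. The sketch below shows the general shape of packing a timestamp and position as tag-length-value items; the tag numbers and field encodings here are simplified stand-ins for illustration, not the exact STANAG 4609 / MISB encodings.

```python
# Illustrative KLV-style (key-length-value) packing of per-frame metadata.
# Tag numbers and field encodings are simplified stand-ins, not the exact
# STANAG 4609 / MISB encodings.
import struct
import time

def klv_item(tag, value_bytes):
    """One tag-length-value item with a 1-byte tag and a 1-byte length."""
    assert len(value_bytes) < 256
    return bytes([tag, len(value_bytes)]) + value_bytes

def frame_metadata(lat_deg, lon_deg, timestamp_us):
    """Pack a frame's time and position into a single KLV-style payload."""
    items = b""
    items += klv_item(0x02, struct.pack(">Q", timestamp_us))  # timestamp, microseconds
    items += klv_item(0x0D, struct.pack(">d", lat_deg))       # sensor latitude, degrees
    items += klv_item(0x0E, struct.pack(">d", lon_deg))       # sensor longitude, degrees
    return items

if __name__ == "__main__":
    # In a real system the position would come from the navigation system
    payload = frame_metadata(lat_deg=51.5074, lon_deg=-0.1278,
                             timestamp_us=int(time.time() * 1e6))
    print(payload.hex())
```

Because each item carries its own tag and length, a ground station can skip fields it does not understand, which is what lets one metadata format serve many different ground stations.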
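The extract-compress-re-synchronise workflow described above can also be sketched as a simple pipeline. Everything in the sketch below (the Frame container, the stand-in compress() function) is hypothetical scaffolding to show how metadata stays paired with its frame, not a real capture or encoder API.

```python
# Hypothetical sketch of keeping per-frame metadata synchronised with the
# compressed output. Frame and compress() are placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class Frame:
    index: int        # frame counter from the capture side
    pixels: bytes     # uncompressed picture data
    metadata: bytes   # KLV payload extracted from the vertical blanking

def compress(pixels: bytes) -> bytes:
    """Stand-in for the video encoder; pretends a 50:1 compression ratio."""
    return pixels[: max(1, len(pixels) // 50)]

def encode_stream(frames):
    """Extract the metadata first, compress the video, then re-attach by frame."""
    for frame in frames:
        meta = frame.metadata              # pulled out before compression
        bitstream = compress(frame.pixels)
        yield frame.index, bitstream, meta # metadata stays paired with its frame

if __name__ == "__main__":
    fake = Frame(index=0, pixels=bytes(5000), metadata=b"\x02\x08" + bytes(8))
    for idx, bitstream, meta in encode_stream([fake]):
        print(idx, len(bitstream), meta.hex())
```

Keying the metadata to the frame index (or timestamp) is what keeps it aligned with the compressed feed even when the encoder adds latency.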