An event-driven sensor developed by Prophesee is being used in a camera from Century Arks (Courtesy of Century Arks)

Additional circuits provide redundancy in case pixels in the array fail over time. All this provides extra data to confirm that the main image data is correct, or to flag an error code. These additional circuits reduce the fill factor, and therefore the sensitivity, but provide a mixture of baseline measurements and redundancy.

This provides sufficient capability for systems certified to the ASIL-B standard. ASIL-C systems require more information, called a Safety Element out of Context, which is often used for the software libraries working with the sensors. For the highest safety level, ASIL-D, the image sensor is combined with other sensors such as lidar or radar examining the same scene, providing redundancy across multiple sensing modes.

Security

Security is also an increasingly important element of sensor design. A stacked sensor allows more logic functions to be added to the sensor, including secure boot and encryption. Secure boot allows developers to confirm that the sensor logic has not been compromised when it starts up, while the encryption engine, using the AES standard, ensures that data cannot be changed unnoticed between the sensor and the ECU or sensor fusion processor (both checks are sketched at the end of this article). These features allow a developer to confirm that the image from the sensor is valid from a safety and security perspective.

A typical image sensor development cycle is two to three years, but it takes longer than in other applications for the sensors actually to reach cars on the market. The negotiation between the sensor developer and the system designer over specifications means it can take five years for a new sensor to be used in a vehicle. As autonomous driving becomes widespread, cameras at the front, sides, back and inside the car are proposed, creating a cocoon of shorter-range sensors using rolling shutters.

Machine learning integration

One device stacks a 12.3 MP sensor on top of a logic chip for machine-learning digital signal processing. The sensor outputs metadata (semantic information belonging to the image data) instead of image information, reducing the volume of data and addressing privacy concerns. The AI capability also makes real-time object tracking possible, and different AI models can be selected by rewriting the internal memory.

The signals acquired by the pixel chip are run through an ISP, and AI processing is carried out in the process stage on the logic chip. The extracted information is output as metadata, which reduces the amount of data being handled; ensuring that image information is never output also reduces security risks and addresses privacy concerns (a sketch of such a metadata record appears at the end of this article). As well as the image recorded by the conventional image sensor, users can select the data output format to suit their needs, including ISP-format output images (YUV/RGB) and ROI (region of interest) specific images.

For example, an ISP embedded behind the pixel array enables high-speed AI processing within 3.1 ms for the MobileNet V1 model, completing the entire process in a single video frame: at 30 frames/sec each frame lasts about 33 ms, so 3.1 ms of inference fits comfortably inside one frame period. This design makes it possible to…

A stacked sensor design allows more logic functions, such as secure boot and encryption, to be added to the sensor
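As a rough sketch of the secure boot check described above: the idea is that the sensor refuses to run logic whose digital signature does not verify at start-up. The article does not say how the sensor implements this, so the example below is an assumption — an Ed25519-signed firmware image checked with the Python cryptography package, with illustrative function and variable names; real sensors do this in on-die ROM with hardware-held keys.

# Minimal secure-boot style check: accept firmware only if its
# vendor signature verifies. Illustrative sketch, not the sensor's code.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_trusted(firmware: bytes, signature: bytes,
                        pubkey_bytes: bytes) -> bool:
    """Return True only if 'firmware' matches the vendor's signature."""
    pubkey = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pubkey.verify(signature, firmware)  # raises if the image was altered
        return True
    except InvalidSignature:
        return False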
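The AES protection of the link to the ECU can be sketched in the same spirit with authenticated encryption, where changing even a single bit of the data in transit invalidates the authentication tag. The article names only the AES standard, not the mode, so AES-GCM is assumed here, again using the Python cryptography package; the frame payload and key handling are placeholders.

# Sketch: authenticated encryption of a frame between sensor and ECU.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # shared by sensor and ECU
aesgcm = AESGCM(key)

frame = b"...raw pixel payload..."   # placeholder for image data
nonce = os.urandom(12)               # must be unique per frame
sealed = aesgcm.encrypt(nonce, frame, b"frame-0001")  # ciphertext + tag

# ECU side: decryption fails loudly if the data was modified in transit
try:
    pixels = aesgcm.decrypt(nonce, sealed, b"frame-0001")
except InvalidTag:
    print("frame rejected: changed between sensor and ECU")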
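Finally, a minimal sketch of what the metadata-only output might look like. The field names below are hypothetical, not the sensor's actual format, but they show why a semantic record per frame is so much smaller than the raw image it summarises.

# Hypothetical per-frame metadata record: detections instead of pixels.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    label: str                       # e.g. "pedestrian", "vehicle"
    confidence: float                # 0.0 - 1.0
    box: Tuple[int, int, int, int]   # (x, y, width, height) in pixels

@dataclass
class FrameMetadata:
    frame_id: int
    timestamp_us: int
    detections: List[Detection] = field(default_factory=list)

# One frame's metadata is a few hundred bytes, versus over 12 MB for a
# raw 12.3 MP image even at 8 bits per pixel.
meta = FrameMetadata(frame_id=1, timestamp_us=33_333, detections=[
    Detection("vehicle", 0.92, (640, 360, 120, 80)),
])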