
Focus | AI systems

…to sensor fusion, and so sees a natural progression to AI decision-making systems.

The MAC blocks are all tightly coupled, and can therefore process a complete image pipeline in 30 ms, even when the neural network element takes 10 ms to process a frame. The largest FPGA devices can handle 12 cameras and five Lidars along with a DNN inference engine.

However, the FPGA can also be easily reprogrammed, allowing the framework to be continually changed and improved, then downloaded to the FPGA. For example, outlying parts of the network can be removed and the framework tested to see whether that makes any difference to the accuracy of the final result in the real world. Once the framework is fixed, the whole design can be approved as part of a safety-critical design process such as ISO 26262.

This approach also allows a neural network framework to be instrumented. A DNN consists of multiple layers of processing that are not visible in a GPU. An FPGA can include non-invasive hardware lines to key MAC blocks that give visibility of the different layers to help validate the performance of the network. These instrumentation links can be removed later to fully optimise the final design.

The next stage is supporting 'over the air' updates, where the FPGA can be reprogrammed with a revised framework, for example to improve the performance of the Lidar sensor. There is the issue of how the new framework is validated in an existing design, but that work is underway.

Many new chip architectures and tools are also being developed to convert the trained models into power-efficient hardware. One way to simplify the chip architecture is to add hardware routing of the data signals and dependencies, and minimise the need to go out to memory, but that requires a new set of software tools.
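The pruning-and-retest workflow described above (remove outlying parts of the network, then check whether accuracy changes) can be sketched in a few lines. This is an illustrative toy model, not the article's actual framework: the layer, data, and 30% pruning threshold are all assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical sketch: prune near-zero weights from a small "trained"
# layer, then re-test accuracy to see if removing them made a difference.
rng = np.random.default_rng(0)

# Toy single-layer classifier: prediction = argmax(x @ W)
W = rng.normal(0.0, 1.0, size=(8, 3))
X = rng.normal(0.0, 1.0, size=(200, 8))
labels = np.argmax(X @ W, axis=1)  # ground truth taken from the full model

def accuracy(weights):
    """Fraction of samples where the (pruned) model matches the labels."""
    return float(np.mean(np.argmax(X @ weights, axis=1) == labels))

# Remove (zero out) the smallest 30% of weights by magnitude
threshold = np.quantile(np.abs(W), 0.30)
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

print(f"baseline accuracy: {accuracy(W):.3f}")  # 1.000 by construction
print(f"pruned accuracy:   {accuracy(W_pruned):.3f}")
print(f"weights removed:   {int(np.sum(W_pruned == 0))}/{W.size}")
```

On an FPGA the equivalent step would be re-synthesising the reduced framework and downloading it to the device; the decision logic (does accuracy survive the cut?) is the same.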
As a result, these tools, essentially a hardware compiler, have to be able to understand the requirements of a wide range of neural network frameworks, and so are as complex as the chip designs themselves. The compiler constructs a data flow for the array, known as a meta-map, that supports multiple frameworks concurrently on a single chip or an array of chips, giving scalability. At runtime the meta-map is passed to the hardware and run in a simple state machine that can switch the context and flow in real time based on the incoming data, and a hardware block calls in the …

[Photo: Field programmable gate arrays are being used to run inference engines for machine learning in automotive designs such as this board from ZF (Courtesy of Xilinx)]

[Photo: Software and hardware are increasingly interconnected for AI designs (Courtesy of AImotive)]

June/July 2020 | Unmanned Systems Technology
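The meta-map idea, a compiled data-flow table that a simple runtime state machine follows, switching context as different sensor data arrives, can be sketched as follows. The stage names and the dictionary-based map are illustrative assumptions, not the actual compiler output described in the article.

```python
# Hypothetical sketch of a "meta-map": for each kind of incoming data,
# an ordered list of processing stages the runtime must step through.
META_MAP = {
    "camera": ["demosaic", "cnn_backbone", "detector_head"],
    "lidar":  ["voxelise", "pointnet_backbone", "detector_head"],
}

def run_pipeline(kind, frame):
    """Simple state machine: follow the stages the meta-map dictates.

    In hardware each stage would be a routed block of MACs; here we just
    record the stage names to show the per-frame context switch.
    """
    state = frame
    for stage in META_MAP[kind]:
        state = f"{state}->{stage}"
    return state

# Interleaved sensor data triggers different flows on the same device
print(run_pipeline("camera", "img0"))
print(run_pipeline("lidar", "scan0"))
```

The point of keeping the runtime this simple is that the intelligence lives in the compiled meta-map, so the same state machine can serve multiple frameworks concurrently without reconfiguring the silicon between frames.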
