
The radar wave has different shapes. These shapes can be exported to a specialist electromagnetic simulation tool, which uses the exact waveform to validate the radar system's performance.

Other tools simulate the wiring harness. The harness is the central component of a vehicle, and is usually one of the first parts of the vehicle to be designed. The simulation models cover the cable types – copper, optical or even wireless – as well as the data transfer across them.

Most recently, these models have started to focus on optimising the network traffic from one processor to another within the vehicle. That helps in choosing between distributed and centralised controller and sensor fusion architectures to achieve the desired performance and cost levels: raw sensor fusion with a single, large processor and cheap sensors can be tested against multiple processors with more expensive sensors and simpler wiring (a rough sketch of this trade-off appears in the first code sketch below).

Other tools then generate key performance indicators (KPIs) to quantify the performance of a subsystem. These KPIs can be used by the OEM or a Tier One supplier to ensure that the module meets the overall requirements.

Testing AI systems

The emergence of autonomous systems takes away the operator – the driver – who took care of unexpected situations, also called corner cases in system verification and validation. Now the system designer has to demonstrate that all the possible scenarios the vehicle will face have been addressed, and this is strongly driving the use of simulation. That is increasingly difficult with machine learning (ML), and there are two approaches to validating ML systems that are based on neural networks (NNs) trained using millions of data points.

Some OEMs use an end-to-end NN implementation, feeding in sensor data to generate actuator outputs directly. This is regarded as a very risky approach though, as the system developer never knows when the system is fully trained – that is, when it has reached a specified level of safety. There is also the risk of false training via over-fitting to the training data set: the NN meets every test situation with a different specific pathway that satisfies the specification, but fails to generalise. The NN then becomes a huge virtual look-up table that cannot cope with unexpected situations. The more complex the NN and the larger its number of nodes, the harder this over-fitting is to control (a toy example below illustrates the failure mode).

The other approach is to use physics-based models, which measure everything. That leads to massive amounts of operational data, which can be difficult to process. These synthetic physics-based models also carry the risk of missing data, as models tend to be idealised, and they similarly cannot cope well with fast-changing, dirty and, especially, chaotic systems.

The consensus is that both approaches will be needed for effective verification and validation. That is because it will be necessary to trace what has been tested to prove the developer did a thorough job – for example, to show that a fatal scenario was not among the 10 million scenarios in the simulated test suite.

Machine learning can also be used to test the performance of a particular model. For example, a software tool can generate 1000 synthetic scenarios, run the models through those scenarios and measure the results against the critical KPIs.
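To make the centralised-versus-distributed trade-off above concrete, here is a minimal sketch; the sensor counts, per-sensor data rates and the two architecture functions are illustrative assumptions, not figures from the article or any vendor.

```python
# Sketch: comparing aggregate harness traffic for two E/E architectures.
# All sensor rates and counts below are hypothetical round numbers.

RAW_CAMERA_MBPS = 3000      # uncompressed camera stream, per sensor
RAW_RADAR_MBPS = 100        # raw radar data cube, per sensor
PROCESSED_OBJ_MBPS = 5      # object list from a smart sensor, per sensor

def centralised(cameras: int, radars: int) -> int:
    """Cheap sensors ship raw data to one large fusion processor."""
    return cameras * RAW_CAMERA_MBPS + radars * RAW_RADAR_MBPS

def distributed(cameras: int, radars: int) -> int:
    """Smart sensors pre-process locally and ship object lists only."""
    return (cameras + radars) * PROCESSED_OBJ_MBPS

if __name__ == "__main__":
    for arch, traffic in [("centralised", centralised(6, 5)),
                          ("distributed", distributed(6, 5))]:
        print(f"{arch}: {traffic:,} Mbit/s on the harness")
```

The point of even a crude model like this is that the harness simulation exposes the cost lever: raw fusion concentrates traffic (and wiring cost) into a few fat links, while smart sensors trade harness simplicity for more expensive edge processing.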
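The over-fitting failure mode described above can also be shown with a toy example, using polynomial fitting as a stand-in for NN training on synthetic data: as model capacity rises, the training error keeps falling while the validation error can grow – the virtual look-up table behaviour.

```python
# Sketch: detecting over-fitting by tracking the train/validation gap.
# Synthetic toy data; numpy.polyfit stands in for NN training.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
y = np.sin(3 * x) + rng.normal(0.0, 0.2, 40)   # noisy ground truth
x_tr, y_tr = x[:30], y[:30]                    # training split
x_va, y_va = x[30:], y[30:]                    # held-out validation split

for degree in (1, 3, 7, 12):                   # rising model capacity
    coeffs = np.polyfit(x_tr, y_tr, degree)
    err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    # A validation error that grows while the training error shrinks
    # signals memorisation rather than generalisation.
    print(f"degree {degree:2d}: train MSE {err_tr:.3f}, "
          f"validation MSE {err_va:.3f}")
```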
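And a rough sketch of the 1000-scenario sweep just described, assuming a hypothetical cut-in scenario, a toy stand-in for the controller and vehicle physics, and a minimum-gap KPI with an arbitrary 2 m threshold:

```python
# Sketch: sweeping synthetic cut-in scenarios through a toy controller
# model and scoring a KPI. Scenario parameters, plant model and the
# KPI threshold are all hypothetical stand-ins for real vendor tools.
import random

random.seed(42)

def run_scenario(cut_in_gap_m: float, cut_in_speed_mps: float) -> float:
    """Toy plant: returns the minimum gap (m) the ego vehicle keeps."""
    # Crude reaction-distance response; a real tool runs full physics.
    reaction_dist = 0.5 * cut_in_speed_mps
    return cut_in_gap_m - reaction_dist

scenarios = [(random.uniform(5.0, 40.0), random.uniform(0.0, 15.0))
             for _ in range(1000)]
results = [run_scenario(gap, speed) for gap, speed in scenarios]
violations = sum(1 for r in results if r < 2.0)   # KPI: keep gap >= 2 m
print(f"worst min-gap: {min(results):.2f} m, "
      f"violations: {violations}/1000")
```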
The ML algorithm can then be used to generate further variants of the scenarios to dig deeper into the controller, producing more variants to optimise particular parameters and then measuring the KPIs against recorded data (the search sketch below shows the principle). This has been integrated into the ISO 26262 safety-critical design process, but it means having to document millions of test scenarios, which is a massive data challenge. That has led other tool suppliers, with a long heritage in large-scale data management, to produce development and test environments that make these documents transparent and traceable.

The simulation and test of vehicle-to-vehicle and vehicle-to-infrastructure (V2V and V2I, commonly called V2X) links is also a key element, and that has to correlate closely with the real-world systems. One challenge in simulating these links is the fidelity of the models. For example, not all wireless packets will arrive at the vehicle, so probabilistic models with limited quality are used (the channel-model sketch below shows one common approach).

[Image: The Constellation system combines two Xavier SoCs and two Turing GPUs, generating 320 TOPS of processing power for simulation (Courtesy of Nvidia)]
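A minimal sketch of the guided variant generation described above, with a simple evolutionary search standing in for the ML algorithm; it reuses the toy cut-in model from the sweep sketch, and all parameters are hypothetical.

```python
# Sketch: breeding scenario variants that stress the controller, as a
# stand-in for the ML-guided variant generation described in the text.
import random

random.seed(1)

def run_scenario(cut_in_gap_m: float, cut_in_speed_mps: float) -> float:
    # Same toy minimum-gap model as the sweep sketch above.
    return cut_in_gap_m - 0.5 * cut_in_speed_mps

def mutate(params, scale=1.0):
    """Perturb a scenario's parameters within plausible bounds."""
    gap, speed = params
    return (max(5.0, gap + random.gauss(0.0, 2.0 * scale)),
            min(15.0, max(0.0, speed + random.gauss(0.0, scale))))

# Keep the variants with the worst KPI each generation and mutate them
# further - digging deeper into the controller's weak spots.
population = [(random.uniform(5.0, 40.0), random.uniform(0.0, 15.0))
              for _ in range(50)]
for _ in range(20):
    population.sort(key=lambda p: run_scenario(*p))   # worst KPI first
    parents = population[:10]
    population = parents + [mutate(random.choice(parents))
                            for _ in range(40)]

population.sort(key=lambda p: run_scenario(*p))
gap, speed = population[0]
print(f"hardest variant: gap={gap:.1f} m, speed={speed:.1f} m/s, "
      f"min-gap KPI={run_scenario(gap, speed):.2f} m")
```

The principle is the search direction: instead of sampling scenarios uniformly, each generation concentrates on the variants that produced the worst KPI, so the test effort migrates towards the corner cases.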
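As a sketch of such a probabilistic link model, the following uses a two-state Gilbert-Elliott channel, a standard way of modelling bursty packet loss; the transition and loss probabilities here are illustrative, not measured values.

```python
# Sketch: a probabilistic V2X link model. The channel alternates
# between a good and a bad state, each with its own loss rate,
# producing the bursty packet loss seen on real wireless links.
import random

random.seed(7)

P_GOOD_TO_BAD = 0.05   # chance the link degrades after a packet
P_BAD_TO_GOOD = 0.30   # chance it recovers
LOSS_GOOD, LOSS_BAD = 0.01, 0.60   # per-packet loss in each state

def simulate_link(n_packets: int) -> int:
    """Return how many of n_packets the vehicle actually receives."""
    delivered, state_good = 0, True
    for _ in range(n_packets):
        loss = LOSS_GOOD if state_good else LOSS_BAD
        if random.random() >= loss:
            delivered += 1
        flip = P_GOOD_TO_BAD if state_good else P_BAD_TO_GOOD
        if random.random() < flip:
            state_good = not state_good
    return delivered

n = 10_000
got = simulate_link(n)
print(f"delivered {got}/{n} packets ({100 * got / n:.1f}%)")
```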
