Neural net safety tester

Researchers at the University of Illinois at Urbana-Champaign have developed a technique to inject faults into machine learning algorithms to improve the safety of autonomous technology (writes Nick Flaherty).

The controllers running neural networks typically combine 50 processors and accelerators running more than 100 million lines of code to support computer vision, planning and other machine learning tasks. Testing these systems to demonstrate that they are safe is a major challenge.

The researchers' DriveFI technique is a machine learning-based fault injection engine that can mine the situations and faults that have an impact on the safety of the hardware and software in an autonomous vehicle design. It has been demonstrated on two industry-grade technology stacks, from Nvidia and Baidu.

When testing Baidu's Apollo stack in simulation, DriveFI found 500 examples in which the software failed to handle an issue and the failure led to an accident. These safety-critical faults were detected in less than 4 hours; by comparison, random injection experiments run over several weeks could not find any safety-critical faults. The software found 61 faults in the Nvidia stack.

To build the test system, the research group analysed all the safety reports submitted in the US from 2014 to 2017, covering 144 autonomous vehicles driving a total of 1.116 million miles. They found that, for the same number of miles driven, human-driven cars were up to 4000 times less likely to have an accident than the driverless cars.

Errors in the software and hardware stacks manifest as safety-critical issues only in certain driving scenarios, so tests performed on motorways or empty streets may not be enough, as safety violations under software/hardware faults are rare. The team therefore injected errors into the software and hardware stacks of the autonomous vehicles in computer simulations, then collected data on the vehicles' responses to those problems. That provided the data to teach the neural network software to take the right action in the face of software or hardware faults.
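The article does not include any of the DriveFI source code, but the general idea of targeted fault injection can be sketched. The following minimal Python example is purely illustrative and is not DriveFI itself: the names VehicleState, controller, inject_fault and run_scenario are invented here, the "controller" is a stand-in for a neural-network planner, and the fault models are simplified to a dropped command and an inverted output. It shows how a fault injected into an actuation command in a simulated driving loop can be checked against a safety condition.

```python
# Hypothetical sketch of fault injection into a control stack, in the
# spirit of (but much simpler than) engines such as DriveFI.

from dataclasses import dataclass


@dataclass
class VehicleState:
    speed: float          # m/s
    obstacle_dist: float  # metres to the obstacle ahead


def controller(state: VehicleState) -> float:
    """Stand-in for a neural-network planner: returns a brake command 0..1."""
    if state.obstacle_dist < state.speed * 2.0:  # roughly a 2 s headway rule
        return 1.0
    return 0.0


def inject_fault(cmd: float, fault: str) -> float:
    """Corrupt the actuation command to emulate a hardware/software fault."""
    if fault == "stuck_at_zero":
        return 0.0        # brake command silently dropped
    if fault == "bit_flip":
        return 1.0 - cmd  # output inverted
    return cmd            # "none": command passes through unchanged


def run_scenario(fault: str, steps: int = 100) -> bool:
    """Return True if the injected fault leads to a collision (safety violation)."""
    s = VehicleState(speed=20.0, obstacle_dist=120.0)
    dt = 0.1
    for _ in range(steps):
        cmd = inject_fault(controller(s), fault)
        s.speed = max(0.0, s.speed - 8.0 * cmd * dt)  # braking decel ~8 m/s^2
        s.obstacle_dist -= s.speed * dt
        if s.obstacle_dist <= 0.0:
            return True   # collision: this fault is safety-critical here
    return False


if __name__ == "__main__":
    for f in ("none", "stuck_at_zero", "bit_flip"):
        print(f, "-> collision" if run_scenario(f) else "-> safe")
```

Note that in this scenario the "stuck_at_zero" fault causes a collision while the "bit_flip" fault happens to be benign, which echoes the article's point: most injected faults do not manifest as safety violations, so an engine like DriveFI uses machine learning to mine the rare combinations of scenario and fault that do.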