Platform one

Research | Training systems for the real world

Researchers at the Massachusetts Institute of Technology (MIT) and Microsoft have developed a computer model that identifies instances where what an autonomous system learned in training differs from what is happening in the real world (writes Nick Flaherty).

These 'blind spots' are a key challenge for machine-learning systems that are trained extensively in virtual simulations to prepare a vehicle for nearly every event on the road. Sometimes the car makes an unexpected error in the real world because an event occurs that should, but doesn't, alter its behaviour.

Once the AI system has been trained in simulation, a human closely monitors its actions as it operates in the real world, providing feedback on when the system made, or was about to make, any mistakes. The researchers then combine the training data with the human feedback data, and use machine-learning techniques to produce a model that pinpoints the situations where the system is most likely to need more information about how to act correctly. The next step is to integrate the model with traditional training and testing approaches for autonomous cars and robots that use human feedback.

"Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents," said Ramya Ramakrishnan at MIT's Computer Science and Artificial Intelligence Laboratory. "The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so that we can reduce some of those errors."

Some traditional training methods do provide human feedback during real-world test runs, but only to update the system's actions; they don't identify blind spots. Once the feedback data from the human has been compiled, the system essentially has a list of situations and, for each situation, multiple labels saying its actions were acceptable or unacceptable. Situations with conflicting labels are the ambiguous ones, and these are the candidate blind spots.

Identifying them, though, takes more than simply tallying the acceptable and unacceptable actions for each situation. "Because unacceptable actions are far rarer than acceptable actions, the system will eventually learn to predict all situations as safe, which can be extremely dangerous," said Ramakrishnan.

So the researchers used a machine-learning method called the Dawid-Skene algorithm, which is commonly used in crowdsourcing to handle label noise. It takes as its input a list of situations, each with a set of noisy 'acceptable' and 'unacceptable' labels. It then aggregates all the data and uses probability calculations to identify patterns in the labels of predicted blind spots and patterns for predicted safe situations. Using that information, it outputs a single aggregated 'safe' or 'blind spot' label for each situation, along with its confidence level in that label (a simplified sketch of this aggregation step follows the article).

Notably, the algorithm can flag a situation in which the system may, for instance, have acted acceptably 90% of the time, but which is still ambiguous enough to merit a blind spot.

In the end, the algorithm produces a type of 'heat map', in which each situation from the system's original training is assigned a low-to-high probability of being a blind spot for the system.
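To make the aggregation step concrete, here is a minimal Python sketch of a Dawid-Skene-style expectation-maximisation loop. It is a hedged reconstruction, not the researchers' code: the full Dawid-Skene algorithm models a confusion matrix per annotator, whereas this simplification pools all labels into a two-state model, and the function name `aggregate_blind_spots`, the situation identifiers and the initialisation values are all invented for illustration.

```python
def aggregate_blind_spots(situations, n_iter=50):
    """situations: dict mapping a situation id to a list of 0/1 labels,
    where 0 = 'acceptable' action and 1 = 'unacceptable' action.
    Returns a dict mapping each situation id to P(blind spot)."""
    # Initial guesses: prior on blind spots, and per-state label
    # probabilities theta[z][y] = P(observe label y | true state z).
    pi = 0.5                          # P(true state = blind spot)
    theta = {"safe":  [0.9, 0.1],     # safe states mostly look acceptable
             "blind": [0.5, 0.5]}     # blind spots yield mixed labels
    posterior = {}
    for _ in range(n_iter):
        # E-step: posterior P(blind spot | labels) for each situation.
        for sid, labels in situations.items():
            p_blind, p_safe = pi, 1.0 - pi
            for y in labels:
                p_blind *= theta["blind"][y]
                p_safe *= theta["safe"][y]
            posterior[sid] = p_blind / (p_blind + p_safe)
        # M-step: re-estimate the prior and label probabilities from
        # the posteriors, with light smoothing to avoid zero counts.
        pi = sum(posterior.values()) / len(posterior)
        counts = {"safe": [1e-9, 1e-9], "blind": [1e-9, 1e-9]}
        for sid, labels in situations.items():
            w = posterior[sid]
            for y in labels:
                counts["blind"][y] += w
                counts["safe"][y] += 1.0 - w
        for z in ("safe", "blind"):
            total = sum(counts[z])
            theta[z] = [c / total for c in counts[z]]
    return posterior

# Hypothetical feedback log: per situation, one label per human judgement.
feedback = {
    "merge_lane":  [0] * 9 + [1],      # acceptable 90% of the time
    "clear_road":  [0] * 10,           # uniformly acceptable
    "white_truck": [0, 1, 1, 0, 1],    # frequent failures
}
for sid, p in aggregate_blind_spots(feedback).items():
    print(f"{sid}: P(blind spot) = {p:.2f}")
```

The per-situation posteriors this prints are exactly the kind of low-to-high 'heat map' values the article describes, and the distance of each value from 0.5 plays the role of the algorithm's confidence in its 'safe' or 'blind spot' label.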
Unmanned Systems Technology's consultants

Dr Donough Wilson
Dr Wilson is innovation lead at VIVID/futureVision, consultants in aviation, defence and homeland security innovation. His defence innovations include the cockpit vision system that protects military aircrew from asymmetric high-energy laser attack. He was the first to propose the automatic tracking and satellite download of airliner black box and cockpit voice recorder data in the event of an airliner's unplanned excursion from its assigned flight level or track. For his 'outstanding and practical contribution to the safer operation of aircraft' he was awarded the Sir James Martin Award 2018/19 by the Honourable Company of Air Pilots.

Paul Weighell
Paul has been involved with electronics, and computer design and programming, since 1966. He has worked in the real-time and failsafe data acquisition and automation industry, using mainframes, minis, micros and cloud-based hardware, on applications as diverse as defence, Siberian gas pipeline control, UK nuclear power, robotics, the Thames Barrier, Formula One and automated financial trading systems.

Ian Williams-Wynn
Ian has been involved with unmanned and autonomous systems for more than 20 years. He started his career in the military, working with early prototype unmanned systems and exploiting imagery from a range of unmanned systems from global suppliers. He has also been involved in ground-breaking research, including novel power and propulsion systems, sensor technologies, communications, avionics and physical platforms. His experience covers a broad spectrum of domains, from space and air to maritime and ground, in both defence and civil applications, including, more recently, connected autonomous cars.