
Focus | AI systems

The training is also used to cope with variations in the design of the vehicle. While a neural net will perform best on images from the same camera mounted in the same place on the vehicle, a production system needs to be more robust: it has to cope with different types of camera sensor, with dirt and scratches on the lens, and with cameras mounted in different positions on the vehicle.

That can be addressed by also training the neural networks on synthetic models – artificial 3D models of the environment populated with 3D images of cars and people. The advantage here is that all the information in the model is known and can be varied quickly and easily, so the net produces more accurate results across a wider range of inputs, such as different views or qualities of sensor. The drawback is that the model is less complex than a real image, so the results may not correlate as well with real life. As a result, neural nets are being trained with a mixture of real and synthetic data. However, that makes it harder to assess how accurate the net is, and the only way to measure that is to test it in real situations.
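
As a rough sketch of how such a mixed training set might be assembled, the snippet below pools a folder of real camera frames with a folder of synthetic renders using PyTorch; the paths, image size and batch size are illustrative assumptions rather than details from the article.

```python
# Hypothetical sketch: blending real camera frames with synthetic renders so
# the network trains on both. Paths, sizes and ratios are assumptions.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((256, 512)),   # bring both sources to a common size
    transforms.ToTensor(),
])

# Real images captured from cameras on the vehicle (assumed folder layout)
real_data = datasets.ImageFolder("data/real", transform=transform)

# Synthetic renders from the 3D environment model, where every label is
# known exactly and viewpoint or sensor quality can be varied at will
synthetic_data = datasets.ImageFolder("data/synthetic", transform=transform)

# One pool drawing from both distributions; shuffling mixes them per batch
train_set = ConcatDataset([real_data, synthetic_data])
loader = DataLoader(train_set, batch_size=16, shuffle=True)

for images, labels in loader:
    pass  # a normal training step on the mixed batch goes here
```
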
There is also a trade-off between the depth (the number of layers) of a neural net, the accuracy of its results and the training time. Some neural network algorithms have up to 50 layers – the human brain, by contrast, is estimated to have ten – and these ‘deep’ nets produce more accurate results but take much longer to train.

All this is termed semantic segmentation – breaking down a scene into its separate elements, each with its own label. While a deeper net can produce a more fine-grained segmentation, it can also be over-trained and fail to deliver the segmentation results that are needed. The optimum depth for a particular application and type of neural network has not yet been determined, and is the subject of intensive research.

Inference engine

The depth and type of the neural net do not necessarily have an impact on its deployment in a design, however. The number of layers in the net being deployed in a vehicle – the inference engine – does not put particular pressure on the processing requirements of a chip unless a learning function is included. There is also limited correlation between the technology used for the training engine (which is often implemented on a large server) and the inference engine being deployed: the neural net algorithm can be implemented on a traditional x86 sequential processor, a massively parallel graphics processing unit (GPU) or a new class of processor optimised for this type of application. The differences between these technologies matter far more for the training engine.
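
To make that concrete, here is a hedged sketch of an inference engine built around an off-the-shelf segmentation net with a 50-layer backbone. The article does not name a network, so torchvision's pretrained DeepLabV3/ResNet-50 stands in purely for illustration; note that the same code runs unchanged on an x86 CPU or a GPU.

```python
# Illustrative only: a segmentation net with a 50-layer backbone, deployed
# as an inference engine. The model choice is an assumption, not from the
# article; torchvision's pretrained DeepLabV3/ResNet-50 stands in for it.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# The same net runs on a sequential CPU or a massively parallel GPU;
# only the device selection changes.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = deeplabv3_resnet50(weights="DEFAULT").eval().to(device)

frame = torch.rand(1, 3, 520, 520, device=device)  # stand-in camera frame

with torch.no_grad():               # no learning function: inference only
    scores = model(frame)["out"]    # per-pixel class scores

labels = scores.argmax(dim=1)       # semantic segmentation: one label per pixel
print(labels.shape)                 # torch.Size([1, 520, 520])
```

Because no learning function is included, the whole pass runs without gradients, which is why the net's depth puts little extra pressure on the in-vehicle chip.
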
All this is still just one part of the AI system, and is used to build an internal model of the unmanned vehicle's surroundings. The AI output is then combined with additional sensor data, which requires more processing. Further analysis can then identify the different types of vehicle, allowing another part of the system to infer the likely behaviour of their drivers.

This is part of what is called agent intention modelling: understanding what other agents, such as cars and pedestrians, are going to do. This can be more rule-based, but it relies on higher-resolution data from the neural net. For example, predicting a pedestrian's intentions can be enhanced by knowing where they are looking and the turn and orientation of their head. This is all data that human drivers use unconsciously, but it has to be captured explicitly and coded into the intention model.
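
A rule-based intention check of the kind described above might look like the following sketch; the thresholds, field names and the rule itself are invented for illustration and are not taken from the article.

```python
# Hypothetical rule-based intention check: flag a pedestrian who appears
# ready to step into the road, using head orientation from the neural net.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    distance_to_kerb_m: float    # metres between the pedestrian and the road edge
    head_yaw_deg: float          # 0 = head turned squarely towards the vehicle
    moving_toward_road: bool     # coarse track of their walking direction

def likely_to_cross(p: Pedestrian) -> bool:
    """All thresholds are illustrative assumptions, not tuned values."""
    looking_at_traffic = abs(p.head_yaw_deg) < 30.0  # gaze roughly on the vehicle
    near_kerb = p.distance_to_kerb_m < 1.5
    return near_kerb and (looking_at_traffic or p.moving_toward_road)

# A pedestrian close to the kerb with their head turned towards the traffic
print(likely_to_cross(Pedestrian(0.8, 12.0, False)))  # True
```
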
[Image: The MACE III vehicle using AI to drive autonomously on the proving ground at Horiba MIRA (Courtesy of Horiba MIRA)]

Path planning

Combining the internal model, agent intentions, position data and route
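
Path planning draws these inputs together. As a hedged illustration of the kind of search that can involve, below is a minimal A* planner over an occupancy grid derived from the internal model, with cells that other agents are predicted to occupy marked as blocked; the algorithm choice and every name here are assumptions, not details from the article.

```python
import heapq

def plan(grid, start, goal):
    """A* over a 2D occupancy grid: 0 = free, 1 = blocked, where blocked
    cells come from the internal world model and from cells other agents
    are predicted to occupy. Purely a sketch of the idea."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no route found

# The planned route skirts the blocked cells in the middle row
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))
```
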