Unmanned Systems Technology 013 | AutonomouStuff Lincoln MKZ | AI systems | Unmanned Underwater Vehicles | Cosworth AG2 UAV twin | AceCore Neo | Maintenance | IDEX 2017 Show report

…extract data elements from the data set and build a probability model from them. The result is a high-dimensional model with many connections, which poses a major computational challenge. The difficulty with GPUs is that they were designed to perform dense vector processing on low-dimensional models, taking a 3D model and rendering it onto a 2D screen. Running problems with a high-dimensional space on such hardware spreads the data out, creating what's known as a 'sparse data problem', which means the GPU does not necessarily run at full efficiency.

Graph-based architectures that are optimised for neural network calculations will provide performance that is orders of magnitude beyond what is being deployed at the moment. For AI systems in driverless cars operating in urban environments with no human supervision, the computing capability is not yet available, but it will be in the next two or three years: 150-200 teraflops on a standard GPU rather than the 1-2 teraflops currently possible, with graph chips then potentially providing ten times that performance again.

Safety and certification

AI is being implemented in autonomous systems at several levels. It can, for example, be used to replace a control loop with a machine learning algorithm in driverless vehicles and UAVs, up to the higher-level task of working out where a vehicle is, locating the edges of the road and driving the vehicle forward. Testing is being carried out at every level in the autonomous system, but knowledge of the decision-making process inside the neural nets is still incomplete. Higher-level functions such as path planning and collision avoidance are implemented with different algorithms, such as probabilistic filtering and rule-based algorithms, alongside the machine learning subsystems.
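The sparse data problem described above can be illustrated with a toy measurement: the connection matrix of a high-dimensional probability model is mostly zeros, so packing it into the dense layout a GPU expects means most of the arithmetic is spent on zero entries. A minimal pure-Python sketch, with all sizes and names chosen purely for illustration:

```python
import random

def density(matrix):
    """Fraction of entries that are non-zero."""
    total = sum(len(row) for row in matrix)
    nonzero = sum(1 for row in matrix for v in row if v != 0.0)
    return nonzero / total

# Toy high-dimensional model: 1000 nodes, each connected to only 10 others.
n, fan_out = 1000, 10
weights = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in random.sample(range(n), fan_out):
        weights[i][j] = random.random()

d = density(weights)
print(f"density = {d:.3f}")  # 0.010: 99% of dense-matrix arithmetic hits zeros

# A sparse (coordinate-list) layout stores only the useful entries.
coords = [(i, j, w) for i, row in enumerate(weights)
          for j, w in enumerate(row) if w != 0.0]
print(len(coords), "non-zeros vs", n * n, "dense slots")
```

The dense form holds a million entries of which only 10,000 carry information; graph-oriented hardware aims to traverse just those 10,000 connections directly.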
One challenge is that every time the same images are run on a different network implementation, the result is different, so testing a particular implementation against a set of expected results is difficult. That means creating standardised test and validation systems will be hard. There are parallels here with the aviation industry, where the automatic systems have to be as good as a human pilot, but the validation philosophy for automotive system design has yet to be decided.

Summary

Autonomous vehicles in urban environments are not good at working out what other road users are likely to do. However, the combination of more accurate recognition systems based on deep learning and the ability to predict how other road users will respond to a certain behaviour are key steps towards developing fully autonomous vehicles. A few companies are on the brink of getting past the first challenge, effective image recognition using deep learning, but the second challenge, reliable agent modelling, is probably three to five years away. That relies on effective agent intention modelling and predictive motion planning, and doing that effectively requires more from the computer vision systems than…

[Image: Bosch is using the next-generation Xavier graphics processor from Nvidia to run deep learning algorithms in self-driving cars (Courtesy of Nvidia)]
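The predictive motion planning mentioned above ultimately reduces to estimating where another road user will be a few seconds ahead and how close that brings it to the ego vehicle's own path. A heavily simplified, hypothetical sketch using a constant-velocity agent model (the baseline that intention models build on, not any vendor's actual algorithm; all names and numbers are assumed):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A tracked road user: position in metres, velocity in m/s."""
    x: float
    y: float
    vx: float
    vy: float

def predict_path(agent, horizon_s=3.0, dt=0.5):
    """Constant-velocity prediction over a short horizon.
    Real systems layer learned intention models on top of this baseline."""
    steps = int(horizon_s / dt)
    return [(agent.x + agent.vx * dt * k, agent.y + agent.vy * dt * k)
            for k in range(1, steps + 1)]

def min_gap(path_a, path_b):
    """Closest approach between two paths sampled at the same timestamps."""
    return min(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
               for (xa, ya), (xb, yb) in zip(path_a, path_b))

# Ego drives north at 10 m/s; a cyclist crosses eastwards at 4 m/s.
ego = Agent(0.0, 0.0, 0.0, 10.0)
cyclist = Agent(-8.0, 15.0, 4.0, 0.0)
gap = min_gap(predict_path(ego), predict_path(cyclist))
print(f"closest approach: {gap:.1f} m")  # prints "closest approach: 2.0 m"
```

A planner would compare that closest-approach distance against a safety margin and, if it is too small, re-plan; the hard part the article points to is replacing the constant-velocity assumption with a model of what the cyclist intends to do.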
