
Driverless vehicles

Code cuts test times

Cambridge start-up RoboK has developed 3D sensing and perception software to support the development and testing of autonomous systems (writes Nick Flaherty).

“Advanced driving features, which range from collision avoidance and automatic lane-keeping to fully automated driving, require miles of road test driving to ensure their safety,” said Hao Zheng, co-founder and CEO of RoboK. “Although simulation provides a resource-efficient alternative, it can be time-consuming. To accurately and realistically simulate all elements in the entire system can take many hours to run and process, even for a single driving scenario.

“Using the proprietary software developed by our team, which can run on general-purpose and low-power computing platforms, we can shorten the processing time from hours to seconds, drastically improving the efficiency of system-level validation and testing.

“We have reduced the computation time by developing a new method for fusing raw data directly from a range of sensors, such as cameras, radars, GPS and IMU, as well as for performing depth estimation to gain 3D information – all running on low-power computing platforms.

“This significantly reduces the memory and computing requirement.”

By fusing all this sensor information, a more complete picture can be created that takes advantage of the strengths of each type of sensor and offers more contextual information. Overlap between the sensors also improves perception. However, multi-modal sensor systems produce massive amounts of data, and the processing is computationally intensive.
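As a rough illustration of the kind of raw-data fusion and depth estimation described above, the sketch below projects radar returns into a camera image and uses them to attach a depth to a 2D camera detection. The camera intrinsics, the radar-to-camera transform and the median-range rule are placeholder assumptions for illustration, not RoboK's method.

import numpy as np

# Hypothetical camera intrinsics for a 1280 x 720 image (placeholder values)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical radar-to-camera extrinsics: rotation and translation in metres
R = np.eye(3)
t = np.array([0.0, 0.2, 0.0])

def project_radar_to_image(radar_xyz):
    # Transform 3D radar returns (N x 3) into the camera frame and project them
    # through the pinhole model, keeping only points in front of the lens
    cam_xyz = radar_xyz @ R.T + t
    cam_xyz = cam_xyz[cam_xyz[:, 2] > 0.5]
    uvw = cam_xyz @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, np.linalg.norm(cam_xyz, axis=1)

def depth_for_box(box, uv, ranges):
    # Attach a depth to a 2D camera detection box using the radar returns that
    # project inside it; the median is a simple robust estimate
    x1, y1, x2, y2 = box
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    if not inside.any():
        return None
    return float(np.median(ranges[inside]))

# Toy usage: two radar returns and one camera detection box
radar_points = np.array([[1.0, 0.0, 20.0],    # about 20 m ahead
                         [3.0, 0.5, 45.0]])   # about 45 m ahead, off to one side
uv, ranges = project_radar_to_image(radar_points)
print(depth_for_box((600, 300, 700, 420), uv, ranges))  # roughly 20 m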
High-level or object-level sensor fusion is where each sensor, through embedded software, provides an object list that includes information on an object such as its position, class, size and velocity. A certain amount of processing takes place at the sensor, and only the object-level information is used in the pipeline, reducing the memory and computational requirements. This distributed sensor fusion approach requires less computation at the central level, but relies on accurate perception from the individual sensors. Challenging weather or lighting conditions, for example, would affect the performance of the vision system.

RoboK has combined low-level and object-level sensor fusion to develop a more efficient means of 3D perception that can help reduce the computational requirement of complex autonomous vehicle sensor fusion. Rather than applying deep-learning models directly to raw sensor data, which requires power-hungry GPU processors, RoboK uses highly optimised models to process the sensor data to detect and localise objects and estimate depth simultaneously. This approach of fusing and processing raw sensor data with depth estimation is computationally efficient, and can be performed on low-power embedded systems without the need for hardware accelerators.

The lower computational requirement allows a high-fidelity digital twin of the sensor system to be created, which can help designers make architecture-level decisions without having to build the systems beforehand. Working with the PAVE360 design validation system from Siemens Digital Industries Software, RoboK used its 3D perception module in a digital twin demonstration of an autonomous emergency braking system.
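The object-level pipeline described above, in which each sensor reports a compact list of detected objects rather than raw data, can be sketched as follows. The DetectedObject fields and the naive distance-gated merging rule are illustrative assumptions only, not RoboK's implementation.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    # Hypothetical per-sensor object report: position, class, size and velocity
    sensor: str
    position: tuple      # (x, y) in the vehicle frame, metres
    size: tuple          # (length, width), metres
    velocity: tuple      # (vx, vy), m/s
    obj_class: str       # e.g. "car", "pedestrian"
    confidence: float    # 0 to 1

def fuse_object_lists(lists, gate=2.0):
    # Naive object-level fusion: reports from different sensors are treated as
    # the same object if their positions lie within `gate` metres of each
    # other, and the higher-confidence report is kept
    fused = []
    for report in (obj for lst in lists for obj in lst):
        for i, track in enumerate(fused):
            dx = report.position[0] - track.position[0]
            dy = report.position[1] - track.position[1]
            if dx * dx + dy * dy <= gate * gate:
                if report.confidence > track.confidence:
                    fused[i] = report
                break
        else:
            fused.append(report)
    return fused

# Toy usage: the camera and the radar both see the same car about 20 m ahead
camera = [DetectedObject("camera", (20.1, 0.2), (4.5, 1.8), (0.0, 0.0), "car", 0.7)]
radar = [DetectedObject("radar", (20.4, 0.1), (4.0, 1.8), (0.1, 0.0), "car", 0.9)]
print(fuse_object_lists([camera, radar]))  # one fused object, from the radar report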

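To give a feel for how a perception module slots into system-level validation of an emergency braking function such as the one demonstrated above, here is a minimal closed-loop sketch: a stand-in perception function reports the distance to a stationary obstacle, and a simple time-to-collision trigger applies the brakes. The thresholds, vehicle parameters and perception stub are assumptions for illustration, with no connection to PAVE360 or RoboK's module.

DT = 0.01            # simulation step, s
TTC_THRESHOLD = 1.5  # apply the brakes when time-to-collision drops below this, s
MAX_DECEL = 8.0      # braking deceleration, m/s^2

def perceived_distance(true_distance):
    # Stand-in for the 3D perception module: here it simply returns ground
    # truth; a digital twin would replace this with the fused sensor estimate
    return true_distance

def run_aeb_scenario(initial_gap=60.0, speed=20.0):
    # Simulate an ego vehicle closing on a stationary obstacle and return the
    # remaining gap when it stops (a negative value would mean a collision)
    gap, v = initial_gap, speed
    while v > 0.0:
        ttc = perceived_distance(gap) / v
        accel = -MAX_DECEL if ttc < TTC_THRESHOLD else 0.0   # AEB trigger
        v = max(0.0, v + accel * DT)
        gap -= v * DT
        if gap <= 0.0:
            return gap
    return gap

print(f"Stopped with {run_aeb_scenario():.1f} m to spare")  # roughly 5 m with these values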