32 Dossier | LOXO Alpha & Digital Driver

fourth Lidar installed on the back, again to ensure effective coverage around a vehicle of the ID Buzz’s size.”

The last perception sensors onboard are ultrasonics, integrated around the vehicle for low-speed, energy-efficient detection of surrounding objects during parking manoeuvres.

For conventional localisation input, the Alpha integrates satellite navigation receivers, with both differential GNSS and RTK GNSS available. The former provides higher accuracy than standalone GNSS by applying correction data in real time, while the latter achieves the highest possible accuracy, down to centimetre level (as many of our readers will know).

Beyond all of these sensors, significant further processes and technologies onboard feed into the LOXO Alpha’s localisation methods, as the company explained to us.

End-to-end AI

In our past investigations of self-driving road vehicles, we have seen vehicles and software that used forms of simultaneous localisation and mapping (SLAM), combining multiple low-level sensor inputs to understand their location and surroundings.

We have also seen perception systems using machine vision based on convolutional neural networks (CNNs) and image data-based training to recognise and classify objects, placing annotated bounding boxes around them to signify image recognition and segmentation to developers and remote monitoring staff. The latter type of perception system will also sometimes inform a separate onboard localisation system by recognising landmarks or clusters of structures specific to known streets.

Both systems are well established, and are often used to inform additional onboard algorithms for path planning and control, as is the case in the LDD and the Alpha. However, certain perceived insufficiencies drove LOXO to develop a third, more distinctive component of its driving intelligence, which typically takes precedence in self-driving decisions.
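The modular architecture described above can be sketched in a few lines of Python. This is purely illustrative: every function name and value here is hypothetical, standing in for the SLAM, CNN perception, planning and control stages the article mentions, and showing how each stage’s output becomes the next stage’s input.

```python
# A minimal sketch of a modular autonomy pipeline, in which perception,
# localisation, planning and control are separate stages whose outputs
# must be handed from one to the next. All names and values are
# hypothetical stand-ins, not LOXO's actual software.

def perceive(camera_frame):
    # Stand-in for a CNN detector: returns annotated bounding boxes.
    return [{"label": "pedestrian", "box": (120, 80, 40, 90)}]

def localise(gnss_fix, lidar_scan):
    # Stand-in for SLAM/RTK fusion: returns an (x, y, heading) pose.
    return (12.5, 3.2, 0.1)

def plan(pose, obstacles):
    # Conservative logic: any detected obstacle forces a slow crawl,
    # the kind of safe-fallback behaviour a modular stack tends toward.
    return {"target_speed": 1.0 if obstacles else 8.0}

def control(plan_out):
    # Maps the plan onto low-level actuation commands.
    return {"throttle": plan_out["target_speed"] / 10.0, "brake": 0.0}

obstacles = perceive(camera_frame=None)
pose = localise(gnss_fix=None, lidar_scan=None)
command = control(plan(pose, obstacles))
print(command)
```

The hand-offs between stages are exactly the integration points the next section discusses: each stage only sees the previous stage’s summarised output, never the raw sensor data.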
“If you engineer your perception, localisation and path planning as separate input streams, you’ll eventually have to integrate and harmonise them for the control system, and this is where problems start to appear,” Amini says.

“In software engineering, integrating different functions is the hardest challenge. It can cause lots of safety issues because you might have multiple different AI models, and hence conflicting sets of logic, and it’s not always clear in the first place what the best way to combine differing AI models is. You can wind up losing considerable vehicle performance because engineers are forced to program the vehicle to constantly fall back on the safest of its available movement or braking options, due to the lack of confidence in the data and decisions that get through.”

Therefore, while both SLAM and a conventional perception algorithm trained through a CNN are used in the LDD and Alpha, LOXO has also engineered an autonomy approach it refers to in-house as End-to-end, which its systems use for their primary navigation and control intelligence, thereby avoiding that hamstringing of vehicle performance.

This approach is centred around an AI model called LOXO Fuser, which consumes the raw, unprocessed sensor data inputs and uses them to output vehicle commands. Rather than being based on separate localisation, perception and path-planning algorithms, the model has been trained and optimised using transformers (also called transformer neural networks, or TNNs).

“ChatGPT is well known to many now – the ‘T’ stands for ‘transformers’. They are a fairly new mathematical discipline, now

April/May 2025 | Uncrewed Systems Technology

Sensors are integrated conformally into the Alpha’s body, including three InnovizOne Lidars, with LOXO preferring their solid-state architecture over conventional spinning-mirror Lidars
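The core mathematical operation of a transformer is scaled dot-product attention, in which every input token weighs its relevance to every other token before their features are mixed. The sketch below implements that operation in plain Python; the “sensor token” values are hypothetical and purely illustrative — this is not LOXO Fuser’s code, only a minimal demonstration of the mechanism transformers are built on.

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights
    # that are positive and sum to one.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention, the core transformer operation.
    Each query scores every key, and the softmaxed scores weight a
    mix of the value vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0]))]
        out.append(mixed)
    return out

# Toy "sensor tokens": 2-dimensional embeddings standing in for
# camera, Lidar and GNSS features (hypothetical values).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
fused = attention(tokens, tokens, tokens)  # self-attention mixes the tokens
print(len(fused), len(fused[0]))  # one fused 2-d vector per input token
```

In a real end-to-end model, many such attention layers are stacked with learned projection weights, and the final layer’s output is decoded into vehicle commands rather than returned as raw vectors.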