
(particularly between 7 and 15 m away), and integrates a lens with a 12 mm focal length and a 54.5° horizontal FOV.

Each camera arrangement has its own algorithmic pipeline for detecting and positioning the cones. In the monocular pipeline, keypoints are detected and grouped into per-cone ‘bounding boxes’ using keypoint regression, and the 3D position of each cone is then estimated via a PnP (perspective-n-point) algorithm. In the stereo pipeline, features detected in corresponding bounding boxes are matched and triangulated to yield 3D position estimates of the cones.

The camera unit is mounted in the roll hoop, behind where the driver’s head would ordinarily go, giving it an optimal field of view by keeping it as high as practical. In addition to these perception sensors, a laser-based ground speed sensor is mounted between the nose and the front wheels, wheel speed sensors are installed at each wheel, and a steering angle sensor is fitted in the steering rack.

The team used a modified version of the YOLOv2 convolutional neural network to help the autonomy engine recognise the colour and shape of the cones seen by the vision sensor, and plot an effective path through them at speed.

To process all the raw data and output optimal commands to the drivetrain, through an electronic control unit, two onboard computers are installed in a master-slave configuration. The slave runs only non-essential software, such as processing the monocular camera data, so that a failure of that system does not compromise the vehicle. The master is a PIP39 rugged industrial computer from MPL, while the slave is an Nvidia Jetson TX2. An Nvidia GTX 1050 Ti serves as the GPU, as such a processor is critical for the machine vision software. A small fan, a heat sink and a water block for the CPUs and GPU, as well as an Ethernet switch, are included in the housing design.

SLAM

To localise itself within the course, the main computer runs the FastSLAM 2.0 algorithm, chosen because its particle filter can maintain multi-hypothesis data associations. This was key to compensating for the scarcity of unique landmarks on the cone-marked track.

The SLAM engine requires cone detections (laser-based, vision-based or both) as inputs, as well as velocity estimates. The latter are calculated using data from the wheel speed sensors, the ground speed sensor and the onboard navigation system.

The navigation system uses a Spatial Dual GNSS/INS from Advanced Navigation, which integrates a dual-antenna system to gather information about the vehicle’s heading. “That heading data is critical for velocity estimation, so maintaining measurements accurate to 0.08° was a core objective for the navigation component,” says Buhler.

The mapping is largely complete after the first lap; beyond that, further cone measurements provide no significant improvement. Once the vehicle detects that it has closed the first lap (by recognising cone positions and a heading similar to those at the start of the run), it switches to a localisation-focused operating mode.

This mode is centred on vehicle dynamics control for the remaining nine laps, to achieve the fastest possible time while minimising the chances of accidentally hitting a cone. It selects steering and throttle actions by minimising key ‘cost’ variables such as the time taken to progress along the track, the ‘slip’ of the car and so on.
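To make that cost-based selection more concrete, the sketch below scores a small grid of candidate steering and throttle commands over a short horizon against a cost that penalises slow progress, lateral deviation and a simple slip proxy. It is a minimal illustration only: the kinematic model, parameter values and weights are assumptions made for this example, not AMZ’s actual controller.

# Hypothetical sketch of cost-based command selection, loosely following the
# idea described above. All names, weights and the simple vehicle model are
# illustrative assumptions, not the gotthard's real software.

from dataclasses import dataclass
import math

@dataclass
class State:
    x: float      # progress along the track centreline, m
    y: float      # lateral offset from the centreline, m
    yaw: float    # heading relative to the centreline, rad
    v: float      # speed, m/s

def step(state: State, steer: float, throttle: float, dt: float = 0.05) -> State:
    """Very simple kinematic update used only to compare candidate commands."""
    a = 8.0 * throttle                                  # crude accel model, m/s^2
    v = max(0.0, state.v + a * dt)
    yaw = state.yaw + v * math.tan(steer) / 1.5 * dt    # 1.5 m wheelbase assumed
    return State(state.x + v * math.cos(yaw) * dt,
                 state.y + v * math.sin(yaw) * dt,
                 yaw, v)

def cost(traj: list[State], steers: list[float]) -> float:
    """Penalise lack of progress, lateral deviation and (as a proxy) steering-
    induced slip. Weights are placeholders."""
    progress = traj[-1].x - traj[0].x
    deviation = sum(abs(s.y) for s in traj) / len(traj)
    slip_proxy = sum(abs(st) * s.v for st, s in zip(steers, traj)) / len(traj)
    return -2.0 * progress + 5.0 * deviation + 1.0 * slip_proxy

def best_command(state: State, horizon: int = 20):
    """Exhaustively score a small grid of constant commands over a short horizon."""
    candidates = [(s, t) for s in (-0.2, -0.1, 0.0, 0.1, 0.2) for t in (0.3, 0.6, 1.0)]
    best, best_cost = None, float("inf")
    for steer, throttle in candidates:
        traj, s = [state], state
        for _ in range(horizon):
            s = step(s, steer, throttle)
            traj.append(s)
        c = cost(traj, [steer] * horizon)
        if c < best_cost:
            best, best_cost = (steer, throttle), c
    return best

if __name__ == "__main__":
    # Car travelling at 10 m/s, offset 0.5 m from the centreline
    print(best_command(State(0.0, 0.5, 0.0, 10.0)))

In practice the team’s controller works with a far richer vehicle model and track representation; the point here is simply how competing ‘cost’ terms trade off against one another when a command is chosen.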
[Image caption: The vision sensor is mounted atop the ‘driver’s’ seat, in the roll hoop, giving it the highest practical mounting point and therefore the widest FOV, while keeping it in a reasonably protected space where it won’t affect drag]

[Image caption: Exploded view of the camera sensor’s architecture]

Power and drivetrain

The gotthard is powered by lithium cobalt oxide battery cells, which deliver energy to four wheel-hub motors. The key advantage of having two motors at the front – apart from the extra acceleration this provides – is that the vehicle can also decelerate quite hard.
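The braking benefit of front motors comes from longitudinal load transfer: as the car decelerates, load shifts to the front axle, so the front tyres (and hence the front hub motors) can transmit more braking force before they saturate. The snippet below illustrates that effect with placeholder figures; the mass, geometry and deceleration are assumptions for illustration, not the gotthard’s specification.

# Hedged illustration of load transfer under braking; parameters are
# illustrative, not the gotthard's.

def axle_loads(mass=200.0, g=9.81, wheelbase=1.53, cg_height=0.30,
               front_weight_frac=0.5, decel=1.5 * 9.81):
    """Static axle load split plus longitudinal load transfer under braking."""
    total = mass * g
    static_front = front_weight_frac * total
    transfer = mass * decel * cg_height / wheelbase   # load shifted to the front, N
    return static_front + transfer, (total - static_front) - transfer

if __name__ == "__main__":
    front, rear = axle_loads()
    print(f"front axle: {front:.0f} N, rear axle: {rear:.0f} N")
    # With most of the load on the front axle under braking, front hub motors
    # can apply a large share of the total (regenerative) braking torque.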
