Unmanned Systems Technology | Issue 40 | October/November 2021
In this issue: ANYbotics ANYmal D | AI systems focus | Aquatic Drones Phoenix 5 | Space vehicles insight | Sky Eye Rapier X-25 | FlyingBasket FB3 | GCS focus | AUVSI Xponential 2021
Platform one

Machine learning: Vision systems get AI boost

Nvidia is working on ways to improve the performance of machine learning vision systems for autonomous vehicles by using the open-source ROS 2 Robot Operating System (writes Nick Flaherty).

Nvidia and Open Robotics are developing ROS 2 software blocks to run on Nvidia's Jetson edge AI platform and GPU-based systems. The key issue is improving the management of data flow and shared memory between the GPU and the Arm CPU cores on the Jetson platform. That will significantly improve the performance of applications that have to process high-bandwidth data from sensors such as cameras and Lidars in real time. Software resulting from the collaboration is expected to be released in the spring of next year.

Open Robotics is also working to link its Ignition Gazebo simulation tool with Nvidia's Isaac Sim on Omniverse. Isaac Sim already supports ROS 1 and ROS 2, and the integration will provide a full robotics and synthetic data generation toolchain with streamlined access to a platform for training neural network image detection systems in virtual environments.

Ignition Gazebo is already being used in developments for the Subterranean Challenge organised by DARPA, the US defence research agency. The challenge aims to stimulate the development of navigation and sensing systems for autonomous systems in underground mining and search & rescue applications.

Connecting the two environments will allow ROS developers to move their robots and environments easily between Ignition Gazebo and Isaac Sim, to run large-scale simulations and take advantage of features in each simulator, including high-fidelity dynamics, accurate sensor models and photorealistic rendering for generating synthetic data to train and test AI models.

Nvidia is also optimising its GEM software packages for ROS for faster image processing and more efficient neural network perception models.
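The shared-memory transport mentioned above can be illustrated in miniature. This is only a sketch of the zero-copy idea, not the Nvidia or ROS 2 implementation: rather than serialising a camera frame and copying it through the middleware for each subscriber, the publisher writes it once into a named shared-memory segment and the subscriber maps the same buffer. All names and sizes here are illustrative assumptions.

```python
# Minimal sketch of shared-memory (zero-copy) frame passing, using only the
# Python standard library. Segment names and frame sizes are illustrative;
# this is not the ROS 2 or Nvidia API.
from multiprocessing import shared_memory

FRAME_BYTES = 640 * 480  # one 8-bit greyscale VGA frame (illustrative)

def publish_frame(name: str, frame: bytes) -> shared_memory.SharedMemory:
    """Write a frame once into a named shared-memory segment."""
    shm = shared_memory.SharedMemory(name=name, create=True, size=len(frame))
    shm.buf[:len(frame)] = frame  # the single copy into shared memory
    return shm

def subscribe_frame(name: str, size: int) -> bytes:
    """Attach to the segment by name and read the frame in place,
    instead of receiving a serialised copy over a socket or pipe."""
    shm = shared_memory.SharedMemory(name=name)
    data = bytes(shm.buf[:size])
    shm.close()
    return data

if __name__ == "__main__":
    frame = bytes(range(256)) * (FRAME_BYTES // 256)
    seg = publish_frame("cam0_frame", frame)
    assert subscribe_frame("cam0_frame", FRAME_BYTES) == frame
    seg.close()
    seg.unlink()  # release the segment once all readers are done
```

In a real middleware the segment lifetime and reader synchronisation are managed for the application; the point of the sketch is simply that the high-bandwidth payload is written once rather than copied per subscriber.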
The packages run on the GPU, and so reduce the load on the host CPU while providing significant performance gains. The new GEMs include a stereo disparity and point-cloud package, colour space conversion, lens distortion correction, and detection of AprilTags. AprilTags allow a position in three dimensions to be derived using a single camera, and are used for object tracking and visual localisation. The faster the system can detect the position and orientation of a tag, the faster the closed control loop can run, giving more responsive behaviour. A native ROS 2 package wraps Nvidia's GPU-accelerated AprilTag detector for fast detection on a GPU.

In addition to being a robotic simulator, Isaac Sim can generate synthetic data to train and test perception models. This allows machine learning visual perception frameworks to be tested in a virtual environment to improve the quality of the data. Synthetic datasets generated by Isaac Sim can also be fed directly into Nvidia's TAO, an AI model adaptation platform that tailors perception models to the specific working environment of an autonomous system. That allows testing of a perception stack in a given working environment to begin before any data has been collected from the target surroundings.

[Image caption: The latest version of the Robot Operating System (ROS 2) is being optimised for robot systems using the Jetson TX AI technology]
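The geometry behind a stereo disparity package can be shown with the standard pinhole relation: for a calibrated stereo pair, depth Z = f·B/d, where f is the focal length in pixels, B the baseline in metres and d the disparity in pixels. The sketch below uses illustrative numbers, not parameters from Nvidia's GEM packages.

```python
# Sketch of the maths a stereo disparity / point-cloud package relies on.
# Camera parameters below are illustrative, not from any Nvidia package.

def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def point_from_pixel(u: float, v: float, disparity_px: float,
                     focal_px: float, cx: float, cy: float,
                     baseline_m: float) -> tuple:
    """Lift pixel (u, v) with known disparity to a 3D point (X, Y, Z)
    in the camera frame, using the pinhole model."""
    z = depth_from_disparity(disparity_px, focal_px, baseline_m)
    return ((u - cx) * z / focal_px, (v - cy) * z / focal_px, z)

# Example: f = 700 px, B = 0.12 m, d = 14 px  ->  Z = 6.0 m
print(depth_from_disparity(14.0, 700.0, 0.12))
```

Repeating the lift for every pixel of a disparity image yields the point cloud; doing that per frame for high-resolution cameras is exactly the kind of embarrassingly parallel workload that benefits from running on the GPU rather than the host CPU.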