
Platform one | Unmanned Systems Technology | February/March 2022

Navigation AI system for ocean studies

Engineers at Caltech, ETH Zurich and Harvard have developed a machine learning technique that allows autonomous craft to navigate in the oceans (writes Nick Flaherty).

“When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 ft away at the surface,” said Professor John Dabiri at Caltech. “We also can’t feed them data about the local ocean currents they need to navigate, because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves.”

The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a small, palm-sized robot that runs the algorithm; the same software could power seaborne craft. The goal would be to create an autonomous system to monitor the condition of the planet’s oceans. Robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.

The researchers used reinforcement learning (RL) networks rather than conventional deep neural networks. RL networks do not train on a static data set; instead, they train on experience as fast as they can collect it. The code was installed on a microcontroller board called Teensy, which measures 10 x 5 mm and uses about 500 mW of power.

Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location with minimal power used. To aid its navigation, the simulated swimmer had access only to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast towards the desired target. In a physical robot, the AI would similarly have access only to information that could be gathered from an onboard gyroscope and accelerometer, both of which are relatively small and low-cost sensors for a robotic platform.

“We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” said Prof Dabiri.

The latest Teensy 4.0 board uses a 600 MHz ARM Cortex-M7 processor from NXP that is compatible with the Arduino development tools. The team used the board to develop the Caltech Autonomous Reinforcement Learning Robot, or CARL-bot, which was dropped into a two-storey water tank on Caltech’s campus to teach it to navigate currents.

The researchers found that a velocity-sensing approach significantly outperformed a biomimetic vorticity-sensing approach. A naive policy of swimming towards the target was highly ineffective: swimmers using this approach were swept away by the background flow, and reached the target only 1.3% of the time on average. Using RL with local flow information about the swirling vortices in the water gave an average success rate of 47.2%. The researchers were surprised, however, that using data on the velocity of the surrounding water gave a near-100% success rate, even with a complex, unsteady flow of water. This suggests that RL coupled with an onboard velocity sensor may be an effective tool for robot navigation.
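As a concrete illustration of the approach described above, the sketch below trains a point swimmer by tabular Q-learning to reach a target in an analytic vortex flow while observing only the direction of the flow at its own position. It is a toy, not the researchers’ code: the study used deep RL, and the flow field, reward shaping, swim speed and every other constant here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy background flow: a periodic array of counter-rotating vortex cells.
def flow_velocity(pos, strength=1.0):
    x, y = pos
    u = strength * np.cos(np.pi * x) * np.sin(np.pi * y)
    v = -strength * np.sin(np.pi * x) * np.cos(np.pi * y)
    return np.array([u, v])

# Illustrative environment constants (not the values used in the research).
SWIM_SPEED = 0.8                      # swim speed relative to peak flow speed
DT = 0.02                             # integration time step
START, TARGET = np.array([0.2, 0.2]), np.array([1.5, 1.5])
TARGET_RADIUS, MAX_STEPS = 0.1, 400
N_HEADINGS = 8                        # discrete swim directions (the actions)
HEADINGS = [np.array([np.cos(a), np.sin(a)])
            for a in np.linspace(0.0, 2.0 * np.pi, N_HEADINGS, endpoint=False)]

def observe(pos):
    """'Velocity sensing': bin the direction of the *local* flow only."""
    u, v = flow_velocity(pos)
    angle = np.arctan2(v, u) % (2.0 * np.pi)
    return int(angle / (2.0 * np.pi) * N_HEADINGS) % N_HEADINGS

def step(pos, action):
    """One step: the swimmer is carried by the flow plus its own swimming."""
    vel = flow_velocity(pos) + SWIM_SPEED * HEADINGS[action]
    new_pos = pos + DT * vel
    done = np.linalg.norm(new_pos - TARGET) < TARGET_RADIUS
    # Reward shaping: bonus on arrival, small distance penalty otherwise.
    reward = 10.0 if done else -DT * np.linalg.norm(new_pos - TARGET)
    return new_pos, reward, done

# Tabular Q-learning: the table improves as experience is collected,
# rather than being fitted once to a static data set.
Q = np.zeros((N_HEADINGS, N_HEADINGS))          # Q[observation, action]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2

for episode in range(3000):
    pos, obs = START.copy(), observe(START)
    for _ in range(MAX_STEPS):
        if rng.random() < EPSILON:              # explore
            action = int(rng.integers(N_HEADINGS))
        else:                                   # exploit the current estimate
            action = int(np.argmax(Q[obs]))
        pos, reward, done = step(pos, action)
        next_obs = observe(pos)
        Q[obs, action] += ALPHA * (reward + GAMMA * np.max(Q[next_obs]) - Q[obs, action])
        obs = next_obs
        if done:
            break

print("Learned swim heading per local-flow-direction bin:", Q.argmax(axis=1))
```

The article’s naive baseline (always swim straight at the target) can be mimicked in the same toy by replacing the learned action with the bearing to the target, and swapping the velocity-based observe() for a local-vorticity observation corresponds to the biomimetic strategy the study compared against, although a toy this small will not reproduce the reported success rates.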
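On the hardware side, one reason such a policy fits a microcontroller drawing about 500 mW is that once training is finished, acting reduces to a table lookup (or one small matrix multiply) per control step. The sketch below shows that control-loop logic in Python for clarity only; the real CARL-bot firmware would run as C++ on the Teensy, and the dead-reckoning flow estimate, the placeholder policy table and all constants are assumptions made for illustration.

```python
import numpy as np

N_HEADINGS = 8
# A lookup-table policy of the kind tabular RL produces: the best swim heading
# for each local-flow-direction bin. The values here are placeholders.
POLICY_TABLE = np.array([2, 3, 3, 4, 6, 7, 7, 0])

class FlowSensingController:
    """Estimates the local flow from IMU data and picks the next swim heading."""

    def __init__(self, dt=0.02):
        self.dt = dt
        self.vel = np.zeros(2)        # estimated robot velocity over ground

    def update(self, accel_xy, swim_cmd_xy):
        # Crude dead reckoning: integrate accelerometer output to velocity.
        # This drifts quickly in practice; a real system would fuse gyro and
        # accelerometer data with a proper filter rather than raw integration.
        self.vel = self.vel + np.asarray(accel_xy, dtype=float) * self.dt
        # Local flow ~= velocity over ground minus the commanded swim velocity.
        flow = self.vel - np.asarray(swim_cmd_xy, dtype=float)
        angle = np.arctan2(flow[1], flow[0]) % (2.0 * np.pi)
        obs_bin = int(angle / (2.0 * np.pi) * N_HEADINGS) % N_HEADINGS
        return int(POLICY_TABLE[obs_bin])   # one table lookup per control step

# Example of one control step with made-up sensor readings.
controller = FlowSensingController()
heading_index = controller.update(accel_xy=[0.05, -0.02], swim_cmd_xy=[0.1, 0.0])
```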
The researchers are now looking at other types of flow disturbance the robot could encounter on a mission in the ocean, such as swirling vortices versus streaming tidal currents, by adding ocean-flow physics to the RL strategy.

(Image caption: The CARL-bot can navigate in the oceans using reinforcement learning)
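The flow disturbances such a strategy is trained against can be kept separate from the learning code so that different models can be swapped in. The functions below sketch two stand-ins for the toy environment used earlier: a spatially uniform stream that reverses on a tidal cycle, and a single Lamb-Oseen-style vortex. Both models, and every constant in them, are assumptions for illustration, not the ocean-flow physics the researchers are adding.

```python
import numpy as np

def tidal_stream(pos, t, speed=0.5, direction=0.0, period=200.0):
    """Spatially uniform current whose strength reverses on a 'tidal' cycle."""
    phase = np.sin(2.0 * np.pi * t / period)
    return speed * phase * np.array([np.cos(direction), np.sin(direction)])

def single_vortex(pos, t, centre=(1.0, 1.0), circulation=1.0, core=0.2):
    """Lamb-Oseen-style vortex: azimuthal flow around a fixed centre (t unused)."""
    r_vec = np.asarray(pos, dtype=float) - np.asarray(centre, dtype=float)
    r = np.sqrt(np.dot(r_vec, r_vec)) + 1e-9
    u_theta = circulation / (2.0 * np.pi * r) * (1.0 - np.exp(-(r / core) ** 2))
    tangent = np.array([-r_vec[1], r_vec[0]]) / r
    return u_theta * tangent

def combined_flow(pos, t):
    """Superpose disturbances; an environment's flow_velocity() can call this."""
    return tidal_stream(pos, t) + single_vortex(pos, t)
```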
