in human-robot interaction studies. AI software can now recognise human poses in real time from 3D camera imagery, a factor at play in work that seeks to develop anticipation capabilities in robots.

For example, a group of researchers at German universities is working on probabilistic movement modelling for intention inference in human-robot interaction. They are designing an intention-driven dynamics model that infers intention from observed movements using Bayes’ theorem, which starts with an educated guess known as a ‘prior’ and then iteratively refines it in the light of new information (a worked sketch of this updating loop appears at the end of this article).

When applied to persistent surveillance systems, the ability to recognise and categorise people and what they are doing has clear implications for the needle-in-a-haystack task of extracting patterns of activity that indicate an impending terrorist attack.

In his lecture, Prof Russell says the motives for developing AI include its ‘cool’ factor, researchers’ confidence that they can succeed, and the fact that intelligence in general is responsible for civilisation, so more is better. He cautions, however, that researchers, policy makers and the wider community ought to think very carefully about what might happen if they do succeed.

What if we succeed?

In a paper published in 1965, IJ Good, a statistician who worked with computer pioneer Alan Turing, wrote, “The first ultra-intelligent machine is the last invention that man need ever make. An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. It is curious that this point is so seldom made outside of science fiction.”

Success in developing ultra-intelligent machines, Prof Russell says in his lecture, could be the biggest event in human history. “We don’t want it to be the last event in human history,” he adds.

Characterising what is going wrong now, he says it is as if humanity had received an email from a superior alien civilisation warning that it would arrive in 30-50 years’ time, and humanity’s only response was an out-of-office auto-reply. We basically haven’t thought about it, he adds.

“And there are lots of silly things that people are saying,” he continues. “Some people talk about machine IQ, which doesn’t exist. And they say that this IQ is increasing exponentially with Moore’s Law, which is rubbish, because Moore’s Law only allows you to get the wrong answer faster.

“Some people think the risk of AI comes from armies of blood-crazed robots, and then they start showing pictures of Terminators. Some people think the risk comes from spontaneous robot consciousness, that somehow AI is just going to go rogue and things are going to become conscious and malevolent. And then they show pictures of Terminators.

“In fact, any time a journalist mentions AI, they will show pictures of Terminator robots. I keep trying to tell them not to, but they can’t help themselves.” That is why there are absolutely no references to Terminators in this article.

Give AI the right goals

The real risk, Prof Russell argues, lies in something much simpler and more straightforward. It is that AI machines are becoming extremely good at achieving the goals that humans set them, such as winning games of chess or Go.
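To make the Bayesian updating described earlier concrete, here is a minimal sketch in Python. It is an illustration only, not the German researchers’ actual model: the intentions, observations and probability values are hypothetical, and a real system would derive its likelihoods from a learned movement model fed by 3D camera pose data.

# Minimal Bayesian intention-inference sketch (illustrative only; the
# intentions and probabilities below are hypothetical, not taken from
# the research described in this article).

# Prior: the 'educated guess' about what the human intends to do.
prior = {'reach_for_tool': 0.5, 'hand_over_part': 0.3, 'walk_away': 0.2}

# Likelihoods: P(observed movement | intention). A real system would
# learn these from a movement model driven by 3D camera pose data.
likelihood = {
    'arm_extends': {'reach_for_tool': 0.7, 'hand_over_part': 0.6, 'walk_away': 0.1},
    'torso_turns': {'reach_for_tool': 0.2, 'hand_over_part': 0.3, 'walk_away': 0.8},
}

def update(belief, observation):
    # One step of Bayes' theorem: the posterior is proportional to
    # likelihood times prior, renormalised so the values sum to 1.
    posterior = {i: likelihood[observation][i] * p for i, p in belief.items()}
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

belief = prior
for obs in ['arm_extends', 'arm_extends']:  # a stream of observed movements
    belief = update(belief, obs)
    print(obs, '->', belief)

Each observation multiplies the current belief by how well every candidate intention explains the movement, then renormalises, so over a stream of pose observations the belief concentrates on the most plausible intention. That concentration is what gives a robot its anticipation capability.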
The successful development of legged locomotion in robots is an achievement that has been dubbed a ‘Holy cow! moment’. This is the Alpha Dog, developed to carry equipment and supplies for soldiers (Courtesy of Boston Dynamics)