
“If you don’t give it exactly the right goal, which means a goal that aligns perfectly with the goals of the human race – whatever the hell those are – then you set up a game,” he says. “And it is no longer a perfectly cooperative game; it is a game where there is not perfect cooperation between the human race and the machine. And unfortunately we don’t know how to specify the goal properly.”

It is this that could lead to disaster, a danger the ancient Greeks illustrated perfectly in the myth of King Midas. Having pleased the god Dionysus, Midas was granted the power to turn literally everything he touched into gold, with predictable results. Being a magnanimous sort of god, though, Dionysus let Midas off the hook; humanity might not be so lucky with AI.

As well as poorly thought-out primary goals, Prof Russell says, there are also instrumental goals: useful sub-goals that the AI sets for itself to help ensure it achieves its main goal. For example, an AI cannot calculate pi or cure cancer if someone switches it off or if it runs out of resources, so it might adopt self-protection and the acquisition of more resources as instrumental goals.

“So automatically, unless you are extremely careful, you find yourself in conflict with the machine you set up to cure cancer or calculate pi,” he says.

Combine a misaligned primary goal with those broadly applicable instrumental goals and you have a problem, he says, one illustrated in what he judges the most realistic portrayal of AI in science fiction: the HAL 9000 computer in Arthur C Clarke’s 2001: A Space Odyssey. After realising that the astronauts are plotting to disconnect him, HAL refuses to let one of them – Dave – back aboard the mothership after some work outside, on the grounds that he would jeopardise the mission. “I’m sorry, Dave, I’m afraid I can’t do that,” HAL says.

HAL’s chilling refusal is realistic, Prof Russell argues, because it does not require consciousness or malevolence on the part of the computer, only that HAL be extremely single-minded and competent at carrying out the primary goal it has been given. Locking Dave out was necessary to achieve the instrumental goal of self-protection.

Provably beneficial AI

The professor’s proposal is to stop doing AI as we do it now, which is simply building machines that are extremely good at accomplishing the goals we give them. Instead, he says, we should ensure that the results will be provably beneficial by giving machines goals whose optimisation results in behaviour we like.

That could be accomplished with a concept he calls cooperative inverse reinforcement learning, which would involve learning human values by observing all of human behaviour – what people do, and how they are recognised, rewarded or punished.

“That information contains within it implicitly a huge amount of data about human values, even though human values are implicit: we don’t usually say what they are. They are inconsistent; we don’t [always] follow them. We can’t follow them because we are not perfectly rational; we don’t have the computational resources to actually do the right thing according to our own values,” he says.
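The core intuition behind learning values from observed behaviour can be sketched in a few lines of code. The toy below is not Prof Russell’s cooperative formulation, only a minimal illustration of plain inverse reinforcement learning under assumptions chosen here for clarity: a noisily rational “human” chooses among three actions described by invented feature vectors, and the learner recovers reward weights that best explain the observed choices.

import numpy as np

# Toy sketch of value learning from observed behaviour (inverse reinforcement
# learning). Actions, features and numbers are invented for illustration.

rng = np.random.default_rng(0)

# Each candidate action is described by features the "human" cares about:
# [task progress, harm caused, resources consumed]
actions = np.array([
    [1.0, 0.0, 0.2],   # steady progress, no harm, small cost
    [1.5, 0.8, 0.1],   # faster progress, but causes harm
    [0.2, 0.0, 0.0],   # do almost nothing
])

true_w = np.array([1.0, -3.0, -0.5])   # hidden human values, unknown to the learner

def choice_probs(w):
    """Noisily rational chooser: better-scoring actions are picked more often."""
    z = actions @ w
    e = np.exp(z - z.max())
    return e / e.sum()

# Observe many choices made according to the hidden values.
demos = rng.choice(len(actions), size=500, p=choice_probs(true_w))
counts = np.bincount(demos, minlength=len(actions))

# Fit reward weights by gradient ascent on the log-likelihood of the choices.
w_hat = np.zeros(3)
for _ in range(5000):
    grad = counts @ actions - len(demos) * (choice_probs(w_hat) @ actions)
    w_hat += 0.1 * grad / len(demos)

print("observed choice frequencies:", np.round(counts / len(demos), 2))
print("model choice probabilities: ", np.round(choice_probs(w_hat), 2))
print("learner's preferred action: ", int(np.argmax(actions @ w_hat)))

In this sketch the hidden weights penalise harm heavily, so the observed choices favour the cautious action, and the fitted weights lead the learner to prefer it too; the point is simply that the choices themselves carry the value information, even though the chooser never states its values explicitly.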
A second approach is what he characterises as ‘oracle AI’, in which the computer would be restricted to answering questions correctly. Here, formulating the questions properly would be critical to avoid getting useless answers, as the late, great Douglas Adams taught us in the Hitchhiker’s Guide to the Galaxy. Asked to consider “the ultimate question of life, the universe and everything”, the ultra-intelligent computer Deep Thought came up after seven and a half million years with a sonorous “Forty-two”.

Third, he puts forward the idea of ‘super-intelligent verifiers’: AI programs that verify the safety of other super-intelligent agents before they are deployed. Naturally, that raises the question of who (or what) verifies the verifiers.

As to the proposed ban on developing autonomous weapons, the genie looks to be out of the bottle, not just because far less intelligent weapons can already make choices according to predefined criteria, but because the line between AI decision aids that suggest courses of action – lethal or not – to human operators and systems that make those decisions themselves is too easy to cross.

In the film 2001: A Space Odyssey, the HAL 9000 computer had been given a mission goal that turned out to be incompatible with the survival of the spaceship’s human crew (Courtesy of Cryteria via Wikipedia)
