In a recent article in the Bulletin of the Atomic Scientists, Susan D’Agostino raises provocative questions facing the artificial intelligence research community. She asks whether programmers and researchers, like practitioners in the medical profession, need their own Hippocratic Oath to do no harm.
Her article begins with some unsettling statements from AI-programmed robots. She writes, “When a sentient, Hanson Robotics robot named Sophia was asked whether she would destroy humans, it replied, ‘Okay, I will destroy humans.’ Philip K Dick, another humanoid robot, has promised to keep humans ‘warm and safe in my people zoo.’ And Bina48, another lifelike robot, has expressed that it wants ‘to take over all the nukes.’”
As famed physicist Stephen Hawking warned, left unrestrained, AI could become an enemy to mankind and “decide” to eliminate us as a threat. Left to evolve without limits, AI might unleash a terror on society akin to Arnold Schwarzenegger’s machine in The Terminator. As D’Agostino points out, there is no easy answer, but there is a clear need to confront the unintended consequences of AI’s uncontrolled advancement. For all the good it can do, if left without restraints, the harm it can perpetrate may far outweigh the benefits.
But who is going to step in and establish limits? The European Union is trying with legislation, but according to D’Agostino its approach is not practical. The scientists and others in the game are most certainly not asking for regulation.
I explore the dark side of AI in my novel, Dragon on the Far Side of the Moon, where the extremes of a supercomputer’s “emotionless, logical thinking” lead to an unsettling conclusion. While the story is fiction today, one cannot help but wonder whether it may be reality tomorrow.