Hippocratic Oath for AI Researchers

Susan D’Agostino presents a growing dilemma facing the artificial intelligence research community in a chilling article for the Bulletin of the Atomic Scientists. She asks whether programmers and researchers, like those in the medical profession, need guardrails and ethical rules akin to the Hippocratic oath to do no harm. Without restraint, will AI unleash a holy terror on society, like Arnold Schwarzenegger in Terminator? As she points out, there is no easy answer, but there is clearly a need to deal with the unintended consequences of the unchecked advancement of AI. For all the good it can do, if left unrestrained, the harm it can do may far outweigh the benefits. As she concludes, “[S]ince AI’s potential to benefit humanity goes hand in hand with a theoretical possibility of destroying human life, researchers and the public might ask an alternative question: if not a Hippocratic oath for AI, then what?”

When Sophia,[1] a robot made by Hanson Robotics, was asked if she would destroy humans, she replied, “Okay, I’ll destroy humans.” Philip K. Dick, another humanoid robot, promised to keep humans “warm and safe in my people zoo.” And Bina48, another realistic robot, said she wanted to “take control of all nuclear weapons.”


James G. Williams