
A recent Oxford University Press collection that examines the ethics of artificial intelligence includes a chapter by Dr. Steve Petersen, associate professor of philosophy at Niagara University, on how a “superintelligence” – an AI much smarter than humans – would (or could) learn to be ethical.
Dr. Petersen’s chapter, “Superintelligence as Superethical,” which appears in Robot Ethics 2.0, challenges an argument from Nick Bostrom’s book Superintelligence, which outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals, Bostrom argues, seem too abstract to be pre-programmed with any confidence, and if a superintelligence’s goals are not explicitly favorable toward humans, it will extinguish the human race, not out of any malice, but simply because it will want humanity’s resources for its own purposes.
In response, Dr. Petersen offers a much brighter outlook. He argues that if a superintelligence must learn its complex final goals, then it must, in effect, reason about those goals. And because a superintelligence will see especially clearly that there are no sharp lines between one agent’s goals and another’s, that reasoning could automatically be ethical in nature.
Dr. Petersen joined Niagara University’s philosophy department in 2006, after serving in a postdoctoral position at Kalamazoo College. He holds bachelor’s degrees in philosophy and math from Harvard University and a doctorate in philosophy from the University of Michigan.
To learn more about Niagara University, please visit www.niagara.edu.