People have always felt a sense of unease and even dread at the prospect of supremely intelligent machines, and in Stuart Russell's AI lecture, he hopes to allay some of those fears.
There are, of course, legitimate problems that can arise from increasingly powerful AI. For one, programmers need to be sure that they give their systems the right objectives. To use Russell's mythical example, King Midas wished for everything he touched to be turned to gold, and he got precisely that. Programmers need to avoid King Midas' problem when creating task-driven AI.
In an effort to solve this and other problems, Russell has worked to create a set of three principles for what he calls human-compatible AI. The first is that "the robot's only objective is to maximize the realization of human values." The second states that the robot doesn't know what those human values are initially. Finally, the third principle is that robots will acquire knowledge about human values by observing human behavior.