Science fiction writers have long reveled in the trope of superintelligent computers that, upon gaining some form of sentience, turn on their human creators, but Grady Booch's AI discussion aims to dispel that pesky cultural notion.
There's no question that developments in artificial intelligence are moving forward at a staggering pace. Twenty or so years ago, a computer beating a human at a single game of chess was considered a triumph; today, people get upset if it takes a computer more than five seconds to recognize their current location and desired destination, map an efficient route between them, offer alternatives, and point out interesting stops along the way. Yet there is still hand-wringing over the future of AI, one in which machine intelligence equals or exceeds that of humans.
The reason AI may not be so fearsome in the future has to do with how it's created. Modern AI isn't built so much as taught. To teach an AI to recognize flowers, one simply shows it thousands of pictures of flowers until the computer can pick out the pattern consistently. The same holds for any subject: to teach a computer the law, the AI would be fed thousands of legal statutes and court cases. Ultimately, as long as the human systems it learns from are sound and good at their core, so too will be the AI.
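To make the teach-by-example idea concrete, here is a minimal sketch in Python: a toy nearest-centroid classifier that "learns" two categories from labeled feature vectors rather than from hand-written rules. The flower names and the (petal length, petal width) measurements are invented for illustration; real systems learn from thousands of examples and far richer features, but the principle is the same.

```python
# Toy illustration of learning from examples: no rules are written down;
# the "knowledge" is just averages computed from labeled training data.

def train(examples):
    """examples: list of (features, label) pairs. Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (petal_length, petal_width) per flower.
examples = [([1.4, 0.2], "daisy"), ([1.5, 0.3], "daisy"),
            ([4.7, 1.4], "rose"),  ([4.5, 1.5], "rose")]

centroids = train(examples)
print(classify(centroids, [1.3, 0.25]))  # prints "daisy"
```

Show it more labeled examples and the centroids shift accordingly; the program's behavior comes entirely from the data it was taught, which is why the quality of that data matters so much.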