The prima facie assumption in Sam Harris's AI talk is that robots, machines, and software will almost certainly lead to the apocalypse. Though it might seem outlandish, Harris explains how humanity's current attitude toward such a prospect, combined with ever-quickening advances in technology, makes an AI revolution a reasonable possibility.
In terms of technological development, Harris posits that such advances are simply too beneficial to stop. The only way progress would halt is if humans became literally unable to invent; essentially, some massive global catastrophe would have to occur. In other words, as long as humans can advance, humans will advance.
This raises the possibility of eventually building machines that are smarter than humans. While that wouldn't automatically result in a malevolent robot apocalypse, such machines might treat humans the way humans treat ants -- with complete indifference.
Though Harris offers no solution to the problem, he implores everyone working in artificial intelligence to consider it before plunging headlong into what turns out to be morally confusing territory.