In his artificial intelligence talk, Nick Bostrom theorizes about the future of machine intelligence and discusses what happens when computers become smarter than humans. While it's easy to dismiss this line of thinking as science fiction, the speaker breaks down why, on a biological level, it is neither far-fetched nor crazy.
The speaker estimates that by 2040 or 2050, artificial intelligence will match human levels and then rocket past them considerably quickly. When this happens, power will become an issue, and the fate of humanity will depend on how superintelligence behaves toward us. The talk suggests thinking of superintelligence as an optimization process and explains the necessity of incorporating everything you care about into the objective you're asking a machine to achieve.
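The optimization-process framing can be illustrated with a toy sketch (the action names, scores, and objective functions below are hypothetical, invented for illustration and not from the talk): an optimizer maximizes exactly the objective it is given, so any value omitted from that objective is simply ignored.

```python
# Toy illustration: an optimizer pursues exactly its stated objective.
# Each candidate action scores on the stated goal ("output") and on a
# side value we care about but might forget to specify ("safety").
actions = [
    {"name": "cautious", "output": 5, "safety": 10},
    {"name": "balanced", "output": 8, "safety": 6},
    {"name": "reckless", "output": 12, "safety": 0},
]

def optimize(actions, objective):
    """Return the action that maximizes the given objective function."""
    return max(actions, key=objective)

# Objective that omits safety: the optimizer picks "reckless".
best = optimize(actions, lambda a: a["output"])
print(best["name"])  # reckless

# Objective that incorporates safety: the choice changes.
best = optimize(actions, lambda a: a["output"] + a["safety"])
print(best["name"])  # cautious
```

The point of the sketch is that nothing in the optimizer is malicious; the "reckless" outcome follows mechanically from an objective that left safety out.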
The talk states that we shouldn't be confident in our ability to control superintelligence. Bostrom's proposed solution is to figure out how to create superintelligence such that, even when fully unleashed, it remains safe because it is fundamentally on our side, sharing our values. This would involve creating AI that uses its intelligence to learn what we value, is motivated to pursue those values, and acts in ways we would approve of. The initial conditions would need to be set up in the right way, because this would not happen automatically.