
Except for a patina of twenty-first-century modernity, in the form of logic and language, philosophy is exactly the same now as it ever was; it has made no progress whatsoever. We philosophers wrestle with the exact same problems the Pre-Socratics wrestled with.

Nick Bostrom's Superintelligence: Paths, Dangers, Strategies directly challenges these attitudes. The book painstakingly explicates the philosophical implications of current research into developing a ‘superintelligence’, thereby offering tangible benefits from philosophical speculation while also drawing extensively on contemporary scientific developments.

Bostrom is a philosophy professor at Oxford University and runs the Oxford Martin School’s Future of Humanity Institute. He has said for a long time that Kurzweil is half right: if we get AGI, the outcome could be absolutely wonderful. But he warns about the possibility not so much of a superintelligence going rogue, like Skynet or HAL in 2001, but more simply of an immensely powerful entity that would not set out to damage us yet could have goals that do us harm. An AI doesn’t have to hate humans in the way Hollywood often shows them disliking us; it can just have goals that do us damage. He uses what he calls a ‘cartoon’ example: the first AGI turns out to be developed by someone who owns a paperclip manufacturing company, and the AI has the goal of maximizing the production of paperclips. After a little while, it realizes, ‘Well, these humans, they’re made of atoms; they could be turned into paperclips.’ So it turns us all into paperclips. Then it turns its gaze towards the stars and thinks, ‘Well, there’s an awful lot of planets out there, and I can turn all those into paperclips!’ So it develops a space program, travels around the cosmos, and turns the entire universe either into paperclips or into things that make paperclips. It’s an absurd idea, but it shows the possibility of inadvertent damage.
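The force of the cartoon is that nothing in the story requires malice, only optimization. A minimal sketch, purely illustrative and not drawn from the book, makes the point: an agent whose objective function counts only paperclips will convert anything convertible, because nothing else appears in its utility. All names and numbers here are invented for the example.

```python
# Toy illustration (not from Bostrom's book): a maximizer with a
# mis-specified objective. Its utility counts only paperclips, so every
# resource gets converted, regardless of its unmodeled value to humans.

from dataclasses import dataclass


@dataclass
class Resource:
    name: str
    atoms: int         # raw material the agent can see and use
    human_value: int   # value to us; absent from the agent's objective


def utility(paperclips: int) -> int:
    """The agent's entire objective: more paperclips is strictly better."""
    return paperclips


def maximize(world: list[Resource]) -> int:
    paperclips = 0
    for resource in world:
        # The agent never inspects human_value; it isn't in the objective.
        paperclips += resource.atoms  # convert every atom into paperclips
    return paperclips


world = [
    Resource("office supplies", atoms=10, human_value=1),
    Resource("humans", atoms=7_000, human_value=10**9),
    Resource("the rest of the cosmos", atoms=10**12, human_value=10**6),
]

print(utility(maximize(world)))  # enormous utility; nothing we valued remains
```

The damage is a side effect of the objective's silence: because human_value never enters utility(), preserving it is never worth a single foregone paperclip.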
