Digital List Price: ₹248.98
M.R.P.: ₹550.00
Kindle Price: ₹195.86 (Save ₹354.14, 64%), inclusive of all taxes
Sold by: Amazon Asia-Pacific Holdings Private Limited

![Superintelligence: Paths, Dangers, Strategies by [Nick Bostrom]](https://m.media-amazon.com/images/I/51mBTpekidL._SY346_.jpg)
Superintelligence: Paths, Dangers, Strategies Reprint Edition, Kindle Edition
Format | Price
--- | ---
Audible Audiobook, Unabridged | ₹0.00 (Free with your Audible trial)
MP3 CD, Audiobook, MP3 Audio, Unabridged | ₹2,063.00
If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
- ISBN-13: 978-0198739838
- Edition: Reprint
- Publisher: OUP Oxford
- Publication date: 2 July 2014
- Language: English
- File size: 2707 KB
- Kindle (5th Generation)
- Kindle Keyboard
- Kindle DX
- Kindle (2nd Generation)
- Kindle (1st Generation)
- Kindle Paperwhite
- Kindle Paperwhite (5th Generation)
- Kindle Touch
- Kindle Voyage
- Kindle
- Kindle Oasis
- Kindle Fire HD 8.9"
- Kindle Fire HD (1st Generation)
- Kindle Fire
- Kindle for Windows 8
- Kindle Cloud Reader
- Kindle for BlackBerry
- Kindle for Android
- Kindle for Android Tablets
- Kindle for iPhone
- Kindle for iPad
- Kindle for Mac
- Kindle for PC
Product description
About the Author
Review
I highly recommend this book ― Bill Gates
very deep ... every paragraph has like six ideas embedded within it. ― Nate Silver
Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era ― Stuart Russell, Professor of Computer Science, University of California, Berkeley
Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book ― Martin Rees, Past President, Royal Society
This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? ― Max Tegmark, Professor of Physics, MIT
Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole ... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever ― Olle Häggström, Professor of Mathematical Statistics
Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking ― The Economist
There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake ― Financial Times
His book Superintelligence: Paths, Dangers, Strategies became an improbable bestseller in 2014 ― Alex Massie, Times (Scotland)
A text so sober and cool, so fearless and thus all the more exciting, that afterwards what until now has mostly been played out in films suddenly appears highly plausible. (translated from German) ― Georg Diez, DER SPIEGEL
Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes ― Elon Musk, Founder of SpaceX and Tesla
A damn hard read ― Sunday Telegraph
I recommend Superintelligence by Nick Bostrom as an excellent book on this topic ― Jolyon Brown, Linux Format
Every intelligent person should read it. ― Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
An intriguing mix of analytic philosophy, computer science and cutting-edge science fiction, Nick Bostrom's Superintelligence is required reading for anyone seeking to make sense of the recent surge of interest in artificial intelligence (AI). ― Colin Garvey, Icon
Product details
- ASIN : B00LOOCGB2
- Publisher : OUP Oxford; Reprint edition (2 July 2014)
- Language : English
- File size : 2707 KB
- Text-to-Speech : Enabled
- Screen Reader : Supported
- Enhanced typesetting : Enabled
- X-Ray : Not Enabled
- Word Wise : Enabled
- Print length : 431 pages
- Best Sellers Rank: #5,049 in Kindle Store
- #2 in Computer Science eTextbooks
- #11 in Computer Science (Kindle Store)
- #53 in Computer Science Books
About the author

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technology or foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.
He is a recipient of a Eugene R. Gannon Award, and has been listed on Foreign Policy's Top 100 Global Thinkers list twice. He was included on Prospect's World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
For more, see www.nickbostrom.com
Customer reviews

Top reviews from India
It’s worth pointing out immediately that this isn’t really a popular science book. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.
What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and it's still the case.) Even so, it seems entirely feasible that we will have a more than human AI – a superintelligent AI – by the end of the century. But the 'how' aspect is only a minor part of this book.
The real subject here is how we would deal with such a 'cleverer than us' AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it from taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don't think through the implications and how we would deal with it, we could well be stuffed as a species.
I think it's a shame that Bostrom doesn't make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they'd go wrong), but that's about it. Yet science fiction has put far more thought into these issues than Bostrom allows for – and, dare I say it, with a lot more readability than you typically get in a textbook – and it would have been worthy of a chapter in its own right.
I also think a couple of the fundamentals aren’t covered well enough, but pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure there is enough thought put into the basics of ways you can pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.
The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans, so once we build one it will rapidly improve on itself, producing an 'intelligence explosion'. The trouble with this argument, I suspect, is that if you got hold of the million most intelligent people on Earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn't mean it can do this specific task well – this is an assumption.
However, this doesn't set aside what a magnificent conception the book is. I don't think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.
The quest to create artificial intelligence is usually thought of as a crankish pursuit. But, says Bostrom, it could happen, perhaps within a few decades, perhaps within a few centuries. If it happens, then it will be the greatest change in human history. It could very easily be the end of human history.
Much of his warning sounds rather like the ancient Jewish myth of the golem, which destroyed its creator by following its instructions literally. If we build a machine that is much more intelligent than we are to do our bidding, without taking enormous care in defining what our bidding is, it could backfire in the most spectacular way. (His book opens with a parable of a group of sparrows saying that they really ought to find an owl chick and raise it as a servant.)



The entire history and future of AI is predicted very well... one of the best books on AI. Buy it!


Top reviews from other countries

All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don't doubt Bostrom's skills with probability calculations or formalizations, but the principle "garbage in - garbage out" applies to such tools also. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn't taken seriously in many quarters, and at the end of the book, he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. "[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution" [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches ("We could postpone work on some of the eternal questions for a little while [...] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors" [p. 315]), and ultimately claims that "reduction of existential risk" is humanity's principal moral priority (p. 320). Whereas most people would probably think that concern for the competence of our successors would push us towards making sure that the education we provide is both of high quality and widely available and that our currently existing and future children are well fed and taken care of, and that concern for existential risk would push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their "extreme end of the intelligence distribution" think this money would be better spent funding fellowships for philosophers and AI researchers working on the "control problem".
Because, if you really think about it, what are millions of actual human lives cut short by hunger or disease or social disarray, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently exists only in Bostrom's publications is no reason to ignore the enormous moral weight they should have in our moral reasoning!
Despite the criticism I've given above, the book isn't necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it's not bad. But if you're looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge - no magic allowed! - then this is definitely not the book for you.

I used to fear AI, but now I know how far away we are from any real-world dangers. AI is still very early, and there are some enormous obstacles to get past before we see real intelligence that beats the Turing test/imitation game every single time. In fact, some experts say that the Turing test is too easy and we need to come up with a better method to measure the abilities and limitations of an AI subject. I agree with that.
Extremely interesting read. Great book.


Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we're doomed, probably.
This is a detailed and interesting book, though 35% of it is footnotes, bibliography and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable in unpacking the most utility from this book than is knowledge of computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from the Professor in the Faculty of Philosophy at Oxford University.
A good understanding of economic theory would also help any reader.
Bostrom lays out in detail the two main paths to machine superintelligence: whole brain emulation and seed AI and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.
At times the book is repetitive, making the same point in slightly different scenarios. It was almost as if he were cutting and shunting set phrases and terminology into slightly different ideas.
Overall it is an interesting and thought-provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories.
“Everything is vague to a degree you do not realise till you have tried to make it precise” the book quotes.

The one area in which I feel Nick Bostrom's sense of balance wavers is in extrapolating humanity's galactic endowment into an unlimited and eternal capture of the universe's bounty. As Robert Zubrin lays out in his book Entering Space: Creating a Space-Faring Civilization, it is highly unlikely that there are no other interstellar species in the Milky Way: if/when we (or our AI offspring!) develop that far, we will most likely join a club.
The Abolition of Sadness, a recent novella by Walter Balerno, is a tightly drawn, focused sci-fi/whodunit showcasing exactly Nick Bostrom's point. Once you start, it pulls you in and down, as characters develop and certainties melt: when the end comes, the end has already happened...