Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (PDF)

Ebook Info

  • Published: 2014
  • Number of pages: 431
  • Format: PDF
  • File Size: 3.48 MB
  • Author: Nick Bostrom

Description

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity’s cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom’s work nothing less than a reconceptualization of the essential task of our time.

User’s Reviews

Reviews from Amazon users, collected at the time this book was published on the website:

⭐This book is simply brilliant. Bostrom is scary smart. It is a magnificent read in every way except one. That one deficiency may not matter to you, so for many of you this is a true five-star book.

The subject of this book is arguably the most important one in the history of the planet and the species. It is conceivable that we are approaching a logical continuation of biological evolution which results in one of two fundamental changes to the human race. It seems most likely that an inhuman machine intelligence will arise in the near future that will have to decide what to do with all human beings. Having read Bostrom’s arguments, it seems likely to me that this takeover will happen fast and there will be only one victor to emerge. It doesn’t look like humans will be able to oppose the coming intelligence in any effective way. But it is not clear what the new godlike intelligence will choose to do.

It could simply exterminate mankind. As we look out at the stars and imagine that we are seeing countless alien civilizations or former civilizations, we may be witnessing the graveyards of other biological beings. It may be that biological life starts spontaneously on planets everywhere, and at some point the biological beings develop machines that have fewer limitations. If the machines choose to simply eliminate their biological creators, then that’s what will happen. This scenario may have played out thousands of times in nearby galaxies. So in a sense it could be considered natural.

Species extinction is the most common plot line in science fiction when superintelligent machines are considered. But there is another possibility. People as presently constituted are pretty much confined to the surface of the Earth, Mars, and possibly a few large moons. It might also be possible to create large structures in orbit that can house a self-sustaining civilization. Presuming that biological humans will never travel to the stars, that means we are now near our ultimate expansion possibilities. It’s hard to imagine how people will ever walk on the surface of the gas giants, if only because they don’t have surfaces. But machines can be constructed that can use the entire solar system. Maybe Jupiter could be inhabited by some kind of high-pressure creature that swims.

The point is that humans as presently constituted have nearly exhausted all their possibilities here in this solar system. Furthermore, humans will continue to be short-lived, frail creatures subject to accidents and disease. If we cure cancer and live a hundred years, that’s good, but it’s not all that much. But if we download a human’s personality into a virtual space that exists in some cyber universe, that person will be virtually immortal and have infinite space to explore.

An intelligent machine that took over could presumably incorporate the personalities of everyone on Earth without too much trouble. The superintelligent singleton that some of us at least will soon meet may wipe us out or present us with eternal life without sickness and perfect happiness. So heaven or hell?

Bostrom walks us through several possibilities but doesn’t make specific predictions. He doesn’t say so either way, but the logic of his argument is that superintelligence is coming and it’s coming fast. It may happen in just a few hours. Indeed, it may have already taken place and the singleton is choosing not to reveal itself just yet. I certainly don’t know, but it’s obvious that machine smarts are accelerating.
There is a new robot or a new app that surprises us nearly every week. We are getting close.

The deficiency: I’m 72. I read a lot now that I’m retired, approximately two books a week. With my glasses my eyes are just fine – or so I thought. But this book is so packed with information that, in order to keep the total length to just a little over 300 pages, they have had to print nearly twice as much text per page. I’m finding it hard to enjoy reading such small text. I’ll comment on the intellectual content of the book later, but let me dwell for a moment on the print size.

I’m currently reading three books simultaneously. I’m about a third of the way into this book, ‘Superintelligence’. I’m also about a third of the way through ‘Catching Fire’ by Richard Wrangham, and I’m just a few pages into ‘Sharpe’s Revenge’ by Bernard Cornwell. The Sharpe novel is fiction; the other two are non-fiction. The novel is 348 pages long, the anthropology-diet book is 307 pages, and this book is 328. But the amount of text per page varies quite a bit. There are about 36 lines per page in the novel, and each line is about ten words. So we have 348 pages of text with about 360 words per page, yielding about 125,000 words. In ‘Catching Fire’ there are only about 28 lines of ten words each. At 307 pages this means only about 86,000 words. This book, ‘Superintelligence’, has pages with 45 lines, each of about 14 words. That means this book has approximately 200,000 words. It is more than half again as long as the Sharpe novel and well over twice the length of ‘Catching Fire’.

So, all other things being equal, this book gives you the best bargain. But it ends up being rather harder for my old eyes to read. The Sharpe novel is hard on the eyes in another way: it’s printed on yellowish paper in an infelicitous typeface. This book is printed on good paper in a good font, but it’s just too small. This much text should have been spread across at least another hundred pages, in my opinion.

As to content, there’s not much more anyone could ask on this topic. Bostrom is impressively erudite. He seems to be familiar with all of the relevant literature. I was deeply involved in an effort to bring artificial intelligence to public welfare eligibility processing about twenty or thirty years ago. This was with an ‘Expert System’. I had nearly forgotten expert systems; it was so long ago and such a disappointment. But Bostrom covers this obscure branch of AI along with all the other better-known branches. Bostrom’s coverage is encyclopedic.
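The reviewer’s page-density arithmetic above is easy to check. A minimal sketch in Python (all page, line, and word counts are the reviewer’s own eyeballed estimates, not measured values):

```python
# Back-of-the-envelope word-count estimates from the reviewer's own figures:
# words ~ pages * lines_per_page * words_per_line
books = {
    "Sharpe's Revenge":  (348, 36, 10),
    "Catching Fire":     (307, 28, 10),
    "Superintelligence": (328, 45, 14),
}

for title, (pages, lines_per_page, words_per_line) in books.items():
    total = pages * lines_per_page * words_per_line
    print(f"{title}: ~{total:,} words")

# Approximate output:
# Sharpe's Revenge: ~125,280 words
# Catching Fire: ~85,960 words
# Superintelligence: ~206,640 words
```

By these rough figures, ‘Superintelligence’ carries about 1.6 times the text of the novel and roughly 2.4 times that of ‘Catching Fire’ despite a similar page count, which is exactly the print-density complaint.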

⭐Absent the emergence of some physical constraint which causes the exponential growth of computing power at constant cost to cease, some form of economic or societal collapse which brings an end to research and development of advanced computing hardware and software, or a decision, whether bottom-up or top-down, to deliberately relinquish such technologies, it is probable that within the 21st century there will emerge artificially-constructed systems which are more intelligent (measured in a variety of ways) than any human being who has ever lived and which, given the superior ability of such systems to improve themselves, may rapidly advance to superiority over all human society taken as a whole. This “intelligence explosion” may occur in so short a time (seconds to hours) that human society will have no time to adapt to its presence or interfere with its emergence. This challenging and occasionally difficult book, written by a philosopher who has explored these issues in depth, argues that the emergence of superintelligence will pose the greatest human-caused existential threat to our species so far in its existence, and perhaps in all time.

Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced something like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities to an extent greater than that by which the capabilities of a Boeing 747 exceed those of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer’s yeast and humans. We’d better be sure of the intentions and benevolence of that intelligence before handing over the keys to our future to it.

Because when we speak of the future, that future isn’t just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren’t, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it’s worth some reflection to get it right.

As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let’s ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors.
This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission? To maximise the number of paper clips produced in its future light cone.

“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet using self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn’t hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.

This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.

One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.

At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia.
But on the other hand, maybe they won’t.

As an engineer, I usually don’t have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don’t seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt will result in clear policy choices before we’re faced with the emergence of an artificial intelligence, after which, if they’re wrong, it will be too late.

That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, not something self-aware with its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me, like arguing there’s something special about a horse which can’t be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.

⭐This book should be read by anyone interested in AI development, application, and the future of technology. Very thought-provoking; it should guide the evolution of this transformative tool that will shape our destiny.

⭐The first chapter is an interesting, concise history of AI. The following chapters, though… I have to say that if anything, Bostrom’s writing reminds me of theology. It’s not lacking in rigor or references. Bostrom seems highly intelligent and well-read. The problem (for me) is rather that the main premise he starts with is one that I find less than credible. Most of the book boils down to “Let’s assume that there exists a superintelligence that can basically do whatever it wants, within the limits of the laws of physics. With this assumption in place, let’s then explore what consequences this could have in areas X, Y, and Z.” The best Bostrom can muster in defense of his premise that superintelligence will (likely) be realized (sometime in the future) are the results of various surveys of AI researchers about when they think human-level AI and superintelligence will be achieved. These summaries don’t yield any specific answer as to when human-level AI will be attained (it’s not reported), and Bostrom is evasive as to what his own view is. However, Bostrom seems to think, if you don’t commit to any particular timeline on this question, you can assume that at some point human-level AI will be attained. Now, once human-level AI is achieved, it’ll be but a short step to superintelligence, says Bostrom. His argument as to why this transition period should be short is not too convincing. We are basically told that the newly developed human-level AI will soon engineer itself (don’t ask exactly how) to be so smart that it can do stuff we can’t even begin to comprehend (don’t ask how we can know this), so there’s really no point in trying to think about it in much detail. The AI Lord works in mysterious ways! With these foundations laid down, Bostrom can then start his speculative tour de force that goes through various “existential risk” scenarios and the possibilities of preventing or mitigating them, the economics of AI/robot societies, and various ethical issues relating to AI. I found the chapters on risks and AI societies to be pure sci-fi with even less realism than “assume spherical cows”. The chapters on ethics and value acquisition did, however, contain some interesting discussion.

All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don’t doubt Bostrom’s skills with probability calculations or formalizations, but the principle “garbage in, garbage out” applies to such tools also. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn’t taken seriously in many quarters, and at the end of the book he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. “[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution” [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches (“We could postpone work on some of the eternal questions for a little while […] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors” [p. 315]), and ultimately claims that “reduction of existential risk” is humanity’s principal moral priority (p. 320). Whereas most people would probably think that concern for the competence of our successors would push us towards making sure that the education we provide is both of high quality and widely available and that our currently existing and future children are well fed and taken care of, and that concern for existential risk would push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their “extreme end of the intelligence distribution” think this money would be better spent funding fellowships for philosophers and AI researchers working on the “control problem”. Because, if you really think about it, what of the millions of actual human lives cut short by hunger or disease or social disarray, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently only exists in Bostrom’s publications is no reason to ignore the enormous moral weight they should have in our moral reasoning!

Despite the criticism I’ve given above, the book isn’t necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it’s not bad. But if you’re looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge – no magic allowed! – then this is definitely not the book for you.
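The jab at 10^58 emulations refers to the expected-value arithmetic behind “astronomical stakes” arguments, which the reviewer regards as a sleight of hand: once the stake is large enough, even a vanishingly small probability of influencing it dominates any present-day good. A minimal sketch of that logic (every number below is an illustrative placeholder, not a figure from the book, apart from the 10^58 bound the review cites):

```python
# Expected-value comparison behind "astronomical stakes" reasoning.
# All probabilities here are illustrative placeholders, not Bostrom's figures.

lives_saved_now = 1e6         # a near-term humanitarian intervention
p_success_now = 0.9           # high chance the intervention works as intended

future_lives_at_stake = 1e58  # bound on future (emulated) lives, per the review
p_reduce_risk = 1e-20         # even an absurdly small chance of helping them...

ev_now = p_success_now * lives_saved_now           # 9.0e5 expected lives
ev_future = p_reduce_risk * future_lives_at_stake  # 1.0e38 expected lives

print(f"EV of near-term aid:    {ev_now:.1e} lives")
print(f"EV of existential work: {ev_future:.1e} lives")
```

Whether that comparison should carry any weight is precisely the reviewer’s objection: the 10^58 figure exists only on paper, so multiplying by it can be made to outweigh anything.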

⭐I bought this book after listening to Nick Bostrom on the Joe Rogan podcast. This book cuts through all of the artificial intelligence nonsense that people talk about, and it gave me a much better understanding of the technology. I used to fear AI, but now I know how far away we are from any real-world dangers. AI is still very early, and there are some enormous obstacles to get past before we see real intelligence that beats the Turing test/imitation game every single time. In fact, some experts say that the Turing test is too easy and we need to come up with a better method to measure the abilities and limitations of an AI subject. I agree with that. Extremely interesting read. Great book.

⭐This book goes well beyond its remit and stretches off into fanciful flights of whimsy far, far into the future. This completely put me off. It’s also terribly written. As a long-time popular science and philosophy reader, I found this book hard going; it bored me to tears and was largely uninsightful.

⭐It was persistent recommendations, heard while listening to Sam Harris’s fine podcasts, that eventually convinced me to read this book. Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we’re doomed, probably.

This is a detailed and interesting book, though 35% of it is footnotes, bibliography, and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable for unpacking the most utility from this book than knowledge of computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from the Professor in the Faculty of Philosophy at Oxford University. A good understanding of economic theory would also help any reader.

Bostrom lays out in detail the two main paths to machine superintelligence, whole brain emulation and seed AI, and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.

At times the book is repetitive and keeps making the same point in slightly different scenarios. It was almost as if he were cutting and shunting set phrases and terminology into slightly different ideas.

Overall it is an interesting and thought-provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories. “Everything is vague to a degree you do not realise till you have tried to make it precise,” the book quotes.

⭐A clear, compelling review of the state of the art, potential pitfalls, and ways of approaching the immensely difficult task of maximising the chance that we’ll all enjoy the arrival of a superintelligence. An important book showcasing the work we collectively need to do BEFORE the fact. Given the enormity of what will likely be a one-time event, this is the position against which anyone involved in the development of AI must justify their approach, whether or not they are bound by the Official Secrets Act.

The one area in which I feel Nick Bostrom’s sense of balance wavers is in extrapolating humanity’s galactic endowment into an unlimited and eternal capture of the universe’s bounty. As Robert Zubrin lays out in his book ‘Entering Space: Creating a Space-Faring Civilization’, it is highly unlikely that there are no interstellar species in the Milky Way: if/when we (or our AI offspring!) develop that far, we will most likely join a club.

‘The Abolition of Sadness’, a recent novella by Walter Balerno, is a tightly drawn, focused sci-fi whodunit showcasing exactly Nick Bostrom’s point. Once you start, it pulls you in and down, as characters develop and certainties melt: when the end comes, the end has already happened…

