Ebook Info
- Published: 2017
- Number of pages: 384
- Format: PDF
- File Size: 12.90 MB
- Author: Max Tegmark
Description
New York Times Best Seller

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology—and there’s nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who’s helped mainstream research on how to keep AI beneficial. How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today’s kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle? What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues—from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.
User Reviews
Editorial Reviews

"Original, accessible, and provocative….Tegmark successfully gives clarity to the many faces of AI, creating a highly readable book that complements The Second Machine Age’s economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence…. At one point, Tegmark quotes Emerson: ‘Life is a journey, not a destination.’ The same may be said of the book itself. Enjoy the ride, and you will come out the other end with a greater appreciation of where people might take technology and themselves in the years ahead." —Science

"This is a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness—on Earth and beyond." —Elon Musk, Founder, CEO and CTO of SpaceX and co-founder and CEO of Tesla Motors

"All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark’s thought-provoking book will help you join it." —Professor Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology

"Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation." —Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity Is Near and How to Create a Mind

"Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down." —Jaan Tallinn, co-founder of Skype

"This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity." —Bart Selman, Professor of Computer Science, Cornell University

"The unprecedented power unleashed by artificial intelligence means the next decade could be humanity’s best—or worst. Tegmark has written the most insightful and just plain fun exploration of AI’s implications that I’ve ever read. If you haven’t been exposed to Tegmark’s joyful mind yet, you’re in for a huge treat." —Professor Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age

"Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers—AI, cosmology, values, even the nature of conscious experience—can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions." —Nick Bostrom, Founder of Oxford’s Future of Humanity Institute, author of Superintelligence

"I was riveted by this book. The transformational consequences of AI may soon be upon us—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds." —Professor Martin Rees, Astronomer Royal, cosmology pioneer, author of Our Final Hour

"In [Tegmark’s] magnificent brain, each fact or idea appears to slip neatly into its appointed place like another little silver globe in an orrery the size of the universe. There are spaces for Kant, Cold War history and Dostoyevsky, for the behaviour of subatomic particles and the neuroscience of consciousness….Tegmark describes the present, near-future and distant possibilities of AI through a series of highly original thought experiments….Tegmark is not personally wedded to any of these ideas. He asks only that his readers make up their own minds. In the meantime, he has forged a remarkable consensus on the need for AI researchers to work on the mind-bogglingly complex task of building digital chains that are strong and durable enough to hold a superintelligent machine to our bidding….This is a rich and visionary book and everyone should read it." —The Times (UK)

"Life 3.0 is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required." —Stuart Russell, Nature

"Lucid and engaging, it has much to offer the general reader. Mr. Tegmark’s explanation of how electronic circuitry—or a human brain—could produce something as evanescent and immaterial as thought is both elegant and enlightening. But the idea that machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists….Yet the notion enjoys more credence today than a few years ago, partly thanks to Mr. Tegmark." —Wall Street Journal

"Tegmark’s book, along with Nick Bostrom’s Superintelligence, stands out among the current books about our possible AI futures….Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too." —The Telegraph (UK)

"Exhilarating….MIT physicist Tegmark surveys advances in artificial intelligence such as self-driving cars and Jeopardy-winning software, but focuses on the looming prospect of ‘recursive self-improvement’—AI systems that build smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark’s smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons….Engrossing." —Publishers Weekly

About the Author

MAX TEGMARK is an MIT professor who has authored more than 200 technical papers on topics from cosmology to artificial intelligence. As president of the Future of Life Institute, he worked with Elon Musk to launch the first-ever grants program for AI safety research. He has been featured in dozens of science documentaries. His passion for ideas, adventure, and entrepreneurship is infectious.

Excerpt. © Reprinted by permission. All rights reserved.

THE THREE STAGES OF LIFE

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged.
When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms is arranged in the same pattern as the original, thereby copying the information. In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

Like our universe itself, life gradually grew more complex and interesting, and as I’ll now explain, I find it helpful to classify life forms into three levels of sophistication: Life 1.0, 2.0 and 3.0.

It’s still an open question how, when and where life first appeared in our universe, but there is strong evidence that, here on Earth, life first appeared about 4 billion years ago. Before long, our planet was teeming with a diverse panoply of life forms. The most successful ones, which soon outcompeted the rest, were able to react to their environment in some way. Specifically, they were what computer scientists call "intelligent agents": entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment. This can include highly complex information processing, such as when you use information from your eyes and ears to decide what to say in a conversation. But it can also involve hardware and software that’s quite simple. For example, many bacteria have a sensor measuring the sugar concentration in the liquid around them and can swim using propeller-shaped structures called flagella. The hardware linking the sensor to the flagella might implement the following simple but useful algorithm: "If my sugar concentration sensor reports a lower value than a couple of seconds ago, then reverse the rotation of my flagella so that I change direction."

Whereas you’ve learned how to speak and countless other skills, bacteria aren’t great learners. Their DNA specifies not only the design of their hardware, such as sugar sensors and flagella, but also the design of their software. They never learn to swim toward sugar; instead, that algorithm was hard-coded into their DNA from the start. There was of course a learning process of sorts, but it didn’t take place during the lifetime of that particular bacterium. Rather, it occurred during the preceding evolution of that species of bacteria, through a slow trial-and-error process spanning many generations, where natural selection favored those random DNA mutations that improved sugar consumption. Some of these mutations helped by improving the design of flagella and other hardware, while other mutations improved the bacterial information-processing system that implements the sugar-finding algorithm and other software.

Such bacteria are an example of what I’ll call "Life 1.0": life where both the hardware and software are evolved rather than designed. You and I, on the other hand, are examples of "Life 2.0": life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes. You weren’t able to perform any of those tasks when you were born, so all this software got programmed into your brain later through the process we call learning.
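To make the excerpt's distinction concrete, here is a minimal sketch (not from the book) of the kind of hard-coded sense-and-act rule Tegmark describes for Life 1.0: the rule is fixed up front and nothing in the loop ever updates it. The function name, the numeric readings and the two-reading comparison window are illustrative assumptions, not anything Tegmark specifies.

```python
# A minimal, illustrative sketch of the "Life 1.0" bacterium described above:
# the control rule is fixed in advance (analogous to being hard-coded in DNA)
# and is never modified by experience.

def life_1_0_step(current_sugar, sugar_two_seconds_ago, flagella_direction):
    """Hard-coded rule: if the sugar concentration dropped, reverse direction."""
    if current_sugar < sugar_two_seconds_ago:
        return -flagella_direction  # reverse flagella rotation
    return flagella_direction       # otherwise keep swimming the same way

# Example run over made-up sensor readings (purely illustrative numbers).
readings = [0.50, 0.55, 0.53, 0.40, 0.42, 0.47]
direction = +1  # +1 and -1 stand for the two rotation directions
for previous, current in zip(readings, readings[1:]):
    direction = life_1_0_step(current, previous, direction)
    print(f"sugar {previous:.2f} -> {current:.2f}: direction {direction:+d}")
```

Nothing in this loop learns; improving the rule would require rewriting the code itself, which in the biological analogy only evolution across generations can do, whereas a Life 2.0 learner could revise the rule from experience.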
Whereas your childhood curriculum is largely designed by your family and teachers, who decide what you should learn, you gradually gain more power to design your own software. Perhaps your school allows you to select a foreign language: do you want to install a software module into your brain that enables you to speak French, or one that enables you to speak Spanish? Do you want to learn to play tennis or chess? Do you want to study to become a chef, a lawyer or a pharmacist? Do you want to learn more about artificial intelligence (AI) and the future of life by reading a book about it?

This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0. High intelligence requires both lots of hardware (made of atoms) and lots of software (made of bits). The fact that most of our human hardware is added after birth (through growth) is useful, since our ultimate size isn’t limited by the width of our mom’s birth canal. In the same way, the fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn’t limited by how much information can be transmitted to us at conception via our DNA, 1.0-style. I weigh about 25 times more than when I was born, and the synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with. Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been pre-loaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.

The ability to design its software enables Life 2.0 to be not only smarter than Life 1.0, but also more flexible. If the environment changes, 1.0 can only adapt by slowly evolving over many generations. 2.0, on the other hand, can adapt almost instantly, via a software update. For example, bacteria frequently encountering antibiotics may evolve drug resistance over many generations, but an individual bacterium won’t change its behavior at all, while a girl learning that she has a peanut allergy will immediately change her behavior to start avoiding peanuts.

This flexibility gives Life 2.0 an even greater edge at the population level: even though the information in our human DNA hasn’t evolved dramatically over the past 50,000 years, the information collectively stored in our brains, books and computers has exploded. By installing a software module enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains, potentially surviving even after the original brain died. By installing a software module enabling us to read and write, we became able to store and share vastly more information than people could memorize. By developing brain-software capable of producing technology (i.e., by studying science and engineering), we enabled much of the world’s information to be accessed by many of the world’s humans with just a few clicks. This flexibility has enabled Life 2.0 to dominate Earth.
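As a quick sanity check on the storage comparison above, the arithmetic below simply restates the excerpt's own round figures (about one gigabyte in DNA versus about 100 terabytes in synapses); the numbers are the book's estimates, not independent measurements.

```python
# Rough, back-of-the-envelope check of the excerpt's storage comparison,
# using the book's own round figures rather than independent data.
dna_bytes = 1e9         # ~1 gigabyte of information in DNA
synapse_bytes = 100e12  # ~100 terabytes stored in synaptic connections

ratio = synapse_bytes / dna_bytes
print(f"Synaptic storage exceeds DNA storage by a factor of about {ratio:,.0f}")
# -> about 100,000, matching the "hundred thousand times more" in the text
```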
Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next: language, writing, the printing press, modern science, computers, the internet, etc. This ever-faster cultural evolution of our shared software has emerged as the dominant force shaping our human future, rendering our glacially slow biological evolution almost irrelevant.

Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years, enabling our universe to finally fulfill its potential and wake up fully. All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.

The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or getting a thousand times bigger brains.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:

- Life 1.0 (biological stage): evolves its hardware and software
- Life 2.0 (cultural stage): evolves its hardware, designs much of its software
- Life 3.0 (technological stage): designs its hardware and software

After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.
Reviews from Amazon users, collected at the time this book was published on the website:
⭐Life 3.0

Max Tegmark enthusiastically and excitedly writes about what life will be like for us humans with the rise of AI (Artificial Intelligence), AGI (Artificial General Intelligence, i.e. intelligence on par with humans) and the possibility/probability of creating superintelligence (AI-enabled intelligence that far surpasses human intelligence and capabilities). He asks the reader to critically engage with him in imagining scenarios of what such an AI reality could mean for us, and to respond on his Age of AI website.

The book begins with the Tale of the Omega Team, a group of humans who decide to release an advanced AI, named Prometheus, surreptitiously and in a controlled way into human society. The tale unfolds as a world take-over by Prometheus, which in a final triumph becomes the world’s first single power able to enable life to flourish for billions of years on Earth and to be spread throughout the cosmos.

If you have never read much post-modern futurology, Tegmark is a good way to take the plunge. He brings together much of the thinking about what humanity will have to deal with, the decisions it will have to make and the options it might have with the inevitable advancement of technology and specifically AI. Above all he encourages the reader to believe that she/he has an important role to play in what the future will hold for us and that we need not, indeed cannot, succumb to fatalism. The most commendable, concrete and hopeful part of the book is his story of AI researchers coming to agreement about a path forward for AI that is pro-active in addressing the challenges it presents and the impact it will have on human society. The end of the book lays out this path in the Asilomar AI Principles, which were created, critiqued, refined and agreed through a process initiated at an AI conference in Puerto Rico in January 2015. The takeaway for Tegmark is that AI research can now confidently go forward with the knowledge that impacts and consequences for humanity have been and will be addressed in the process to mitigate any negatives. He and his colleagues deserve credit for such engagement and thoughtful commitment in their endeavors.

For the above I gave the book four stars. The book is also fun to read and challenging to our common political and economic realities. There are, however, areas of concern that are either untouched or passed over lightly, to which I now turn:

1. The quest for truth. Tegmark assumes that we have an "excellent framework for our truth quest: the scientific method." I start my critique here because this assumption is neither argued nor established. There is no argument against the formidable power of scientific methodology to give deep explanation to natural reality. However, the issue of truth is rightly not the purview of science, but of philosophy. This may seem nit-picky, but we are too used to the idea that science is the absolute arbiter of truth, as though it can offer a complete picture of reality, when in fact that’s not within its job description.

2. The way Tegmark frames his definition of life is a case in point. To do this he makes two moves: first, using the scientific method he deconstructs life in a reductionist move; the second move is to decenter biotic, human life in its importance and necessity in the unfolding of what he calls Life 3.0. Tegmark’s first move reduces the definition of life to "a process that can retain its complexity and replicate itself." With this highly generalized definition he can then reduce life further to atoms arranged in a pattern that contains information.

This broad definition is important for the second move, which is the decentering of biotic human life. Here he offers a post-modern notion that human life (anthropocentric) can no longer be the measure of all things. Humans have been displaced from the center of the universe in great steps since Copernicus. If we are going to promote Life 3.0, we must continue this decentering to make room for the expanded definition of life he offers. Life must now be imagined as other than biotic. It must include the possibilities imagined by our new technologies of superintelligence housed in robust substrates where human consciousness or even non-human consciousness can reside for great lengths of time and go beyond Earth to the reaches of the universe. If it sounds utopian, there is that clear melody line in Tegmark’s writing, in spite of some protestations to the contrary.

This is Tegmark’s book. He can define life however he sees fit. From my perspective life was the good old-fashioned, highly unlikely emergence of biotic generativity, the beginning of which we do not yet know. Evolution did its trial-and-error number over four billion years to produce humans. If and when there is ever the need to call something non-biotic "life," it will be apparent at that moment and not before. This does not mean that preparation for AI is not needed. It is that sapience is not sentience, nor does intelligence to some superhuman degree make something life, even if it can mimic or surpass human neurology. Call it what it is: a really smart human-made machine that is programmed to learn, replicate, maybe have what we call consciousness, and cause us all kinds of grief and gladness. Life? No.

3. It is good that Tegmark wades into the arena of ethics, because it cries out for attention.

• First, can anyone actually account for, or accurately quantify and qualify, human behavior? History has yet to convince us that humans, whether naturally tending toward the moral or not, can be morally controlled. The scientific evidence is in our history. And yes, there are many heroes, but there are many who are classified "evil." One need only look at the current fad of mass shootings in the USA. We may blame mentally unstable people for this, but we are those people. Tegmark points out that AI is morally neutral and, like guns, is not the evil element in the equation. But AI is initially and therefore ultimately a human endeavor, and therefore is imbued with human imitation and limits. As good and needed an attempt as is made with the Asilomar AI Principles, we can be sure that AI will be used wrongly and perhaps fatally to all of life. Our certainty is because we know ourselves as humans. We are a product of Nature, which models the whole spectrum of behaviors from the deeply violent to the deeply loving. More species of life on earth have gone extinct than are alive today. Dare we think that humans might escape a similar fate because we are intelligent or have benign superintelligent buddies? Before anything else can be discussed regarding the deep future of humanity, humanity itself has to come to grips with itself. Though Tegmark rhetorically acknowledges such negative possibilities, he is full steam ahead in his assumptions and commitment to the development of superintelligence.

• Second, in our modern world moral absolutes are hard to come by. In a purely naturalistic setting all morality is relative and therefore depends upon the decisions of humans within a cultural setting and within the personal psyches of the individuals making moral choices. It is not cynical to believe that if you scratch a beautiful public moral persona, you will get it to bleed a bewildering moral anomaly. Look at how many moral quibbles some of the scientists who were involved in developing atomic/nuclear weaponry had. When threatened, it seems "all options are on the table." For all the good of Tegmark’s intentions, this is a very uncertain area. Even his examples of several Russian men who prevented nuclear holocaust are frightening enough for us to understand just how morally serious the moment in which we live is. So the question is: do we have a sufficient moral foundation and will to unleash AI invention and use?

• Third, in spite of trying rhetorically to move away from human-centeredness throughout his book, Tegmark does no better than anyone else: in the end, he does not do so. In fact it is likely that humans will never be able to decenter themselves, because all our concepts, heuristic overlays, thought processes, bodily constraints and needs make it impossible. At any rate, Tegmark, without great explanation or justification, joins others in believing that humans must spread their life and intelligence throughout as much of the universe as possible, in order to unleash its potential! That very idea is human-centered: colonialist, exploitative, presumptive and perhaps idolatrous. In a universe where life is located only on our planet, as far as we know for sure, why do we think life, our life, should interrupt that immense time/space with our angst? Do we think our machines will overcome human moral ambivalence? Why inflict our unfinished earthly project on more territory? Why not make a moral stand to address earth and human issues, so that until we have reached a greater potential morally, spiritually, intellectually, materially and relationally, we stay here and make sure our AI does too? Talk about a utopian dream! The point is that morally there is no good argument for taking human life and issues elsewhere, especially because that means unleashing the whole spectrum of human experience.

• Fourth, though the book’s subtitle is "Being Human in the Age of Artificial Intelligence," Tegmark does not address in any depth what happens to humanity, or even whether it can last, in the face of superintelligence. This is even with the assumption that AI will be good for humans. Human and AI life forms are critically different from each other. Though there might be some compatibility between the two, AI is more like rocks and electrical switches than it is like humans. The human biotic substrate of our existence is, in comparison, obsolete. The issues this raises cannot be put aside cavalierly with the technological move of uploading our humanity into a more robust substrate. Humanity by definition is biotic. If one cannot accept Tegmark’s generous new definition of life, it means humans will be decentered in a devastating way.

4. One last thing needs mention: Tegmark’s use of the words "pessimistic" and "optimistic" in regard to the future path that AI will take. Both these words are unscientific. They describe a general psychological intuition or feeling about something, based on a foundation that seems solid or not. To use such words in the context of AI’s value and possible future effects on humanity is misplaced. Better to stick with more concrete descriptions. One can say the same thing about Tegmark and his colleagues regarding their enthusiasm for future technological wonderments. History again has to keep us grounded. Who would have thought (no one obviously did) at the beginning of the Industrial Revolution that its descendants would find their lives threatened by a degree or two of warming from the burning of plentiful fossil fuel? Whatever plans are put forth to mitigate the impact of humans messing around with nature, we can be assured that we will always miscalculate and create unintended consequences. Explorers, explore, but beware!
⭐Great book. Everyone over 16 should read this book
⭐Learned more about living with AI.
⭐I ordered this book to see if it would help me understand all the recent public excitement about Artificial Intelligence (AI) and the apparent praise from Elon Musk and Stephen Hawking, and also because I recognized the author, now a physics professor at MIT (whom I recall as a Berkeley grad student consulting others as much as I did on our Physics 221 quantum mechanics homework :-)). Max Tegmark, also the head of the modestly named Future of Life Institute, deploys a chatty style to convey the concepts on some deep topics, including consciousness and, of course, God, and has plenty of cute anecdotes and name dropping: he lets you know that he has met pretty much everyone famous, from Larry Page to Kissinger (he did not seem to see any irony in asking the architect of the bombing of Cambodia about reducing the danger of biological weapons; and, never too observant of male beauty, I am grateful that he made clear that Elon, ew, Musk is "handsome"). But in the end he did not talk about any of the existential threats (climate change?) I thought we should care about, and did not credibly explain to this human what to get excited (or not) about in the coming age of super machines.

Tegmark begins by repeating the old question "Can machines take over, and what to expect if they do?", most notably meditated on in Arthur Clarke’s 2001: A Space Odyssey, written before I was born, but which remains to me the most intelligible speculation on the outcome of this still-fictional contest. In short, astronaut Dave, isolated against the HAL 9000 trying to take over the interplanetary spaceship, sets up a new trap, a novel ruse not foreseen by the computer, to shut down the malevolent machine. The lesson there, to me, was that although computers will always be better at routine goals and defined tasks (yes, yes, win at Jeopardy, categorize X-rays, win at chess and super-complicated Asian Go), only ONLY humans can develop new hypotheses, imagine a new narrative, create new information. What defines humanity is not the tool making, not the computing, not the "doing complex tasks" that Tegmark defines as intelligence, not even the "subjective experience" that he defines as consciousness, but something else that the author glosses over, or does not talk about at all. What makes my 6-year-old more remarkable than the world’s most amazing computing machine is the ability to create NEW stories, new narratives.

While Tegmark goes on with a semi-fictional story about a company called Omega which builds a super-intelligent computer that takes over Amazon and then takes over the world, he seems to have completely missed the significance of the biggest event of our time: that neither man nor machine could predict that Donald Trump would become the most powerful man on Earth (for whatever that is worth these days), voted into that position by 63 MILLION humans, not machines. In particular, we learn that alt-right fanatics were able to game Facebook just as Russian KGB goons could hire Ukrainian hackers on the dark web to put out fake news about Hillary. Humans!, and bots doing their bidding, tipped the balance of power in the most powerful nation in history. And later this year, Larry Page and Zuckerberg, Facebook and Google, are complaining to investors and Congress that they have to hire more humans, not machines!, to be able to weed out fake news and their friends. That is, machines cannot yet pass what I would call a reverse Turing test: they can’t tell fake news from the real sort.
⭐Perhaps in a nod to his friend Stephen Hawking, Tegmark goes on about cosmology (you are supposed to get worried about the Sun engulfing the Earth 7,500,000,000 years from now, especially since modern humans have been around for less than 1/10,000th of that time) and about information falling irretrievably into a black hole, yet he seems to give short shrift to the simple but seminal ideas of Claude Shannon, who as the "father of information theory" first quantified information as the negative logarithm of the probability that a hypothesis, a story, will be proved true. Any machine can run the Bayesian graph and generate a probable hypothesis (the sun will rise tomorrow at dawn, for instance), but only a human, even an idiot, can imagine the IMPROBABLE hypothesis, the unlikely story. And ultimately that is what humans can do, even the dumb ones, which machines cannot. To emphasize this, let me point out that the supercomputer in my pocket, my iPhone, can correct spellings, but cannot even complete a sentence, much less imagine a story. Machines are great at working at goals given to them, and Tegmark spends too much time wondering if they will ultimately "align to our goals"; even if we imagine (for a drunken millisecond) that all humanity can have the same set of goals, rephrasing the Dostoyevsky quote that the purpose of life is not surviving it but realizing what it is all about, these goals are hardly defined in time or space. It is not about the "Really Hard Problem" of machine consciousness, sensing the redness of the rose, but about learning enough about the world for each human to be able to write her own story, unique in its triumphs, unique in its tragedies, but where she finally emerges as heroine, not victim, of those circumstances. We are all learning to imagine our place in the world, to realise why we are relevant, and although machines will always help us formulate our story, our stories will be our own. We will ride alone into the wilderness of the future, with our computers staying firmly in our pockets.
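For readers who want the reviewer’s reference to Shannon spelled out: in information theory, the self-information of an outcome with probability p is -log2(p) bits, so improbable outcomes (the reviewer’s "unlikely stories") carry more information. The sketch below is only an illustration with made-up probabilities, not anything taken from the book or the review.

```python
import math

def self_information_bits(p):
    """Shannon self-information: less probable outcomes carry more bits."""
    return -math.log2(p)

# Made-up probabilities, purely for illustration.
for event, p in [("a very probable outcome", 0.9),
                 ("an unlikely story coming true", 0.001)]:
    print(f"{event}: p = {p} -> {self_information_bits(p):.2f} bits")
```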
⭐I’m sure this book is well written and, given the number of positive reviews, has credibility. However, I really didn’t enjoy it and ended up skimming large chunks of it, jumping ahead to the synopsis at each chapter. I bought it having become intrigued by AI after watching the Go documentary on Netflix, and I wanted to find out a bit more about the subject. This book doesn’t really do that (apart from the first few chapters) but is more of a societal analysis of the potential dystopian effects of AI, which reads like bad sci-fi and has very little depth. Good book, but I’ll stick to the Google AI blog.
⭐This is a book that has received many accolades. President Obama recommended it in one of his "lists" (that of 2017), Elon Musk declared it one of the most important books on business (maybe because he, Mr Musk, is in it, portrayed quite favorably), and it has on its cover the mandatory "New York Times bestselling book." And yet, after reading it, this apparent consensus doesn’t seem fully deserved.

The book is written passionately and its author obviously knows what he’s talking about. But it’s too diverse; it tries to cover many grounds, only to end up without a clear focus or purpose. The introduction, with the narration of the so-called Omega Team, seems almost childish: it’s too cinematographic, with many references to James Cameron. It is confusing, too obvious, and plagued with clichés.

Then the book never finds a clear point. It goes from statistics to current affairs, then from science conferences full of illustrious names in which Mr Tegmark participates to academic physics, and then also to history, religion, etc. It never settles. At some points it is almost messy, with crossed references. It is at times popular (way too popular) and at times inextricable and difficult to read. Perhaps foreseeing this, the author includes charts with conclusions at the end of every chapter, but these do not help: they are sometimes redundant and almost always confusing.

AI is an obvious "hot" topic, and how it is handled in the short and medium term will help with (or else increase) many of the problems humanity is facing. It has, deservedly, generated a vast bibliography. We, laypeople, need good introductions to the topic. Unfortunately, "Life 3.0" is not one of those.
⭐Life 3.0 poses an interesting question: what happens when humans are no longer the smartest species on the planet? Tegmark has written a compelling analysis of the choices facing us as we create ever more powerful AI supercomputers; will they usher in a new era, or will they replace us? This is a tale about our own future with AI.

Tegmark covers concepts from computing to cosmology with extraordinary clarity, whilst reminding us that many of these ideas were created by science fiction writers more than 50 years ago. And throughout he asks us to consider how we want AI to impact on our lives, jobs, laws and weapons. How will we live with a greater intelligence than our own, of our own creation?

He doesn’t offer any simple answers to the challenge, but instead sets the reader thinking about what kind of future we would want to create. He does this in an insightful, unintimidating way that invites you to come to your own conclusions.

Life 3.0 is an exciting, accessible read that has helped me think anew about the future in a world with artificial intelligence. Will it be Utopia or a catastrophe?
⭐The future of intelligent life is considered in an entirely humanistic paradigm. The meaning of the universe? None, unless there’s an intelligent being, either human or created by humans. No possibility of religious meaning is entertained here, so this book is completely divorced from thousands of years of human culture and understanding. The AI that we end up creating is going to outstrip not just the speed of human thought (that already happened) but the flexibility and imagination of human thought, by recursive self-modification. Its claim on the biosphere will likely displace human claims on the same resources. The implications for the future of humanity are both startling and horrific. Since it will outstrip human intelligence, any limitation placed on its goals is unlikely to apply for long. And since rival military powers will likely want the assistance of recursive intelligence for their programs, any optimism on the author’s part appears misplaced. Assuming, of course, that humans are alone in the universe, without God to oversee future events. This possibility isn’t considered.
⭐Artificial Intelligence is all around us in its many shapes and forms. Alan Turing was one who set us thinking about the meaning of "intelligence" and what it means to be "intelligent." Can a machine be intelligent? He created the Imitation Game as a way of thinking about the whole topic. Tegmark poses similar questions, but in a world of supercomputers far in advance of anything Turing could have imagined. Will our future see computers and robots more intelligent than humans, their creators? E. M. Forster wrote a short story on similar lines; John Wyndham did the same. Tegmark takes up these questions and problems and gives readers some seriously challenging ones to answer. As we continue to create computers with the capacity to learn, will they outlearn their creators? Will humans be able to control them, or will computers gradually overtake humans as an inferior species? Although a little repetitive in places, it is nevertheless a very thought-provoking book. Recommended.