Ebook Info
- Published:
- Number of pages:
- Format: PDF
- File Size: 2.06 MB
- Authors: Harry Collins
Description
Recent startling successes in machine intelligence using a technique called ‘deep learning’ seem to blur the line between human and machine as never before. Are computers on the cusp of becoming so intelligent that they will render humans obsolete? Harry Collins argues we are getting ahead of ourselves, caught up in images of a fantastical future dreamt up in fictional portrayals. The greater present danger is that we lose sight of the very real limitations of artificial intelligence and readily enslave ourselves to stupid computers: the ‘Surrender’. By dissecting the intricacies of language use and meaning, Collins shows how far we still are from computers whose social understanding is indistinguishable from that of humans. When the stakes are so high, we need to set the bar higher: to rethink ‘intelligence’ and recognize its inherent social basis. Only if machine learning succeeds on this count can we congratulate ourselves on having produced artificial intelligence.
User’s Reviews
Reviews from Amazon users, collected at the time this book was listed on the website:
⭐The title was intriguing but the book was not quite what my husband expected. Instead of addressing the topic of artificial intelligence in fiction, as he had thought it would, it was more of an academic treatise on artificial intelligence in real life and how limited it is. Sadly the book did not hold his interest.
⭐This book does a good job of analyzing the moral and ethical issues surrounding AI, including whether functional AI is achievable or even advisable. It was a bit of a slow read for me, focusing on the issues not from a technical standpoint but more in terms of the bigger picture.
⭐Harry Collins is one of the foremost sociologists of computing, an incisive thinker whose earlier book I have found myself recommending constantly, for decades now, as the most thoughtful social-theoretical and philosophical treatment of, among other things, what exactly “the Turing Test” is, and what it tests. Collins’s expertise in the sociology of scientific knowledge and his early interest in “artificial intelligence” led him to a unique viewpoint from which to observe as the social status of computers changed over the last few decades. This new book, though, is less of a surprise than one might expect considering his early work; instead, it’s an accessible layman’s treatment of some of his longstanding contentions, framed around the (familiar) argument that true “intelligence” isn’t possible without embedding in the everyday world of human social relationships and interactions, so that the rise of “deep learning” algorithms means less for the old claims of strong AI than one might think. It’s an engaging, sometimes polemical but mostly very judicious, book; Collins spends a lot of time on accessible examples and thought experiments, and so I sometimes wished for a more rigorous treatment of questions from the philosophy of language or from social theory. But for a beginner to sociological theory — perhaps for the computer scientists, or CS students, you know? — this would certainly be a great place to start thinking about how to argue questions like “how can a computer be said to be intelligent?”
⭐Interesting book on Artificial intelligence. Will robots take over someday?
⭐The book is very interesting!
⭐This is a long review but I will summarise it in one sentence: the book presents one thesis which is rigorously argued, then deceptively takes us into territory outside the author’s expertise, where its arguments become fallacious and prejudiced.

The serious thesis of the book, loosely expressed, is that making computers as intelligent as people is more difficult than is typically appreciated and will (or would, depending on your point of view) require a huge leap beyond the present state of AI. More precisely, he says that intelligent computers will need to be embedded in society. He devotes considerable discussion to exactly what it means to be embedded in society, and this is where the book is at its best. He is a sociologist, and when he goes into its niceties, such as the debate over whether his subject is objective or subjective, his rigour and logic shine.

Unfortunately, the ideas in the book that are rigorously expressed are not very interesting or new, and the ideas that are interesting and challenging are poorly founded. The trouble comes when he tries to apply his expertise to aspects of AI in which he is not expert. I use the word “aspects” deliberately because it would be wrong to say he is not some kind of expert in AI. He has enjoyed personal communication with many of the leading figures in the field and clearly has a reasonable grasp of the things they have said to him. Nevertheless, he overreaches himself and steps beyond the boundaries of the professed thesis of the book, freely abandoning his rigour at crucial moments for something that smells more like anti-AI prejudice.

To make my case properly I would have to write something approaching the length of the book itself, and unfortunately most people only allow a minute to read a review. Still, I will do my best to explain the issues with a balance of thoroughness and brevity.

Most people in the field are fully aware of his thesis, and acknowledge it. Yes, they do make claims about what future advances may achieve, and perhaps they make loose claims about what has already been achieved in order to support their predictions. (I suspect sociologists are no closer to being paragons of modesty when it comes to presenting the virtues of their discipline.) But this is not where the dispute lies.

The book devotes a huge amount of discussion to definitions of intelligence and whether AI meets or will meet his various definitions. It would seem more sensible to consider instead whether AI will in the near future make the enormous strides some people predict, rather than whether it will make precisely the right strides to justify a particular description. He does touch on the question of what AI might achieve, but here he is way outside his field and unable to contribute much.

Instead he proceeds to an extraordinary fallacy. He argues that computers achieve a level of intelligence that doesn’t quite fit the sociologists’ definition of intelligence, therefore they are stupid, and therefore they are dangerously stupid. I would suggest that if we are worried about computers being dangerously stupid, we shouldn’t be going into the minutiae of the definition of intelligence; instead we should be discussing what computers do, how they do it and how effectively their behaviour can be controlled. The book has nothing to contribute to this.

One of his points is wholly valid – that the power of deep learning conceals important limitations. Unfortunately, this is well known and the book doesn’t offer any new insight. There is already a great deal of useful discussion about the limitations of deep learning, and what alternatives may be needed, among people who actually understand AI.

I will go into his arguments about the Chinese Room, not because they are especially important but because the Chinese Room is an old favourite, and because it is where I noticed the first signs of weakness. His angle on the Chinese Room question is different from the usual. He concentrates on how extraordinarily difficult it would be for the Chinese Room to pass a proper Turing Test. He makes a great deal of his point that to pass the test, it would need to be able to correct spelling mistakes. Well, of course this is right, but so what? Everybody knows it would be extraordinarily difficult to pass a properly constructed Turing Test. Getting a computer to converse like a human would require giving it abilities way beyond the ability to correct spelling mistakes.

I grant that the Chinese Room containing a huge lookup table as postulated by Searle wouldn’t have the ability to correct spelling mistakes, but that is irrelevant, since nobody considers Searle’s version a workable or appropriate way to pass the Turing Test. Collins shows he doesn’t really understand the task facing AI when he apparently thinks the point about spelling mistakes brings something relevant to the Chinese Room discussion.

Worse than that, the justification he gives is that the number of possible spelling errors is infinite, whereas (he says) the number of possible sentences is finite. This is so wrong it seems he has no idea what he is talking about. It doesn’t take much thought to realize that the number of possible spelling mistakes is finite: it is bounded by the alphabet, which is finite, and by word length, which is also finite. I do not believe there is a word in any language a million letters long, nor do I believe the brain could handle one. Therefore the vocabulary, and consequently the number of spelling mistakes, is finite. On the other hand, the number of possible sentences is infinite, since there is no constraint on the length of a sentence. (Though of course in practice the finite life of a human places a bound on the length of a sentence that could be processed.) How could he pretend to be so rigorous while allowing such sloppy reasoning? It is true that the number of sentences including spelling mistakes is bigger than the number without spelling mistakes, but by fixing on that he merely reveals the paucity of his understanding. The number of correctly spelled sentences is already so vast that an extra dimension of vastness is of no consequence.

He then suggests recasting the thought experiment so that only one specific question may be asked and the lookup table contains only one answer. He uses this example to argue that a system may look intelligent and yet not be intelligent, apparently forgetting that a system that can only answer one question won’t look remotely intelligent.

Overall, I find this a most disappointing and deceptive book from an author who should have produced something much better.
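The reviewer’s counting point can be made concrete with a minimal sketch (an editorial illustration, not taken from the book or the review; the alphabet size and the maximum word length below are assumptions): strings over a finite alphabet with a bounded length form a finite set, however enormous, while sentences with no length bound do not.

```python
# Editorial sketch of the counting argument above; the bounds are assumed, not from the book.
ALPHABET_SIZE = 26      # letters available for (mis)spelling a word
MAX_WORD_LENGTH = 45    # assumed generous upper bound on word length

# Every letter string of length 1..MAX_WORD_LENGTH (so every possible spelling,
# correct or not) is counted here: astronomically large, but finite.
possible_strings = sum(ALPHABET_SIZE ** n for n in range(1, MAX_WORD_LENGTH + 1))
print(f"Bounded-length strings (finite): {possible_strings:.2e}")

# Sentences, by contrast, have no agreed upper bound on length: any sentence can be
# extended ("I know that I know that ..."), so the set of possible sentences is
# unbounded and therefore infinite in principle.
```

As the reviewer notes, both sets are far too large for any lookup table, so the finite/infinite distinction does little work in the argument.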
⭐Harry Collins is a sociologist and Distinguished Research Professor at Cardiff University’s School of Social Sciences. This thought-provoking book takes a deep dive into what we mean by ‘intelligence’ and what it takes to pass the Turing Test, arguing that despite extraordinary developments in artificial intelligence, the Singularity is not at hand, but we are in danger of fooling ourselves that it is and thus surrendering to ‘stupid’ machines.

I picked this up after reading the recent news reports about an ex-Google engineer who claimed that a Google AI had achieved sentience. I’ve been reading about AI off and on for the past few years as it’s a subject that I find interesting (even though I don’t have the scientific/engineering background to understand all of it).

Collins comes at this from a sociological perspective (which is something I do have some familiarity with), and as someone who was deeply embedded within the community of physicists exploring gravitational waves (to the point that he passed a Turing Test on it, such that other physicists thought he was an expert in the field), he is able to discuss what we mean by language and knowledge. His basic arguments are as follows:
- no computer will be fluent in a natural language, pass a severe Turing Test and have full human-like intelligence unless it is fully embedded in normal human society; and
- no computer will be fully embedded in human society as a result of incremental progress based on current techniques.

Essentially his point is that computers are not sufficiently socialised into society to be able to pass themselves off as human. They do not “understand” in the way that people do but instead are trained in pattern recognition that doesn’t work when confronted with the contextualisation and natural corrections that humans make when communicating with each other. Collins runs through what he means by this, taking into account claims made by technologists such as Kurzweil about consciousness, and explains why he believes that AI is not analogous to the human brain.

Collins also warns that the way in which humans are enamoured with the vast progress in machine learning makes us susceptible to falling prey to what are actually quite ‘stupid’ computers that we think are solving difficult problems when in fact they haven’t done so at all. His view is that this is a particular danger because engineers have not resolved the issue of social context and socialisation, with the result that the AI is not sufficiently embedded within society to fully understand it and can only become accustomed to it as what it considers to be mistakes are corrected on a case-by-case basis (rather than understanding from the off that they are not necessarily mistakes). What’s important is that Collins is not saying that this is completely impossible, just that it’s highly unlikely given where the technology currently is.

In constructing his arguments, Collins runs through what impossibility claims are, what AI consists of and where it currently is (bearing in mind this book was published in 2019). He then goes on to look at language and how we ‘repair’ misspellings and mistakes in order to make sense of sentences, which is something that AI cannot possibly do. He moves on to examine what we mean by “context” and how humans make sense of context when they communicate, and then goes on to look at imitation games, how the Turing Test works and how you can build a strenuous one to stress-test what stage AI is at and where it might go.

I thought these sections were particularly interesting because, prior to reading this, I didn’t know that there were different strata of Turing Test and what each represented, and Collins draws on his own experience of being embedded within the gravitational-wave group of physicists, which really works well to draw out his points. Also interesting is how Collins takes into account AI programmes such as the one that won at Go, Deep Blue and the programme that won at Jeopardy, and how he demonstrates that, clever though these machines are, they’re not proof that the Singularity is within reach.

All in all, I found this to be a really interesting read that furthered my own understanding of what AI is, where its deficiencies are and what the possibilities are going forward. If you have an interest in AI and machine learning then I definitely think that it’s worth a look.
⭐NOTE: I have a PhD in Artificial Intelligence (GOFAI – genetic algorithms).

“The Winter Is Coming”, and so it (most likely) is. AI has a seasonal winter every 10 years or so, as the great predictions for that generation’s technology fade away. I grew up with the genetic algorithms of early GOFAI, and sadly watched that fade away, and now I agree with the author of this well-written book that the hype surrounding the latest version (deep learning) is also bound to fade.

As I have mentioned before, the “Deep Learning” concept is based on HUGE databases being searched by INCREDIBLY FAST processors to do pattern matching – whether it be handwriting recognition, machine translation, face recognition or whatever. It is what I would call SMART (i.e. it can do stuff much, much quicker than a human) but it is not INTELLIGENT, as it has no idea of what it is actually doing.

There are some lovely examples in this book of how one could “spoof” these search algorithms by (and this is my example) putting up loads and loads of text on the web where you translate some specific Hungarian into “My Hovercraft is Full of Eels”… and SMART but DIM machines will find all these records and their scoring mechanism will give these a high probability of being the right answer.

The main worry (from my perspective), echoed by this author, is that the hype surrounding Deep Learning AI will make us believe that the machines ARE intelligent rather than merely smart – that they understand what they are doing. Suppose you train a SMART missile to recognise hostile targets by the red symbol on their uniforms, but forget to train it that the ones with red crosses are the Good Guys… the machine has NO concept of thinking outside its own little box… and disaster could occur. We’ve heard of driverless cars crashing because their “training” forgot that there might be white lorries looking like the sky.

Anyway – please read this and accept that we have some incredibly SMART and USEFUL machines, but they are NOT intelligent. At least not for a long while, and certainly not using Deep Learning (unless I am completely wrong and the human brain is purely pattern recognition… but I am not qualified to comment 🙂)
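The “spoofing” idea in the review above can be sketched as follows (my own minimal illustration, not code from the book; the phrases, the frequency-based scoring rule and the counts are all made up): a scorer that simply picks whichever candidate translation co-occurs most often with a source phrase in scraped text can be gamed by flooding the corpus with planted pages.

```python
# Minimal sketch of corpus "spoofing" against a naive frequency-based translator.
# Hypothetical data and scoring rule, for illustration only.
from collections import Counter

def best_translation(source_phrase, scraped_pairs):
    """Return the candidate translation seen most often alongside source_phrase."""
    counts = Counter(tgt for src, tgt in scraped_pairs if src == source_phrase)
    return counts.most_common(1)[0][0] if counts else None

# A handful of genuine source/target pairs ...
corpus = [("some Hungarian phrase", "its correct English translation")] * 5
# ... drowned out by deliberately planted nonsense pages.
corpus += [("some Hungarian phrase", "My Hovercraft is Full of Eels")] * 1000

print(best_translation("some Hungarian phrase", corpus))
# -> "My Hovercraft is Full of Eels": chosen purely on frequency, with no
#    understanding of what either phrase means.
```

This matches the reviewer’s point: the machine ranks answers by how often a pattern appears, not by what the pattern means.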
⭐It seems to be a popular trend at the moment to continually quote works of fiction to shore up your line of argument. So. Deep breath. There is no point in describing fictional rogue computing systems to prove your point that artificial intelligence is scary, because they are fictional. A computer is only as good as the human programmer who creates the software to run on it. While I love my robot vacuum cleaner, washing machine, breadmaker and dishwasher, with all their fuzzy logic calculating temperatures, dirt particles, machine load et al., they aren’t going to take over the world, because the human has to spend considerable time maintaining them. In the same way, the great infrastructure of utilities needs all its databases and servers upgrading before they go crishy crashy. Human failings reassuringly break rogue computers. Oh, and spending an entire chapter wailing that you want to spell weird as wierd and the computer (programmer) won’t let you is beyond my patience… just add it to the dictionary in your word processor!
Keywords
Free Download Artifictional Intelligence: Against Humanity’s Surrender to Computers 1st Edition in PDF format
Artifictional Intelligence: Against Humanity’s Surrender to Computers 1st Edition PDF Free Download
Download Artifictional Intelligence: Against Humanity’s Surrender to Computers 1st Edition PDF Free
Download Artifictional Intelligence: Against Humanity’s Surrender to Computers 1st Edition PDF
Free Download Ebook Artifictional Intelligence: Against Humanity’s Surrender to Computers 1st Edition