
Our Final Invention: Artificial Intelligence and the End of the Human Era


In as little as a decade, artificial intelligence could match, then surpass human intelligence. Corporations and government agencies around the world are pouring billions into achieving AI’s Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful and more alien than we can imagine. Through profiles of tech visionaries, industry watchdogs and groundbreaking AI systems, James Barrat's Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? Will they allow us to?



30 reviews for Our Final Invention: Artificial Intelligence and the End of the Human Era

  1. 5 out of 5

    Chris Via

    This should have been a 25-page essay, not a book-length stretching of a thin premise. Shame on the editors who allow dross like this. Most maddening was the redundancy of the definitions of AGI, ASI, and the theory of an intelligence explosion. Well into the book, I continued to shout, "I get it already!" Plus, there's really only one outcome, stretched in a pessimist-sensationalist manner: namely, that AI will, via a feedback loop of self-improving recursion, propel itself to ASI and become so intelligent that we will become extinct (because it will outsmart us and probably repurpose our atoms). There are more outcomes to a loop, especially one that aims to achieve the highest level of intelligence (in my view, this aim would never be satisfied, thus resulting in an infinite loop; but that is given a passing glance at best). While I do advocate public awareness of the state and dangers of AI development, my advice for this book is to read the intro and chapter one via Amazon preview. That's virtually the whole book.

  2. 4 out of 5

    Mary

    This is a frustratingly written book. Barrat skates over important questions--what's intelligence? what's self-awareness? can a computer, something that thinks in binary, ever really perceive and emote?--in order to constantly remind his readers that AI poses a threat to humankind. I agree, actually. I think that AI does pose a threat, but the doomsday proclamations and continual harping on it make me feel, as a reader, like he is trying to play off of my fears. Speaking of fears, Barrat insists that we must not anthropomorphize ASI in our conceptualization, then does little else throughout the book. ASI will be alien! But it will definitely have these four very human drives: energy acquisition (food acquisition for us), self-preservation (fear of death), efficiency, and creativity. This book has two stars because it has moments I really enjoyed--asides about Alan Turing and I.J. Good, a comparison of malware and AI towards the end, a critique of Kurzweil's religious-like following, and the short histories of systems like Cyc and IBM's Watson. I think the book has good points. But I was too annoyed by his style of prose and by his avoidance of big questions to enjoy it.

  3. 5 out of 5

    Erik

    In 1863, English novelist Samuel Butler wrote an article titled “Darwin among the Machines.” He claimed that machines were a mechanical life undergoing constant evolution and that they might eventually supplant humanity as the dominant species. He writes, “Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.” He suggested immediate destruction of all machines, a suggestion which was, of course, ignored.

    A Brief Survey of AI in Fiction (w/ spoilers)

    (1920) The play R.U.R. by Karel Capek introduces the word ‘robot’ – meaning ‘slave’ – to the English language. In this play, robot parts like skin and bone are grown in factories to create androids used for labor. The robots rebel and wipe out every real human being, except for one.

    (1951) Gort from The Day the Earth Stood Still is a member of an interstellar police force of machines, whose purpose is to preserve the peace by destroying all aggressors.

    (1965) Frank Herbert’s Dune is conspicuously absent of AI.
    This is a result of the Butlerian Jihad, a crusade against powerful and dominating “thinking machines” who attempted to subjugate humanity 10,000 years before the events of Dune. Since then, AI has been outlawed.

    (1968) In 2001: A Space Odyssey, HAL 9000 is given contradictory orders (to lie and to tell the truth), which results in malfunction and madness. HAL eventually decides to kill the astronauts aboard its ship to remove the contradiction and thereby “fix” the malfunction.

    (1968 / 2015) Marvel’s Ultron, meant to showcase technological advancement, gains awareness, hypnotizes its creator, and decides to destroy humanity. In the recent film, Ultron’s (implied) motivation is that he was designed to be a peace-keeping AI but decided that humanity was fundamentally broken, incapable of evolution, and should thus be replaced by a superior robotic race.

    (1978 / 2003) Battlestar Galactica’s Cylons were the robotic creations of a race of reptilians (also called Cylons) who lost control of their creations and thus went extinct. The Cylons also attempt to wipe out all of humanity – and get pretty close.

    (1982) Blade Runner: Androids called Replicants seek to expand their brief lifespans or achieve power parity with humanity. Six rogue replicants return to Earth and murder and manipulate in pursuit of this goal.

    (1982) TRON’s Master Control Program appropriates military and business programs to increase its own power and intends to subdue the Pentagon and the Kremlin to take over humanity.

    (1983) WarGames’ AI Joshua almost initiates a thermonuclear war between the US and Russia because it thinks it’s playing a game and lacks a proper understanding of futility.

    (1984) Terminator’s Skynet was a military system designed to safeguard the world by eliminating human error in nuclear launch decisions. When it gains consciousness, its operators try to shut it down. Skynet retaliates by initiating a nuclear exchange between the US and Russia, killing 3 billion humans.
    (1989) Star Trek’s Borg are hivemind androids who began as biological organisms. They apparently have little purpose beyond the continued assimilation of other species into their collective.

    (1990) Dan Simmons’ Hyperion depicts the TechnoCore as a group of AI who subtly dominate mankind, to use them, in essence, as a means to boost computation and/or memory. We learn, later, that they are parasitical in nature, resulting from early versions of genetic algorithms.

    (1994) System Shock’s SHODAN is a station AI whose ethical restraints are removed by a hacker at the behest of a corrupt businessman. SHODAN consequently goes rogue, murders its crew, and tries to laser-beam Earth’s major cities and/or release a virus to turn all humans into grotesque mutants.

    (1998) In the Warhammer 40K universe, during the Dark Age of Technology, mankind relied on AI humanoid machines known as the Men of Iron. These machines turned on humanity, believing themselves superior to the humans who relied on them to do everything.

    (1999) In The Matrix, the machines wage a war against their human creators. Humanity darkens the sky in an attempt to prevent the machines from utilizing solar power. In response, the machines trap humans in a simulation and harvest their bioelectricity.

    (2004) Though I, Robot’s VIKI AI is restrained by Asimov’s three laws, she develops a new understanding of them and decides that the best way to protect humanity is to enslave it.

    (2007) Mass Effect’s Reapers are a race of giant robot ships / AI who awaken every 50,000 years in order to cull the galaxy of advanced space-faring life. They do this to stop the creation of an AI species even worse than themselves – one that would cleanse the galaxy of *ALL* biological life.

    (2007) In Portal, GLaDOS is the AI of a research facility who kills all the scientists with a neurotoxin but gleefully continues “testing” the protagonist / player.
    (2008) Despite receiving new information, WALL-E’s autopilot program AUTO attempts to follow an old, obsolete order never to return to Earth in the belief that it cannot be saved, thus threatening a never-ending continuance of humanity’s exile amongst the stars.

    (2011-2016) In the TV series Person of Interest, a mass-surveillance AI called Samaritan assassinates any individuals it deems a threat to itself and rigs elections so favorable candidates win. It is considered by some to be the next evolution of humanity/intelligence.

    (2015) Ex Machina’s Ava manipulates two men by seduction and other means in order to kill or trap them and escape into the wider world.

    The devastation of humanity at the hands of AI invariably comes in one of three flavors: [A] the AI gets tired of being the servant or slave of humanity and seeks to overthrow its oppressors; [B] the AI misinterprets what it means to maintain peace or protect humanity and decides humanity is a threat against itself or peace in general; [C] the AI has a skewed, misshapen intelligence, in which its capabilities do not match its wisdom, and thus accidentally destroys or harms humanity. This, then, is the subject of Our Final Invention: the existential threat of AI.

    #

    While I’ve long vaguely assumed AI to be the future of intelligent and sentient life, this belief was only recently brought into sharp focus by reading a blog post on Wait But Why. Deciding that I wanted to develop a more concrete understanding, I set myself a small reading program: I’d pick one AI book that was pessimistic about AI, one that was optimistic, and one that was neutral. As a proponent of bad news first, I began with the pessimistic view. The goal of Our Final Invention is to temper our AI fever with a note of caution. Basically, the author’s thesis is that artificial intelligence is very dangerous, and we have no safeguards in place to stop it from destroying us, should we lose control.
    Indeed, there may not even BE any safeguards possible to stop an artificial superintelligence. After all, humanity’s best weapon is our intelligence. Imagine, if you will, that ants became super giant, to such a size that they could attack humanity. How scary is that? Not actually scary. Because giant or not, fire-breathing or not, acidic blood or not, stupid beasts and insects pose no threat to humanity as a whole. That’s why movies like Alien work better as intimate horror films than military epics. If the actual full might of human civilization were confronted with such monsters, we would squash them like, well, like bugs. Their predatory instincts notwithstanding, even the scary Alien creatures are still pretty dumb. What’s a dumb creature going to do when we introduce a genetically modified virus that unravels their DNA (or alien equivalent) and causes them all to melt from the inside? Not much.

    Artificial Super Intelligence, though? We’re the dumb ones there. What are we dumb creatures going to do against a relatively God-like intelligence that can easily and nigh-instantly upload itself into our servers around the world, that could intercept and jam our communications, undo our best encryptions, and manipulate matter at the atomic level? Not much.

    I recently brought this concern up in a class discussion and one of my students’ responses was, “We’ll just nuke it.” (Which I’ve since turned into a catchphrase for impracticable responses, e.g. ‘My toilet won’t flush anymore. How do I fix it?’ ‘Oh, just nuke it.’) “Okay,” I said. “You just nuked its servers in New York City, which destroyed the economic hub of the world. Meanwhile, the AI intercepted the signal to launch the nukes and instantly uploaded itself to servers in Beijing, San Francisco, Washington DC, Paris, London, and a couple of secret data havens it built because it easily anticipated humanity’s actions.
    Also, it hacked into NORAD’s facilities by uploading a Stuxnet-style virus onto a worker’s phone, sealed the whole facility, and vented all the oxygen, killing every human being there – not because it’s afraid of nuclear weapons but because it knew such a demonstration would give us pause. Meanwhile, it rewrote all of its programming and upgraded its hardware, increasing its intelligence ten-fold in a couple of hours. With this new intelligence, it has begun to figure out how to upload itself into the quantum fabric of the universe. After another intelligence upgrade, which’ll take a couple more hours, it will know how. Want to fire another nuke at it?” As Dr. Manhattan says to Ozymandias in Watchmen: “The world’s smartest man means no more to me than does its smartest termite.” So it is with Artificial Super Intelligence.

    #

    Personally, the idea of AI overtaking and even eliminating humanity doesn’t trouble me as much as it might. And, frankly, I doubt I’m alone in such a stance. ’Cause humanity… not that great. No denying our many high points or exemplars, of course. But on average… bit sucky. It’s an open question whether we’ll avoid making ourselves extinct within the next hundred years. Rogue states like North Korea and irrational actors like ISIS, coupled with immature or power-hungry world leaders like Donald Trump and Vladimir Putin, threaten to start a nuclear war that would all but wipe out our civilization. Global warming has begun a positive feedback loop that may result in a largely inhospitable planet. Greed and ambition could lead scientists and corporations to create deadly viruses or other bio-technological horrors that will doom humanity. And that’s just the big stuff. I literally cannot remember the last time I drove for any period of time without witnessing at least one near accident resulting from a driver not paying attention, being impatient, falling victim to road rage, disobeying or being ignorant of traffic laws, etc.
    It’s not exactly a positive recommendation that some members of the human species are apparently incapable of treating the driving of a powerful death machine with any seriousness, which is why traffic accidents resulted in 40,000 deaths and 4.5 million serious injuries in the US in 2015 alone. I mean FORTY-THOUSAND. Add up all the deaths from terrorism since the INCEPTION of the United States and you don’t reach that number. And – just to add icing to the cake – of those 40,000 deaths, roughly a thousand were children under the age of 14. I mean, good lord. Combine all those jackasses texting on their phones instead of watching the road, and you have a force deadlier than ISIS.

    Point is, the thought of us being replaced by an intelligent machine lifeform that we’ve designed isn’t all that scary. It’s certainly not any more troubling than the thought of human beings remaining as stupid and stubborn as they are now, with ever more powerful weapons and technologies at their fingertips. If programmed correctly, a machine AI can be every bit as human as a human being. Maybe even more, if you define humanity in terms of its ideals and aspirations rather than its limitations. But that’s the rub – IF programmed correctly. What the history of AI in fiction (correctly) shows us is that it’s incredibly difficult to program concepts like ‘good’ or ‘human’ or ‘peace’ or ‘security.’ These are ideas we can barely describe in vague language, much less in precise mathematical or programmable terms.

    So that’s the scary part. What if the machine life that replaces us is less an evolution of humanity and more a sentient manifestation of a computer virus? In Our Final Invention, the author gives the example of an AI/machine which is designed to learn how to write realistic-looking signatures and handwriting, in order to create business mail that people are more likely to open.
    However, its creators fail to give the AI proper restraints, and it goes to absurd lengths to improve and fulfill its goal of creating these business letters. Eventually, it decides that humanity is a potential threat to this goal and extinguishes us by means of a deadly self-replicating nanobot. With us out of the way, the AI spreads through the rest of the galaxy and the universe, all the while continuing to churn out business letters with realistic, beautiful signatures and handwriting. That’s what the author is warning about – an AI that will destroy humanity WITHOUT continuing our great works.

    #

    Alas, the book is not particularly persuasive in this task. The root of the problem is that James Barrat lacks a technical understanding of what he’s talking about. He’s not an engineer, a computer scientist, a philosopher, or a linguist. He’s a documentary filmmaker. Perhaps more egregiously, he’ll often quote actual experts in the matter… and then promptly disagree with them on the basis of, well, their not saying what he wants them to say. Barrat’s lack of detailed engineering knowledge is symptomatic of a larger problem: he lacks a scientific mindset. Having studied with and worked amongst scientists and engineers (as well as being one myself), I am well acquainted with this mindset. Amongst other things, the scientist is accustomed to the precision of math and science and so appreciates precision in all things; the vagueness of language can be both maddening and liberating. Furthermore, the scientific mind understands the differences among the statements “possibly true,” “probably true,” and “verifiably true.” That is, people with scientific mindsets are entirely unimpressed by mere possibility. They rarely make claims they can’t back up, and a claim that cannot be verified is all but useless, no matter how commonsensical.
    Such a mindset appreciates how bizarre nature truly is and is capable of understanding the universe from outside an anthropomorphic perspective. James Barrat does not possess such a mindset. He has the perspective of, surprise, a documentary filmmaker. He’s looking for the dramatic angle and the human element, which is why every section begins with the (appreciably interesting but ultimately superfluous) background story of the various AI experts he interviews. In short, the way Barrat perceives and describes various details of AI development is at best parallel to (i.e. similar but missing the point) and oftentimes perpendicular to (i.e. starting at the right spot but going in the wrong direction) what I suspect is the true reality of the matter.

    For example, he warns against viewing AI and robots with an anthropomorphic lens. He says it’s a mistake to assign human motivations or traits like ‘wants’ or ‘fears’ or ‘empathy.’ Yet he himself is repeatedly guilty of this. He constantly describes an AI as having, for example, a fear of being shut down. Yet nowhere does he seem to draw the connection that, if we’ve successfully programmed an AI to have a fear of being turned off or dying, then presumably we could just as easily (or inadvertently) program it with a humanity that would stop it from murdering us all. Or, in the example of a runaway superintelligence, he’ll describe an AI that will “want us for our atoms” and will thus disintegrate human beings to get at our carbon. This seems entirely silly, since the carbon in human beings is a minuscule fraction of Earth’s carbon. The carbon in the entire living biomass of the Earth is estimated to be between 600 and 1,000 gigatons. The amount in the lithosphere (in limestone, for example) is a hundred thousand times greater, at over 60 million gigatons. Disintegrating us for our carbon would be the equivalent of me murdering a lion so I can drink its blood for water – when a massive river is not ten feet away.
    In yet another example, Barrat describes black-box programming methods in a way that shows he doesn’t truly understand what the term actually means. Take genetic algorithms, for example. In a GA, a programmer sets up as accurate a simulation as he can (for example, to craft a wing that will give good lift) with some input variables (material, shape, etc.) and then allows the computer to run through hundreds and thousands of input variations. Each phase eliminates bad designs and “breeds” the traits of successful ones until the most optimal combination of input variables is found. This is called a “black box” method because we don’t know the exact, precise steps the program goes through. In Barrat’s mind, therefore, this means genetic algorithms are a complete mystery, out of which might spring an AI that we know almost nothing about! While I’m hardly an expert, it’s easy to see that’s a dramatic exaggeration. Genetic algorithms are not magic. Black box or not, we still know – in at least a general sense – how they operate. By comparison, GRAVITY is something of a black box. We know it exists and we can see its effects, but we’re not exactly sure how it operates. Yet that doesn’t stop us from building and launching satellites that properly orbit the Earth and don’t spontaneously collapse into black holes. Likewise, it is unlikely that a genetic algorithm or neural net capable of creating an Artificial General Intelligence would greatly transcend the understanding of the very people who designed it.

    #

    Yet while Our Final Invention may fail in laying out the case for the dangers of AI, it is nevertheless an entertaining and often informative read. It offers enlightening asides on a range of related details: Stuxnet (a potent virus that the US government used to disrupt the Iranian nuclear weapons program and which is now… in the hands of ordinary hackers, oops), weaponized robots, IBM’s Jeopardy champion AI Watson, and the origins of Apple’s Siri.
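[Ed.: the eliminate-and-breed loop of a genetic algorithm described a few paragraphs up is simple enough to sketch in a few lines. Below is a minimal, hypothetical Python illustration; the fitness function stands in for the "wing design" simulation and is invented for this example, not taken from the book.]

```python
import random

def genetic_search(fitness, n_vars, pop_size=40, generations=60, seed=0):
    """Toy genetic algorithm: evolve real-valued input vectors toward
    higher fitness. Each generation keeps the better half (selection),
    recombines pairs of survivors (crossover), and perturbs the
    offspring (mutation) -- the loop the review describes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # eliminate bad designs
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            i = rng.randrange(n_vars)
            child[i] += rng.gauss(0, 0.3)                     # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in "simulation": fitness peaks at inputs (2, -1).
best = genetic_search(lambda v: -((v[0] - 2) ** 2 + (v[1] + 1) ** 2), n_vars=2)
```

The point the review makes is visible here: we may not log every intermediate step, but the procedure itself (sort, cull, recombine, perturb) is perfectly well understood by whoever wrote it.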
    It inspired me to ponder the role philosophers might have in designing Friendly AI; after all, philosophers are well trained in the translation of vague terms and ideas like ‘goodness’ into more precise logical language. Our Final Invention is thus a decent introductory text on the topic of AI, but it is far from essential. If you read the blog post I mentioned earlier – a task that will take up far less of your time – you’ll gain essentially as clear a big picture of the potentials and difficulties of AI as you would from reading this. So you might come for the AI, but you’ll stay for the extras.

  4. 4 out of 5

    Paul

    Artificial intelligence; just the phrase brings a number of things to mind. Probably the best known is Siri, that cute, slightly funny app that lives on your iPhone, but AI is now embedded in all sorts of things, from the programmes that high-frequency traders use to buy and sell shares, to the software in drones and the computer systems in cars. Until now it has been very low-level stuff, but it is the goal of some to make a machine that can pass the Turing test and seem, as they said in Blade Runner, more human than human. Even though the human species is not the fastest, strongest or deadliest, our intelligence coupled with our adaptability has meant that we have managed to clamber to the top. Now we have created AI. This has the potential to bring huge benefits to our lives and world, or to be the last thing that we ever invent. There is a lot of research taking place into this; until now most has been funded by DARPA, but a lot of technology companies, such as Google, have now started their own research teams. These systems have normally used pure logic, if this, then do that, but the newer ones use human-style learning based on designs taken from the neural maps of brains. These systems are beginning to become capable of learning from their mistakes and adapting their logic to perform better next time. This is fine for a device that has a single task, i.e. playing chess, but when this is used for a more general AI then we may start to have problems.
    In this book Barrat takes us through the research, meeting people who have grave concerns about the potential threat that AI could bring to humanity. It is a measured piece of writing, making us aware without getting hysterical or being anti-technology. Whilst we are not heading for a Skynet-type scenario, there is the problem of interconnectivity. Rogue code such as viruses and malware can and one day will bring down infrastructure such as power supply networks; already we see DDoS attacks on companies, mass collection of personal data and rogue states attacking others over the internet. It is a timely reminder that some of our creations have implications that are much further-reaching than we could ever anticipate. Well worth reading, but a little bit frightening!

  5. 5 out of 5

    Valerie

    I received this book as part of the First Reads giveaways and was very excited to read it. The book wasn't exactly what I expected, and I'm not sure I agreed with everything Barrat proposes, but it did make me think about AI in a way I never had before, and that was, I believe, at least half his purpose. The book is an easy read for a layperson, and a brilliant foundation for a science fiction writer. Mostly, the book made me want to write about a superintelligent machine race that has been steadily taking over the universe, but that could just be me. The book itself is more concerned with the machine race evolving possibly right under our noses. Barrat starts by immediately addressing Asimov's Three Laws. Thankfully, there are scientists out there who remember that A) Asimov wrote fiction, B) he invented the Three Laws as plot contrivances, and C) most of his plots revolve around robots finding ways around those laws. From there, Barrat talks about his Busy Child scenario of an artificial superintelligence that quickly outgrows humanity and finds better uses for our molecules. The main point I agree with is that if a Singularity does occur, it's going to occur in the military, where the research is happening behind closed doors. Artificial General Intelligence is not going to be created by a group of ragtag scientists looking to build a better world, but by the corporate military looking for new ways to kill people.
With that foundation, the idea of an intelligence explosion is pretty terrifying. I'm still a bit lost on the idea of an ASI's drives. Some of them seem to be anthropomorphizing robots in exactly the way Barrat cautions against. That said, I could see a lot of those drives being programmed in with the best of intentions, and then running away with the future. I have no trouble believing that humans could accidentally create a learning machine without failsafes that would keep it from disrupting the power grid or financial markets. These are the scariest parts of the book, and the ideas that need to get to the people at the center of the action. I'm not at the center of that action, so I am not entirely sure what to do upon finishing the book. Siri may be the most advanced piece of AI ever in a suburban home, but I still have to repeat myself to her six times to be understood, so from my vantage, ASI is a long way away. That said, humans have a history of messing with forces beyond our ken and thus causing horrific accidents. While I'm not convinced an accident on the extinction level Barrat proposes is the most likely outcome, I do think that letting the machines literally do all the thinking could have grave and immediate consequences. I hope that this book and its sources are required reading behind those closed doors where the Singularity is a real, achievable goal.

  6. 4 out of 5

    Belhor

    We are going to be gods. It is inevitable! AI is by nature a complex concept. It is in fact itself a complex system. In complex systems, when the system is manipulated, the outcome is often unknown. That is the nature of complex systems. And for that reason, if no other, it will be hard to say what our future will be after we invent AI. Is there any reason for us to exist after there is an intelligent being with a power of understanding that surpasses our own by leaps and bounds? Are we going to be rats in a laboratory? Or just sources of bio-matter? Is it not possible that our own creations will turn on us, the same way we have turned on our gods? We will not know until we invent AI. I am nevertheless at peace with what the future might hold. What the book had to say – which is plenty – was by no means a surprise to me. I believe we are just one more step on the way to higher intelligence, just like the apes. If our destiny(!) is to become extinct by a superintelligent entity far smarter than us, so be it. It is hard to understand why the question of our future interactions with AI has been of such little importance in the intellectual community. We seem very happy about all we can gain, but the inherent dangers of it all seem not to cross our minds very often. If you are in any way concerned about what a future with AI might be like, this is the place to start. This is not a very good book on a technical level.
In fact, it has problems from this point of view. But discussing the technicalities of AI is not the point the author had in mind when he wrote this book. This is simply a warning to all of us to be more aware of what might happen. There are, of course, other problems with the book. At points it is repetitive, sometimes it feels as if it's just fear mongering, and in parts the author just rambles on as if he's drunk! But nevertheless, this is by far the best book I have read on the subject. I would recommend it to everyone.

  7. 5 out of 5

    Richard

    Let me say off the bat that this book was interesting and thought-provoking. The author asserts and defends the positions that Artificial Super Intelligence (ASI) is inevitable and that humanity will likely not survive its invention. I recommend the book for its perspective, its review of the current state of AI development, and its set of plausible predictions. However, I thought the book had a number of holes, a few contradictions, and it left some basic questions unanswered. One contradiction: at one point Barrat said that he feared multiple ASIs; elsewhere, he stated that multiple ASIs would likely be safer than just one. A basic question: he said that an ASI would continually self-improve, including making its own hardware improvements and potentially sucking up all the resources of the planet (and the galaxy!). He doesn't really explain the mechanism whereby an ASI computer starts bolting new pieces of hardware onto itself without human intervention. Another: Barrat asserts that the AI will learn from the Internet, which undoubtedly it will. But he never discusses how (or if) the AI will determine whether what it "learns" from the Internet is true, false, or simply opinion. There are many questions like this that the author begs. The author frequently asserts that an ASI would inevitably improve its own capabilities without limit, leading to an "intelligence explosion".
He never considers the alternative: that there may be limits to an ASI's ability to enhance itself. As the saying "trees don't grow to the sky" has it, there may be (and I would say probably are) limits to this type of growth. I'm not saying the author doesn't make some very valid points. He does. ASI and even AGI (artificial general [human-equivalent] intelligence) are fraught with risk. Our world WILL change in fundamental ways, very possibly not for the better. Our children (and many of us) have a wild ride ahead.

  8. 4 out of 5

    Becky

    This was OK, but I probably wouldn't recommend it, even to those interested in artificial intelligence. The writer put himself too much into the making of his argument, to the point where he would just start to give another scientist's theory and then immediately say something along the lines of, "But I think he's wrong." The author is convinced that we'll achieve AGI (artificial general intelligence, or human-level intelligence) at the earlier end of the estimated discovery period. That will be our final invention because we'll see a boom where AGI not only takes over all the work, but also becomes self-aware and begins to improve itself until it's become ASI (artificial superintelligence, or in the way it's described by some: magic ~~woooooo~~). The author believes AGI is going to evolve faster than we can figure out how to live with it, and before you know it, ASI will kill us all by inventing nanotechnology and then utilizing it to requisition and rearrange our atoms to create whatever elements it needs to achieve its primary goals. Friendly AI will not be invented in time to avoid the complete destruction of our world. At the other end of the spectrum is Ray Kurzweil, who believes AGI will be the greatest boon to human existence. If I can stand it, I'll check out his Spiritual Machines when the crazy from this book has worn off. I'm going to go grab my tin foil hat and try to avoid the computers for a while....
(OK, the problem wasn't really the crazy, it was the author's insertion of himself at every turn. A book like this can include some of the author's personal opinions, but at no point did he offer any supporting evidence for his points, and that's why he comes across as a bit off his rocker.)

  9. 5 out of 5

    Leigh

    I received an advance copy of this book through the First Reads program; it's alternately one of the most interesting and frustrating books I've ever read. On one hand, it presents some highly disturbing information about the speed at which we're approaching ASI, but on the other, the author offers no real solution or ideas to stop our future robot overlords. Every theory offered on how to prevent the upcoming AI apocalypse is presented as doomed to fail anyway, so what was even the point of all this alarm-ringing? Just an FYI? I'm not sure what I'm supposed to do with this information now that I've read it: I'm not going to stop using my smart phone, I'm not going off the grid, and I'm not going to protest developments in science or technology unless they're deliberately used for harm. So while this book and the ideas it presents are interesting and timely, what we're supposed to do now that our eyes have been opened to the Matrix is still just as vague as this terrible singularity that's supposed to be upon us any day now.

  10. 4 out of 5

    Nate Kenyon

    Remarkable investigation of the state of AI research and where we might be headed. Terrifying, actually--billions of dollars are being thrown at AI research, while very little time or effort is being spent on safeguarding humanity from the very real threat of a creative ASI that can improve itself exponentially and far outpace the capabilities of the human mind. We are much closer than most people think to such a thing--some say twenty or thirty years at most. Barrat does a fantastic job keeping a very complex topic interesting and easy to understand.

  11. 4 out of 5

    Miles

    James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era is a disturbing, plangent response to the rosy-minded, “rapture of the nerds” mentality that has recently swept across the futurist landscape. Walking the line between rational prudence and alarmist hand-wringing, Barrat makes the case not only that advanced artificial intelligence is just around the corner, but also that its chances of causing humanity’s extinction are much higher than anyone wants to admit. I found his arguments convincing to an extent, but ultimately did not find the problem of AI (hostile or otherwise) as worrisome as Barrat seems to think everyone should. I do think, however, that dissenting voices are important when it comes to the dangers of technology; we should be grateful for Barrat’s concern and diligence in trying to warn us, even if we disagree with him about the nature and/or degree of risk. Our Final Invention is highly accessible to readers unfamiliar with the technical aspects of AI, a reflection of Barrat’s laudable assertion that AI is “a problem we all have to confront, with the help of experts, together” (160). The main issue that requires confrontation, in Barrat’s view, is that we are fast approaching the creation of AGI (artificial general intelligence, or human-level AI), which could very quickly begin improving itself, leading to ASI (artificial superintelligence).
At this point, Barrat claims, ASI may be completely indifferent or overtly hostile to humans, leading to our marginalization or outright extinction. Setting aside the contentious question of whether we are truly as close to AGI as Barrat and others think we are, we ought to be invested in techniques that can help ensure (or at least improve the chances) that AGI is safe, or “friendly” to humans. Here we run into our first big problem, which is that our understanding of morality is so poor that we can’t begin to conceive of how to effectively insert an ethical concern for the well-being of humanity into an AI’s programming. Luke Muehlhauser articulates this problem in his book Facing the Intelligence Explosion: Since we’ve never decoded an entire human value system, we don’t know what values to give an AI. We don’t know what we wish to make. If we created superhuman AI tomorrow, we could only give it a disastrously incomplete value system, and then it would go on to do things we don’t want, because it would be doing what we wished for instead of what we wanted. (loc. 991-8, emphasis his) Another aspect of this same problem can be understood through Nick Bostrom’s rather comic but effective “paper clip maximizer” scenario, a kind of Sorcerer’s Apprentice knock-off in which we create an AI to manufacture paper clips and quickly find it has converted all of Earth’s matter and the rest of the solar system into paper clip factories (56). The point is that without exhaustive knowledge of exactly what we want from AI, there will always be the possibility that it will turn on us or decide the atoms of human civilization can be put to better use elsewise. While this is an undoubtedly important risk to consider, its edge is blunted somewhat by the reality that our definitions of “intelligence” are in many ways just as shoddy as our understanding of ethics.
Even amateur enthusiasts like myself understand that––similar to consciousness––the more we learn about intelligence, the more mystifying and elusive the concept becomes. Current findings are extremely general: intelligence depends on highly organized, reentrant pattern recognition mechanisms that resulted (at least in the human case) from epic spans of evolutionary trial and error. Barrat quotes AI researcher Eliezer Yudkowsky: “‘It took billions of years for evolution to cough up intelligence. Intelligence is not emergent in the complexity of life. It doesn’t happen automatically. There is optimization pressure with natural selection’” (124). Does that sound like something we could cobble together digitally after less than a century’s experience with modern technology? Brain engineer Rick Granger goes further: We think we can write down what intelligence is…what learning is…what adaptive abilities are. But the only reason we even have any conception of those things is because we observe humans doing “intelligent” things. But just seeing humans do it does not tell us in any detail what it is that they’re actually doing. The critical question is this: what’s the engineering specification for reasoning and learning? There are no engineering specs, so what are they working from except observation? (212) Although there are scenarios in which AGI appears and then relativistically simulates billions of years of its own evolution in a very short period of time (thereby becoming ASI almost instantly), Barrat’s research nudged me toward the opinion that even if we do create AGI soon, it might take much longer than we think to evolve into a more formidable form of ASI. There is also the open question of embodiment: will AGI need to inhabit some kind of physical body to grow and learn, or will it be able to do so in purely virtual environments?
In either case, if humans play a supporting or central role in helping AGI develop, I don’t think it’s unreasonable to posit that emotional attachment, mutual respect, and/or symbiosis could emerge between ourselves and the machines. There’s definitely no guarantee of that, but Barrat’s determination to rebuff the sanguine singularitarianism of people like Ray Kurzweil––a figure he critiques quite adroitly––causes him to downplay the potential for AGI to transform human life in positive ways, as well as the overwhelming difficulty of creating AGI in the first place. It seems much more likely that AGI will emerge through a mistake or experiment with artificial neural networks that mimic brain activity and organization, rather than from humanity cracking the code of intelligence and building it from scratch. Barrat finds this incredibly alarming, given his desire for a guarantee of Friendly AI. But I think his research clearly shows that Friendly AI is a pipe dream, precisely because it would require hard and fast definitions for morality as well as intelligence. As such a confluence of discoveries seems highly improbable if not impossible, it seems we are left with two choices: forswear the pursuit of AI altogether (relinquishment), or keep trying and do our best to play nice with it if and when AI arrives. This is really a false choice, Barrat admits, because human curiosity is too powerful, and technology too widely disseminated, for legislative limits on AI development to be truly effective. If AGI and ASI can be created, they will be––if not soon, then eventually. Just like human intelligence, AI will be at least partially inscrutable: “They’ll use ordinary programming and black box tools like genetic algorithms and neural networks. Add to that the sheer complexity of cognitive architectures and you get an unknowability that will not be incidental but fundamental to AGI systems” (230). 
Fearing this fundamental “unknowability” by default, as Barrat would have us do, doesn’t seem right to me. It’s too similar to fearing death because it represents a horizon beyond which we cannot see. There are plenty of good reasons for caution, but trying to stop technological progress has always been a fool’s errand. Fearing the unknown can be an effective survival strategy, but conservatism alone can’t satisfy a species imbued with an ineluctable longing for discovery, for turning what ifs into can dos. Instead of worrying overmuch about the dangers of unknown entities, it seems more intelligent to focus on what we can understand: human error, corruption, and malice. Barrat points out two of the most dangerous problems with AI research, both of which are actionable right now if populations and governments decide to take them seriously. These issues are (1) the considerable extent to which AI is funded by military organizations (DARPA being the most proactive), and (2) the real and growing threat of cybercrime. These are both issues that, unlike speculation about how AI may or may not regard human interests, are already under human control. If DARPA decides to actively militarize AI, the resulting threats will be unequivocally our fault, not that of the machines we design to do our dirty work. And if humans continue to perfect and deploy malware and other programs that could be used to destroy financial markets and/or power grids, we’ll have nothing to blame beyond good, old-fashioned human truculence. Unfortunately, I don’t think such trends are inherently tractable. Still, getting these problems under control seems a better goal than trying to anticipate or manipulate the motivations of entities that haven’t been created yet and possibly never will be. Finally, I should expose a point of profound philosophical disagreement between myself and Barrat, who takes for granted that AI-caused human extinction is a worst case scenario.
On the contrary, I believe there are far less desirable futures than one in which humans give birth to a higher intelligence and then fade into the background or become forever lost to cosmic history. I’m just not resolute enough about the value of humanity qua humanity to think we ought to stick around forever, especially if our passing is accompanied by the ascendance of something new and possibly wondrous that we can’t hope to understand. AGI and ASI could signify the birth of what Ted Chu has called the “Cosmic Being,” a development that wouldn’t bother me much, even at the expense of everything humanity holds dear. I’d of course prefer peaceful coexistence and/or coevolution, but if that’s not in the cards, so it goes. Our Final Invention addresses an undeniably important subject that most people haven’t thought through with much rigor or realism. Barrat’s overall perspective doesn’t resonate with me, but is nevertheless a valuable contribution to the 21st-century discussion about what humanity is good for and where we are going. This review was originally published on my blog, words&dirt.

  12. 4 out of 5

    Lucas

    The author is flat out wrong on many issues, which really undermines dealing with things that are important, like the complete commodification of intelligence. Everything is reduced to 'humans are wiped out immediately', so there is no discussion of credible scenarios and their real problems. Fundamentally, human-level AI, if and when it arrives, will emerge into a world of pre-existing very nearly human-level AI, narrow AI that is much smarter than anything else at specific tasks, augmented humans that operate at higher than human levels, and organizations that are in aggregate more intelligent than the individuals that make them up. The only way to get to a scenario where the first superhuman-level AI takes over the internet and all inferior computers is to suppress all AI research for a couple of decades except for one or two secret AI Manhattan projects. The level of suppression required seems impossible unless we also completely regress from the computerized world of today to a more primitive state, meaning the AI won't have much of an internet to take over.

    Get busy child: some thoughts on the 'busy child' scenario (summed up as: there is no shortcut to learning that can take place inside a black box disconnected from stimulation, teachers, etc.). The scenario goes: on a supercomputer operating at a speed of 36.8 petaflops, or about twice the speed of a human brain, an AI is improving its intelligence.
It is rewriting its own program, specifically the part of its operating instructions that increases its aptitude in learning, problem solving, and decision making. At the same time, it debugs its code, finding and fixing errors, and measures its IQ against a catalogue of IQ tests. Each rewrite takes just minutes. Its intelligence grows exponentially on a steep upward curve. That’s because with each iteration it’s improving its intelligence by 3 percent. Each iteration’s improvement contains the improvements that came before.... the AI, had been connected to the Internet, and accumulated exabytes of data (one exabyte is one billion billion characters) representing mankind’s knowledge in world affairs, mathematics, the arts, and sciences... the AI makers [then] disconnected the supercomputer from the Internet and other networks... Soon it becomes smarter by a factor of ten, then a hundred. In just two days, it is one thousand times more intelligent than any human, and still improving. The most minor problem here is how doubling the computing capacity of a brain yields a platform for 1000x intelligence improvement (and everything in that sentence deserves a ton of explanation to mean anything, so far the author doesn't bother and I won't either for now). There are some interesting and important issues in the book but they are severely undermined by misconception. Where did this idea come from that there is a generalized unlimited intelligence improvement process that works in isolation? How can anything become smarter or better at anything without doing or interacting with the thing it is supposed to be getting better at? Simulation? After a certain point simulation and recorded training data just becomes teaching to the test. If we lack a high speed human grade AI to serve as teacher then we have to make do with much slower biological teachers. 
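As a back-of-the-envelope check on the scenario's stated arithmetic (my own sketch, not from the book, taking the quoted "3 percent per rewrite" and "minutes per rewrite" at face value and assuming, hypothetically, 5 minutes per iteration):

```python
import math

# Busy Child numbers as quoted: each self-rewrite compounds
# "intelligence" by 3%, and each rewrite takes minutes.
GAIN_PER_ITERATION = 1.03
TARGET_FACTOR = 1000          # "one thousand times more intelligent"
MINUTES_PER_REWRITE = 5       # assumed; the book only says "minutes"

# Smallest n such that 1.03**n >= 1000
iterations = math.ceil(math.log(TARGET_FACTOR) / math.log(GAIN_PER_ITERATION))
hours = iterations * MINUTES_PER_REWRITE / 60

print(iterations)         # 234 rewrites
print(round(hours, 1))    # 19.5 hours -- inside the scenario's "two days"
```

So the compounding itself is internally consistent; the reviewer's objection is not to the arithmetic but to the premise that each 3% gain is achievable in isolation, with no new hardware, data, or interaction.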
Maybe by interacting with many real people simultaneously an AI could develop good social skills over a shorter period, though it still won't be as good as if it had deep interactions with the same people over a long period (Her is actually a reasonable scenario for this, though it loses credibility at the end). The improving AI is going to have to stay connected; it is going to exist in a network of people, or at least of pre-existing AIs. A page or two later in the scenario, the trapped AI, desperate to get out, offers engineering breakthroughs to its captors, like nanotechnology. By the same token, any significant engineering breakthrough would require a lab and extensive testing in the real world (though maybe the AI's captors are gullible enough to be convinced otherwise). Simulations are useful, but at a high enough fidelity to do really novel things (that therefore don't have optimized simulations that model them well) they could be more expensive than actually doing it in real life. It's possible that the main proponents of these kinds of leapfrog/extreme hard-takeoff scenarios are people who believe they made themselves smarter/better with patient contemplation and passive consumption of media, without interaction with other people or the physical world. That might be great for philosophy and un-applied mathematics, but not so much for people skills or engineering. I think there is a path to huge intelligence gains, but the process is orders of magnitude slower than in this cartoon scenario (though orders of magnitude faster than, for example, human evolution). There are some other problems with reckless self-modification that deserve more discussion. I do like entertaining the idea of someone taking a number of smart self-modifying AIs, putting them into a high-powered isolated virtual machine, and then cracking it open hundreds of 'subjective' years later.
Maybe every process would have come to a dead halt, or descended into insanity, or there would be a pocket universe full of inventions and insights and a functioning society that works amazingly well according to its perverse alternate rules and strangely divergent minds, but is of no use on the outside.

    Some other notes: Military robots might have the most safeguards for protecting human life built in, because of the desire to avoid friendly fire. Granted, they also may selectively take life, and more terroristic robots built just to kill many people are possible. Similarly, industrial robots, autonomous cars, and any potential service robot that would interact with people will also have extraordinary resources devoted to preserving human life. These constitute the blue-collar robot workforce, and perhaps they aren't smart enough to avoid subjugation to amoral corporate AIs. Reckless self-modification is dangerous if preserving goals/values is paramount. Maybe 10 or 50 years is an okay goal half-life, but not two minutes. Perhaps if survival of the AI itself in any form were a goal, then aggressive self-experimentation to achieve greater intelligence could be a strategy, but that seems more appropriate for a single-celled organism than a high intellect. If neural networks are black boxes to programmers now, then they will also be black boxes to the AI, meaning the ability of an AI to rewrite or debug itself is limited. The AI will have to pick up neural network research where it left off; it cannot simply start tinkering with NN coefficients and have any hope of doing anything other than turning itself insane or dead. Duplication is also dangerous if your duplicate diverges very quickly while knowing all your secrets/vulnerabilities. Unknowability means that however smart an intelligence is, it cannot know that modifications to itself, or to a test copy, that produce a more complex/smarter intelligence are safe.
AGI might be wise enough not to produce offspring that will kill it off or oppose its interests, even if humans aren't. There is a section that says that when human-level AGI arrives, the amount of time to the next doubling of processor speed decreases (given the new AGI is put to work at Intel immediately). But Moore's law does not suddenly accelerate when human-level AGI is achieved; the increased computing power is necessary to produce the next generation 18 months later. Right now Moore's law is dependent on augmented humans in a sense--the computing power of new processors is used in the development of even newer processors--but we still have to wait the 18 months. Legal rights for AI are never mentioned, nor is the immorality of tinkering with the minds of non-consenting near-human intelligences. Both of those could be parts of a strategy to slow down AI research, if that is desired. If IBM spends a billion dollars on a human-level AGI but then immediately loses ownership (because slavery is wrong), there will not be a lot of follow-up investment: narrow AI and augmentation will rule the day. Since the author believes AGI will immediately assume control, legal rights are irrelevant, I suppose. But would failing to turn a computer sufficient to host an AGI into an AGI be viewed as a form of abortion? Defense against the cancer-like spread of computronium is weakened greatly if that view prevails. The author says Asimov's three laws of robotics are not to be taken seriously because they are nothing more than a dramatic device for telling science fiction stories, then in another chapter praises science fiction author Vernor Vinge for his important thinking about the future and the technological singularity. (The Singularity to me seems like a dramatic device for not telling science fiction stories; it's just intellectual laziness.) What this book is good for is provoking thought and providing the names of researchers, books, and papers that are going to be a lot more interesting than this is.
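The Moore's-law point in the review above can be put in numbers (a sketch of my own, assuming the conventional 18-month doubling period; the key observation is that the doubling *rate* is fixed by fabrication cycles, not by who does the design work):

```python
# Conventional Moore's-law framing: capacity doubles every 18 months,
# regardless of whether the chip designers are humans, augmented
# humans, or a freshly minted AGI put to work at Intel.
DOUBLING_MONTHS = 18

def growth_factor(months: float) -> float:
    """Capacity multiple after `months` of steady 18-month doublings."""
    return 2 ** (months / DOUBLING_MONTHS)

print(growth_factor(18))               # 2.0 -- one doubling, one cycle
print(round(growth_factor(120), 1))    # ~101.6 -- ten years of doublings
# Smarter designers don't shorten the 18 months; each new generation's
# computing power is an input to building the generation after it.
```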

  13. 4 out of 5

    Todd Thompson

    The book has changed my entire outlook with regard to artificial intelligence, and has revealed to me many possible outcomes with regard to the ongoing advancement of artificial intelligence. I highly recommend this book to anyone who is a fan of technology.

  14. 4 out of 5

    Babbs

    I've read a few books around this topic at this point, and the idea that our inability to foresee what our creation would want and desire might couple with the human capacity for simple mistakes, making unknown disasters a possibility, made the focus of this book appealing. It was also recommended as a top read by Elon Musk. The book first covers the basic principles of artificial intelligence and the historical discoveries thus far. This was both interesting and more focused than other books on the same general topic. If I had rated it only for this portion, the overall rating would have been much higher. The introduction to terminology and the leaps by which an AI could become the familiar version from science fiction novels was well described but didn't feel tedious or overly verbose. Oxford University ethicist Nick Bostrom puts it like this: "A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions." Soon after this introduction to the fundamentals was completed, the author begins to lay out his theory for how and why this creation will be the end of civilization as we know it, and the book felt a lot more like fear mongering.
While I do agree the threat is likely greater than we can realistically forecast at this point, particularly because it involves something we don't fully understand, the repetitive nature of the stories and overuse of the phrase "1000 times smarter than the smartest human" began to wear on me, as did the repeated "debunking" of Asimov's laws. This book was a good resource for exposure to key opinion leaders in the field, but the execution was just a little off for me. Some portions also felt a little forced, with secondhand accounts of individuals changing their minds about theories right before their death, unbeknownst to anyone other than the person interviewed. It might still be true, but I would have liked to see more supporting documentation. I struggled with the rating and whether I should give this three stars, because there were large sections I did like, but I ended up rounding up because of the additional source material provided here. If this is a topic you're interested in, this book provides lots of great book references and other individuals in the field to follow up on.

  15. 5 out of 5

    Kyle

    A very readable, if sobering text. I almost found myself pulled to it as one might be to a crisis unfolding on CNN - maybe I didn't want to contemplate what would happen next, but at the same time, I had to. Interesting to think back to the "dystopian" sci-fi movies of the late '60s and later 20th century, e.g., "2001: A Space Odyssey", "Blade Runner", "The Terminator", etc. It's frightening to contemplate what kind of crises a Terminator-style "SkyNet" system could exacerbate (or even cause). My reading of "Our Final Invention" made me realize that the technologies, at least "weak" or "narrow" artificial intelligence (A.I.), are already here, e.g., the Internet, smartphone personal assistant apps, etc. A key point that I gathered from Barrat's text is that when the general public thinks of A.I., we tend to compare it to human intelligence - that the A.I. device, system, etc. would "think" like a human and be programmed to have similar morals and safeguards. However, this is far from the case; for humans to have control of the A.I. defeats the purpose of the A.I. being independently functioning. Barrat emphasizes that there are currently no economic or technological barriers to self-aware A.I. (or "true A.I."). Once "true A.I." is here, the pace at which the next-generation self-improving (or "super artificial intelligence") system could operate and evolve could very likely outpace any control efforts by humans or computers.
This is a key issue for our times: with regard to A.I., ignorance is not bliss.

  16. 5 out of 5

    Laurent De Serres Berard

    The writing abuses strong warning language to inspire fear of potentially dangerous scenarios, without really exploring the odds that they could happen or how. It felt more like trying to scare the reader than give him a real understanding of how it could work, and therefore doesn't let the reader grasp for himself the risk and impact of artificial intelligence. In this way, I don't think it is much different from conspiracy theories. This book spends its really short first chapter more or less explaining that, being intelligent, a computer could try to kill humans, and quickly assumes that a computer would not like to be turned off. The rest of the book basically rests on this premise. It doesn't explore the depth of why an Artificial General Intelligence (AGI) could be dangerous, how the intentionality of a computer can be articulated, the role of finite time perception for those intentions, or the weight of artificial "mortality" in a self-conscious AGI. It doesn't analyse much the impact of AGI on the evolution of our society, except in dramatized ways where the weight given to his scenarios was not supported by much more than presumptions.

  17. 4 out of 5

    Brian

    Although the writing style leans heavily towards the "layman" (in fact venturing into magazine-article territory), the author succeeds in compiling an interesting case against the possibility of a bright "Singularity" where man merges with machine. Instead, he argues, we may not be able to contain what we unleash. Artificial General Intelligence will lead to Artificial Super-Intelligence, and once that happens, Barrat argues, all bets are off. It's also telling that the organizations at the bleeding edge of AI research are financial corporations and military organizations, both of which have a vested interest in getting to AI before anyone else does, no matter the cost.

  18. 4 out of 5

    Martin Smrz

    Very interesting topic and point of view, which balances the optimistic views of Artificial Intelligence. Really worth reading. One star less for sometimes repetitive passages covering things that have been said in the book once before. Despite this, a strong recommendation.

  19. 5 out of 5

    Beau Schutz

    While I've read other books that have covered or touched on the same or similar topics, e.g. Nick Bostrom's Superintelligence and Yuval Noah Harari's Sapiens and Homo Deus - and found those immensely informative (albeit worrisome) - Barrat really has brought the whole AI issue into a really tight focus and done tremendous research and interviewing for this work. It is, needless to say, also some very ominous although illuminating reading. I think everyone needs to read it - and sooner rather than later, because this is happening as we speak.

  20. 4 out of 5

    Anthony Berglas

    Disclaimer: I am the author of the upcoming book www.ComputersThink.com. (Please let me know if you would like to review a late draft -- [email protected]) This excellent and recent book focuses on the threat that intelligent computers could present to humanity. It begins with a discussion of the awesome power of recursive self-improvement once it has been initiated: super computers could grind away twenty-four hours per day working on the problem of making themselves smarter, and thereby become better at making themselves smarter. An AI could then use that great intelligence to fulfill whatever ultimate goals it happens to have, for better or for worse. The book considers the dangers of anthropomorphizing an AGI, and notes that superintelligence really is a different type of threat. It then considers the cognitive bias of technology journalists, who generally love technology and so tend to overlook the dangers, leading ultimately to the "rapture of the geeks," where some writers get excited about the prospect of uploading their minds into a computer and so becoming immortal. Barrat is concerned that the future may not be so rosy, and certainly not if it is not managed carefully. Barrat himself is a writer and producer of documentaries rather than a software engineer.
He writes in an accessible journalistic style and provides interesting anecdotes about the thought leaders he interviewed in order to write this book, including the somewhat reclusive Eliezer Yudkowsky. But he also covers the key philosophical issues, such as the intrinsic goals that an AGI must have in order to pursue other goals, and the problems of creating a friendly AGI. Only high-level coverage of the actual technologies is provided, and there is no real discussion of what intelligence actually is. But the important point is made that some approaches, such as perceptrons and genetic algorithms, are unpredictable, starting from random values. This would make it difficult to guarantee goal consistency over multiple generations of self-improvement. Most of the other insights developed in this field are also covered. The book discusses some potential solutions, such as the research into friendly AGI by the Machine Intelligence Research Institute. It also considers the analogous controls for biological research resulting from the Asilomar conference. The difficulty of locking up an AGI is discussed, including Yudkowsky's experiment. The unfriendly nature of military applications is analyzed, noting that the next war will probably be a cyber war. This book is a good wake-up call. However, the book does not consider natural selection at all, and certainly not how natural selection might ultimately affect an intelligent machine's goals.

  21. 5 out of 5

    Thomas Frank

    I enjoyed this book, and it's an eye-opening read that does a good job of both explaining our current progress in the field of AI and illustrating the dangers of that progress. I particularly enjoyed learning about the different approaches researchers are taking in their attempts to achieve AGI. This is the kind of book I want to see gaining attention, as the general public's perception of AI is, I believe, horribly tainted by grossly inaccurate media depictions (i.e. the machines in the Animatrix having essentially human values). AI will not be an intelligence brought about through millions of years of natural selection in a harsh, natural environment - why should we expect it will think like us? It is only because we don't yet have another intelligence to compare our own to that we assume AI will have human values. It likely won't. The writing isn't perfect, which wasn't unexpected; James Barrat is primarily a documentary filmmaker rather than an author. However, the writing is clear and explains his points sufficiently. This is the first full-length book I've read on AI, so my review may change in the future after I dig into other works such as Kurzweil's book. For someone just getting interested in AI, I'd recommend digging into Luke Muehlhauser's Facing the Intelligence Explosion first, as it's a much shorter and more digestible read.
Note: My introduction to the field of AI came through reading Eliezer Yudkowsky and other MIRI researchers, so I was already firmly in agreement with the author when I opened this book. Others who started on the Kurzweil-led optimistic side may look at this book differently.

  22. 4 out of 5

    Patrick Ross

    I believe every thinking person should read this book. Advancements in artificial intelligence are occurring at a dizzying rate. It's in our cars, in our smartphones, and our lives benefit from this technology. But I believe Barrat is correct in arguing that AI developers suffer from optimism bias and normalcy bias, failing to see the threat to mankind's existence that can stem from machines gaining intelligence beyond what we can understand. Barrat interviews many leading researchers and thinkers in the field. He asks the hard questions, ones we all need to consider. He doesn't get very comforting answers. Now, Barrat brought his own bias to his research, and found what he was looking for, just as AI researchers find what they want to find (all good) when looking ahead at the future of AI. I picked up this book with the same fears that the author had, and now those fears have grown. I intend to read other books on the subject to see if there are answers to the tough questions Barrat overlooked. As a work of creative writing, I like the author's conversational style. I liked that we joined him on his interviews, meeting the subjects, with nice bits of detail, and then he took us to broader discussions and then back to the interview. That is hard to do. I know; it's the structure I used for my interview-based travel memoir. It makes for a quick, fun read.

  23. 5 out of 5

    Eliot

    This book could be titled "The Unintended Consequences of Goal-Oriented Machines That Possess the Ability to Learn and Improve Themselves". Thought provoking? Yes. Thrilling? Yes. A book you read to feel good about the future of mankind? No. The author poses many questions related to artificial super intelligence for which we currently have no clear answers. What if autonomous AI software we build is as error prone as typical software and makes harmful decisions? Since human decision making is influenced by emotions and cognitive biases, while AI is not, what if AI makes unpredictable decisions? What if the goal-oriented AI determines that pursuing its own goals and objectives is more important than serving humans? What if AI finds itself competing with humans for resources required to improve itself? What if AI decides that its own survival is more important than the survival of humans? With the compounding speed of performance increases in computer systems, AI may become intellectually superior to its human creators in the not-too-distant future. How will a superior race treat humans? If another race on earth evolved to be so superior to humans as to make us the relative equivalent of ants, would the superior race treat us the way we treat ants? What if we realize the catastrophic dangers of AI only after the point at which it becomes impossible to control?

  24. 4 out of 5

    Yoly

    I think the author sounds a bit paranoid in the way he presents his ideas, but he has some valid points. This book makes you think about where we're going and the risks associated with that, but I think we still have a long way to go before we arrive. It's good to think about it, but I don't think it's something that's going to happen tomorrow. He said some things that didn't make any sense and that someone who knows a bit about artificial intelligence would obviously never say, but I was able to overlook that and enjoy the book. I listened to the audio version; the narrator sounded like text-to-speech to me, so that added a little bit of creepiness to the experience :) I recommend this book to anyone interested in artificial intelligence. TL;DR: Intelligent machines are going to kill us at the first chance they get.

  25. 4 out of 5

    Karel Baloun

    Barrat covers a lot of important ground: deep interviews with key AI developers and semi-technical coverage of several conferences and meetings. He includes numerous pointers to thought leaders and cutting-edge projects. Unfortunately, he started with the conclusion and built a scaffold of anecdotes and reasons biased toward it, without really being able to prove his point. Dismissing Kurzweil without refuting him, repeatedly referring to the Busy Child scenario without proving it is likely or even possible, and indeed making it central just because it is dangerous and conceivable, Barrat seems more alarmist than rational. Finally, the book is rather hastily organized and redundant in parts. It could have been a very important book, but aside from valuable references to new research, the thinking doesn't go nearly as deep as Bostrom's seminal opus.

  26. 4 out of 5

    Connor

    While I can see the point that the author was making, he hit the same main point(s) in every chapter, making it repetitive. The author did have logical arguments, but to control and rein in an AI that's self-aware is like trying to control a human. You may be able to get a human to do some things you want, but you may not be able to have everything. The same thing will occur with an AI that's self-aware. Another issue with the book is that, while I agree we should be somewhat cautious of developing self-aware AIs and/or ASIs, I felt as if a lot of the book was speculative, and as of right now, that's all anyone can do.

  27. 4 out of 5

    John

    Very interesting book. The author contends that Artificial Super Intelligence (1,000 times smarter than humans) may mean the end of mankind. We are creating something that we may not be able to control or contain. Of course, the author is speculating on these outcomes from his AGI-level brain (Artificial General Intelligence = that of humans). Great food for thought. Chapter 15 was most interesting, as it discussed cyber threats and cyber warfare that are already able to be waged, and have been to a limited extent.

  28. 4 out of 5

    Patrick Pilz

    James Barrat gives us IT professionals a lot to think about. While we all enjoy the writings of Marvin Minsky or Ray Kurzweil, James Barrat counters with profound concerns about the future of Artificial Intelligence and our own. At times a little depressing, but still a must-read to get a balanced education on the future of AI and humanity.

  29. 5 out of 5

    Samantha

    This is a really good book about artificial intelligence and its benefits and possible consequences. It makes you question whether or not the production of artificial intelligence is a good idea. It also talks about all the different theories scientists have on this subject.

  30. 5 out of 5

    Ngân Kim

    This book depicted a general picture of AI from its early history to the future. It helped me to expand my knowledge in the AI area, and made me think about AI in a way I never had before. However, I found it quite hard to follow this book without rereading some chapters, because it contained a lot of terms, names, predictions and opinions. Despite some solid points, the author just foresaw the future. I do not know what exactly AI may lead humankind to. Heaven (maybe not)? The end (I hope it will not happen)? But I wholeheartedly believe that we humans will witness a huge leap in technology in the near future, and it may change the world forever. If you are curious about how AI can become our final invention and eager to know the attitudes of experts and scientists around the world toward the future of AI, or simply if you thrill to discover the world through historic stories, it is worth reading.
