
The Precipice: Existential Risk and the Future of Humanity


If all goes well, human history is just beginning. Our species could survive for billions of years - enough time to end disease, poverty, and injustice, and to flourish in ways unimaginable today. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, where we face existential catastrophes - those from which we could never come back. Since then, these dangers have only multiplied, from climate change to engineered pathogens and artificial intelligence. If we do not act fast to reach a place of safety, it will soon be too late. Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies that can safeguard humanity.



30 reviews for The Precipice: Existential Risk and the Future of Humanity

  1. 4 out of 5

    Stefan Schubert

    This book conveys the Future of Humanity Institute world-view; the result of 15 years of research. It's a whole new way of looking at the world and what our priorities should be. The book is meticulously argued, rich in facts and ideas, surprisingly accessible, and beautifully written.

  2. 5 out of 5

    Jan

    The idea of the whole existence of humanity being threatened got so much attention in sci-fi that for many people it's somewhere in the vicinity of "aliens landing", if not "zombie apocalypse". Toby Ord convincingly argues that this is not the case: the chances that humanity may go extinct in the near future are not that small. Actually, I think the likelihood of dying because of some global catastrophe is larger than e.g. the risk of dying because of a traffic accident. Taking this seriously could be quite a perspective-changing event. What's also striking is how tractable it is to do something about some of the risk scenarios. Overall highly recommended.

  3. 4 out of 5

    Fin Moorhouse

    The book discusses the risks of catastrophic events that destroy all or nearly all of humanity's potential. There are many of them, including but not limited to the Hollywood scenarios that occur to most people: asteroids, supervolcanoes, pandemics (natural and human-engineered), dystopian political 'lock-in', runaway climate scenarios, and unaligned artificial general intelligence. The overall risk of an existential catastrophe this century? Roughly one in six, this author guesses: Russian roulette. Clearly, mitigating existential risk is not nearly treated like the overwhelmingly important global priority it is: not in our political institutions, nor in popular consciousness. Anyway, it's excellent - highly recommended. It was also full of some fairly alarming and/or surprising facts. So in place of a full review, here are some highlights:

    The Biological Weapons Convention is the international body responsible for the continued prohibition of bioweapons, which on Toby's estimate pose a greater existential risk by an order of magnitude than the combined risk from nuclear war, runaway climate change, asteroids, supervolcanoes, and naturally arising pandemics. Its annual budget is less than that of the average McDonald's restaurant. (p.57)

    Remember that quotation attributed to Einstein that "If the bee disappeared off the surface of the globe then man would only have four years of life left"? Firstly, it's not true - a recent review found that the loss of all pollinators would create a 3 to 8 percent reduction in global crop production. Secondly, Einstein never said it. (p.118)

    Technological progress is really hard to predict - "One night in 1933, the world's pre-eminent expert on atomic science, Ernest Rutherford, declared the idea of harnessing atomic energy to be 'moonshine'. And the very next morning Leo Szilard discovered the idea of the chain reaction. In 1939, Enrico Fermi told Szilard the chain reaction was but a 'remote possibility', and four years later Fermi was personally overseeing the world's first nuclear reactor." Furthermore, at the start of the 20th century, many thought heavier-than-air human flight to be impossible. Wilbur Wright was somewhat more optimistic, guessing it to be at least 50 years away - two years before he invented it. (p.121)

    The UK has four levels of 'biosafety'. The highest level, 'BSL-4', is reserved for research involving the most dangerous and infectious pathogens. The 2001 outbreak of foot-and-mouth disease caused economic damages totaling £8 billion and the slaughter of some 6 million animals to halt its spread. Six years later, a lab was researching the disease under BSL-4 security. Another outbreak that year was traced back to a *leaky pipe* in the lab, spreading the disease into the groundwater. "After an investigation, the lab's license was renewed - only for another leak to occur two weeks later." (p.130)

    Nuclear near misses have been *terrifyingly* frequent. The book lists more than ten examples. Here are a few:

    1958. "A B-47 bomber accidentally dropped a nuclear bomb over South Carolina, landing in someone's garden and destroying their house." (The warhead remained in the plane.)

    27th October 1962. Four nuclear submarines had been sent by the Soviet Union to support their military operations in Cuba during the height of the Missile Crisis. A US warship detected one of these submarines and tried to force it to surface by using depth charges as 'warning shots'. The submarine had been underwater for days and had lost radio contact for as long - and with it, information about the situation unfolding above. Moreover, being designed for the Arctic, the submarine was breaking down in the tropical waters. Temperatures ranged from 45°C to 60°C as carbon dioxide began to accumulate. Crew members were falling unconscious. The captain, Valentin Savitsky, guessed from the bombardment that war had broken out. He ordered his crew to prepare the submarine's nuclear weapon. "On any of the other submarines, this would have sufficed to launch their nuclear weapon. But by the purest luck, submarine B-59 carried the commander of the entire flotilla... [who] refused to grant it. Instead, he talked Captain Savitsky down from his rage." (p.4)

    28th October 1962. The very next day, a US base on a US-occupied Japanese island received by radio an order to launch its nuclear arsenal. "All three parts of the coded order matched the base's own codes, confirming that it was a genuine order to launch their nuclear weapons." Captain William Bassett took command and became responsible for executing the order. But he grew suspicious - a pre-emptive strike should already have hit them, and the threat level was set to DEFCON 2 rather than the highest level of DEFCON 1. So he radioed the Missile Operations Centre to check, and received the very same order. A lieutenant in charge of a different launch site told Bassett he had no right to stop the launch given the order was repeated. "In response, Bassett ordered two airmen from an adjacent launch site to run through the underground tunnel to the site where the missiles were being launched, with orders to shoot the lieutenant if he continued without either Bassett's agreement or a declaration of DEFCON 1". Bassett then called the Missile Operations Centre again, who this time issued an order to stand down. This story is still disputed and was only made public in 2015.

    26 September 1983. Just after midnight, the Soviet early-warning system designed to indicate nuclear launches from the United States showed five ICBMs heading towards Russia. The duty officer, Stanislav Petrov, was under orders to report such a warning to his superiors, who in turn were instructed to retaliate in kind with immediate effect. "For five tense minutes he considered the case, then despite his remaining uncertainty, reported it to his commanders as a false alarm." (p.96)

    Norman Borlaug was an American agronomist who developed new high-yield and disease-resistant varieties of wheat during the so-called 'Green Revolution'. He is often credited with having saved more lives than any person who ever lived. Estimates range from 260 million to over a billion lives saved. (p.97)

    Finally, the total amount of money expended annually on researching and mitigating existential risks is dwarfed by the amount of money spent annually on ice-cream, by about two orders of magnitude.

    Again, strongly recommended; but not remotely comforting.

  4. 4 out of 5

    Otto Lehto

    From the opening words to the closing paragraphs, you can sense the urgency at the tip of the author's tongue. Mild mannered though his philosophical style may be, there is a sense of apocalyptic poetry in the channeled desperation that drips from the pages like molten candle wax. Even in good times the feeling of existential anguish is no stranger to any sane person's sensibility. Bad times weigh all the more heavily on our hearts. The extra blanket of terror that has settled down on humanity as a side product of globalization and the nuclear age has permeated our awful nightmares. What will a human future look like? Do we even HAVE a future? Contemplating our demise, the eternal darkness of humanity, can be paralyzing, senseless, and necessary.

    Toby Ord's greatest achievement in The Precipice is to play the "existential dread" card tactfully. You can see that the author is a philosopher who prefers reason to passions. He allows the feeling of anguish to momentarily tug on our terrified heart strings in order to motivate our passions sufficiently to get the reasoning going (following the sentimentalism of David Hume). But he then forces the passions to abscond while he rolls out the red carpet for calculating reason as soon as the analysis demands it. As a result, the analysis is mostly confined to a "safe space INSIDE terror," a space of statistics and numbers, a court of motivated reasoning surrounded by a sea of terror.

    The book does not shy away from controversial but empirically supported positions. For example, Ord emphasizes that global warming is only one among many threats that we face and that it has only limited capacity to pose an existential threat to humanity (as opposed to a severe but nonlethal economic, social, and environmental threat of the kind that it already clearly does). He does not shy away from showcasing the irrationality of the opposition to civilian nuclear power while highlighting the pervasive and lingering threat of nuclear genocide through war. He uses credible statistics to show that an asteroid impact, although a real existential threat in the long run, is not an imminent one, so we should be more urgently worried about other threats. He ends up emphasizing the existential threat of A.I. and technological development as perhaps the most potent threat that humanity faces, in a time when few people seem to care much about it.

    Powerfully, to motivate long term survival, the author sings sweet songs about the untapped potential of human evolution throughout the book, and especially in the final chapters. There is a surprising amount of visionary thinking going on, not only in terms of dystopian futures and species extinction, but also about potential lands of milk and honey at the end of history. This hammers home the real opportunity cost of extinction: the negation of future utopias. And I agree: unimaginable utopias have the potential of being realized if only we play our cards right. Preventing the extinction of human potential as the fountainhead of future development is a much greater reason to keep on the straight and narrow than the survival of the flawed present. This, however, is a realization that does not come naturally to most people without extensive motivation.

    Following from the above, I have my disagreements with Ord's methodology and conclusions. For example, Ord attempts to show that we should care about future generations (almost) as much as we care about today's generations. I think that this is practically hard to motivate. Most people simply cannot be bothered to think about future generations or to know what to make of them. Furthermore, if we really took unborn lives to be worth so much, how could we justify abortion or even contraception? Even abstinence would be equivalent to a minor genocide, it seems. It is not obvious that we should care about potential people who do not exist and may never exist. He also takes some cheap shots at Epicurus's argument about why we shouldn't worry about death, without bothering to argue against it; from a fellow philosopher, I would have expected more.

    Secondly, the category of existential risk downplays all risks that fall short of it but still seem to be something we should worry about. This means that the category is perhaps too strict and not very illuminating. It says little about threats that allow humanity to survive but in a crippled or miserable state. Thinking about existential risk - the kind that annihilates the very possibility of humanity - as a separate category from ordinary risks is certainly a useful distinction to make. But I feel that, for many people, there isn't much of a difference between, say, a nuclear winter that annihilates 95% of humanity and an existential catastrophe that wipes out 100% of us. If people could avert the latter by bringing about the former (especially if they themselves and all their loved ones would be wiped out in the process), I feel that it wouldn't be much of a consolation. So, I think that existential risks and near-apocalyptic catastrophes are pretty much in the same ballpark of "super awful". It is going to be hard to convince many people, myself included, otherwise.

    All that said, Ord's book shines as a warning beacon AND a Promethean torch. The book's ambiguous message combines techno-utopian hope with techno-dystopian terror. It challenges our common assumptions about which particular categories of risk we should most worry about by laying down some facts and figures that are bound to be instructive. And it contains a criticism of our short-sighted institutions. Markets are driven by quarterly capitalism and politics is driven by the electoral cycle. Neither is capable of thinking decades, let alone centuries, ahead. But the lessons are crucial. Some of the math about expected costs and benefits is hard to compute, and we can debate the normative dimension of caring about future generations, but we should learn to recognize the category of existential risk as a separate policy task that demands care.

  5. 4 out of 5

    Jake

    The following excerpt from Toby Ord's book gives his estimated probabilities that within the next 100 years our entire species will go extinct, or our civilization will collapse to an irreparable degree:

    "Existential catastrophe via:
    Asteroid or comet impact ∼ 1 in 1,000,000
    Supervolcanic eruption ∼ 1 in 10,000
    Stellar explosion ∼ 1 in 1,000,000,000
    Total natural risk ∼ 1 in 10,000
    Nuclear war ∼ 1 in 1,000
    Climate change ∼ 1 in 1,000
    Other environmental damage ∼ 1 in 1,000
    "Naturally" arising pandemics ∼ 1 in 10,000
    Engineered pandemics ∼ 1 in 30
    Unaligned artificial intelligence ∼ 1 in 10
    Unforeseen anthropogenic risks ∼ 1 in 30
    Other anthropogenic risks ∼ 1 in 50
    Total anthropogenic risk ∼ 1 in 6
    Total existential risk ∼ 1 in 6"
    - Toby Ord, "The Precipice."

    The appearance of covid-19 is the first time in my living memory that a single event has created massive seismic ripples across the world stage. It has stunted economies, led to massive political fights, misinformation campaigns, and a rise in an appreciation of authoritarianism, and at the time of this writing it has taken nearly a quarter million lives. We are, as can be seen here, still vulnerable to pestilence. Of course, we must be aware that these vulnerabilities, this unprotected underbelly of our global societies, are susceptible to many more dangers than covid-19. The Precipice is a great study of these dangers.

    This book continues the tradition of viewing humanity in its history of deep time - that being the history since its inception some 200,000 years ago in Africa from arboreal apes and other hominids to the future. This carries on the tradition popularly expressed in the books of Yuval Harari and others, arguably like Sagan, Tyson, Asimov, and at times Smil, who take a grand cosmic view of our species. Ord describes a bit of our history and soon moves into an explanation of our potential futures. These futures range from the total annihilation of the species to one of his musings early on in the book, where he says: "The future of a prosperous humanity is extraordinarily bright". This schizoid perspective stems from his reflection on what he sees as our prospects. He joins, of course, the popular trend these days of futurists whose idea is that now is the most important time in history. This is because we are presently close temporally to a massive list of breakthrough innovations, from biotech in CRISPR, deep brain stimulation and prosthetics, to robotics and A.I. We are at the infancy of these technologies, but further, we are at the infancy of our wisdom with the possibilities of very advanced tech.

    He further advances his thesis by discussing our ever increasing ability to harness power, which naturally brings to mind Smil's grand text: https://www.goodreads.com/book/show/3.... All in all the thesis stands: we are at a point in history that he refers to as the precipice - a point where, given our technological progress, we can choose to prevent ourselves from falling into massive disaster, or we can trudge forward ignoring all possible signs of danger. If you look at the quotation from the book above, you will see the many possible ways in which we as a species can reach an existential risk, which can strictly be defined as either our extinction or a disaster which sends the few surviving members of our species into the technological equivalent of a dark age. Now of course, one may say that is a bit melodramatic, but he makes the case powerfully.

    The moment our cosmologically young species reached this point in history came shortly after Rutherford called the liberation of energy from the atom moonshine, and Einstein and Szilard sent out the famous letter to Roosevelt. It was at the first testing of the atomic bomb. At this point in history it became clear that we as a species had given ourselves a distinct ability to bring about our own destruction. Or to quote the oft mentioned line mumbled by the late leader of the Manhattan Project - Oppenheimer - upon reflecting on the bomb's creation: "Now I am become Death, the destroyer of worlds". The proverbial doomsday clock moved closer to midnight at the moment of its testing. And of course, beyond Hiroshima and Nagasaki, when nuclear information not only spread but the curbing of nuclear proliferation failed, certain moments made it clear that our species was in danger. First the story the book opened with: https://en.m.wikipedia.org/wiki/1983_... And of course the Cuban missile crisis, which some have said we survived not by the genius of Kennedy, nor by the theory of MAD being a deterrent, but by dumb luck.

    Sadly, though, it is not only nuclear missiles which could cause an issue but, in his eyes, the possibility of an engineered pandemic (no, I'm not standing by the idea that covid-19 was built in a lab), climate change, a great war between nations, and possibly most frighteningly, A.I. alignment - which, in short, is the idea that A.I. may be built with the wrong initial programs to function well with humanity. This is best expressed in the following book from Stuart Russell: https://www.goodreads.com/en/book/sho... Overall, in his assumption, our greatest danger is ourselves.

    I should end this review by admitting that despite the grim nature of the topic, Ord was actually quite optimistic. Like in the following two books written by Pinker: https://www.goodreads.com/book/show/1... https://www.goodreads.com/book/show/3... ...Ord discusses what appears to almost be a moral compass of humanity's improvement, and that by most metrics humanity has improved. He then extends the same idea forward, saying that perhaps we can save ourselves before it is too late and come up with the social infrastructure to handle all risks to our species, natural and artificial, nuke and asteroid, by simply working together and focusing on the right things - just as, in his eyes, we have worked together and abolished a great deal of violence and infectious disease, increased access to drinking water, decreased child mortality, etc.

    He states that existential risk is a fairly new issue, and that for most of human history this was not a thing people had to worry about. In his eyes, its study began with early letters such as the one from Bertrand Russell and Einstein warning about nuclear missiles: https://en.wikipedia.org/wiki/Russell.... He hopes that at some point humanity may gain wisdom at the level of our great intellect, and perhaps we can save ourselves from the oncoming peril. I should add that this is quite a weird book, in that he does not keep things hypothetical but recommends exact websites to help humanity prepare for a possible hellish storm: Effectivealtruism.com and 80,000 hours.com. He even mentions possible solutions. Here are some: better communication across great nations may decrease the chance of nuclear war (like the red telephone in the Cold War); create a world government, or a third party to be a watchdog for threats; fix institutions related to risk - we can fix the WHO and its ability to respond to pandemics (yes, he said this before covid); screen DNA synthesis for dangerous pathogens. He then goes on to mention jobs that the reader can pursue. If you're into computer science, you can study A.I. to make sure things don't go to shit, and if you are in bio/medicine you can switch over to studying pandemics (I know..). Overall, this was a brilliantly researched book, and I highly recommend everyone read it.
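The per-risk figures quoted in this review do not simply add up to Ord's totals; the book treats the risks as overlapping, and the 1-in-6 figure is an all-things-considered judgment. For rough intuition only, here is a minimal Python sketch that combines the individual estimates under an independence assumption (an assumption Ord does not make); it happens to land in the same ballpark:

```python
# Naive combination of the per-risk estimates quoted above, assuming the
# risks were independent -- an assumption Ord explicitly does NOT make;
# his 1-in-6 total is a judgment call, not the output of this formula.
risks = {
    "asteroid or comet impact": 1 / 1_000_000,
    "supervolcanic eruption": 1 / 10_000,
    "stellar explosion": 1 / 1_000_000_000,
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "other environmental damage": 1 / 1_000,
    "naturally arising pandemics": 1 / 10_000,
    "engineered pandemics": 1 / 30,
    "unaligned artificial intelligence": 1 / 10,
    "unforeseen anthropogenic risks": 1 / 30,
    "other anthropogenic risks": 1 / 50,
}

survival = 1.0
for p in risks.values():
    survival *= 1 - p  # probability of dodging each risk in turn

total = 1 - survival
print(f"naive independent total: {total:.3f} (~1 in {1 / total:.1f})")
```

Run as written, this prints a total of roughly 0.18, i.e. about 1 in 5.6, close to the book's 1-in-6 headline even though the combination rule is not Ord's.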

  6. 4 out of 5

    Ben Chugg

    It has taken me some time to sort through my feelings on this book. Toby Ord is smart, articulate, and makes a compelling case for longtermism: the thesis that future generations deserve moral patienthood and should be factored into our moral calculus; that we should take their welfare seriously, and that ensuring the future goes as well as possible should be one of our moral concerns, if not the largest. Clearly, the consequences of adopting this worldview could result in a reshuffling of our priorities. It is thus a thesis worth examining, and one of which everyone should be made aware. Future generations are certainly neglected in many, perhaps most, of our institutions and decision-making processes. While they inherit the status quo, they have no say in creating it. They have no representation in government, no power to protest, no ability to vote. While this situation is difficult to remedy, one way in which we can help future generations is to ensure their existence by not destroying humanity; i.e., we should mitigate existential risks. Cataloguing such risks and ordering them by their likelihood is the theme of the book.

    However, some of the reasoning style and the underlying logic is ... odd. The book is steeped in Bayesian epistemology (unapologetically, I believe, for this seems to be the implicit philosophy in effective altruism circles), and draws no distinction between objective and subjective probability. Ord begins by examining "natural existential risks" (e.g., super-volcanoes, asteroids). To get a handle on their likelihood, he uses data based on previous frequencies (how often volcanoes erupt, how many asteroids hit earth, etc.). Upon switching to "anthropogenic risks" he ditches frequentism (there are no data points for nuclear armageddon) and adopts subjectivism: he tries to quantify his belief that the world will end in one of several ways. This switch is subtle and is left unacknowledged. Indeed, both kinds of probabilities are expected to be treated in the same way by the reader (as exemplified by Table 6.1, in which all of these numbers are compared). Bayesians will undoubtedly respond that there is no other way to reason about such one-off events; subjective probability is all we have. I am becoming increasingly convinced that this is wrong, and moreover, quite dangerous. The alternative, I think, is to acknowledge that some future scenarios are steeped in so much uncertainty that attempting to generate the most accurate number to capture the future (i.e., a prior) is quite pointless. Numbers are not primitive; ideas and arguments are. Knowing Ord's subjective belief that AI will take over the world is unhelpful; knowing his best argument is. At times the book is filled with such arguments; at others, arguments are replaced by priors, a move I find unprofitable. (I should say that I was once sympathetic to Bayesian reasoning, but have been slowly beaten over the head with alternatives. Many of the above criticisms come directly from those conversations.) In sum, the book expounds an important idea, but analyzes it in worrying ways.
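To make the review's frequentist/subjective distinction concrete, here is a minimal illustrative sketch; all numbers are invented for the example and none come from the book:

```python
# Two kinds of "probability" the review contrasts. Illustrative numbers only.

# Frequency-based estimate: a rate inferred from an observed record,
# using Laplace's rule of succession to avoid a zero estimate.
observed_impacts = 0          # hypothetical: large impacts seen in the record
record_years = 100_000_000    # hypothetical length of the record
per_year = (observed_impacts + 1) / (record_years + 2)
per_century_freq = 1 - (1 - per_year) ** 100

# Subjective credence: for a one-off event with no frequency data, the
# number directly encodes a degree of belief rather than an observed rate.
per_century_subjective = 0.05  # e.g. "I am 5% confident X happens this century"

print(f"frequency-based per-century risk: {per_century_freq:.2e}")
print(f"subjective per-century credence:  {per_century_subjective:.2f}")
```

The two quantities print side by side in the same units, which is exactly the reviewer's complaint about Table 6.1: once rendered as bare numbers, nothing distinguishes an inferred rate from an asserted belief.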

  7. 4 out of 5

    Vidur Kapur

    This is a beautifully written work that calls on humanity to secure its longterm future, reflect on what it wants to achieve once secure, and then go on to fulfil its potential. It's rigorous and well-sourced, with a huge proportion of the book being taken up by appendices and endnotes. It contains mathematics, but it's accessible to an educated non-specialist. Overall, it makes a strong case for the proposition that there are "possible heights of flourishing far beyond the status quo", and that "our descendants could have aeons to explore these heights". It is therefore urgent, Ord argues, that we direct more of our resources to tackling risks that could jeopardise this future. Many of these risks have only arisen relatively recently; as a species, we've acquired tremendous power, without commensurate wisdom. My only quibble is that the book could have explored in more detail why it is that many think the future will be net-positive, and more explicitly answered some of the objections to focusing on existential risk reduction as opposed to, say, moral circle expansion. That is, should we focus on securing humanity's future, or on making it better conditional on it continuing? Ord does make some arguments in this space: looking at historical trends which suggest that humanity has made moral progress; talking about the idea of preserving option value; distinguishing between broad interventions and narrow interventions; and making the case for a "Long Reflection" on our values and goals after we've achieved existential security. For example, if humanity's longterm potential is destroyed, it's irrevocable. Surely it is better, Ord argues, to let our descendants make judgments that we're not in a position, epistemically, to make. And to take the third example, Ord doesn't advocate solely focusing on "narrowly targeted interventions"; as he notes, existential risk "can also be reduced by broader interventions aimed at generally improving wisdom, decision-making or international cooperation", and "it is an open question which of these approaches is more effective". Indeed, because Ord doesn't narrowly focus on extinction risk, but rather on existential catastrophe as a whole, which includes scenarios involving an unrecoverable collapse or an unrecoverable dystopia, there may be more overlap between existential risk reduction and moral circle expansion than has sometimes been assumed. Overall, this is a book packed with interesting information, with insights from physics, economics, and moral philosophy that the reader won't have encountered before. Highly recommended.

  8. 5 out of 5

    Tony Milani

    This book is generally optimistic about the future potential of humanity, and provides some useful historical context on past events that represented possible existential risks that turned out (obviously) not to fully materialize. If this weren't published and read at a time of global crisis where institutions around the world, including those Ord places as central to the safeguarding of humanity's future, are collapsing or otherwise revealing their lack of concern for knock-on effects related to greed and political expediency, I might not be so down on it. The prescriptive roadmap he lays out is sensible but comes with the huge caveat of requiring international cooperation and goal alignment to a degree that appears more and more unattainable in the current political climate. The writing is also very dry. This book was not an engaging read, and only a little more than half of the physical real estate in the book is actually "the book," with the remainder being appendices, notes, and citations.

  9. 4 out of 5

    Luke Freeman

    How will our species survive and thrive? This is one of my favourite non-fiction reads to date. I love how Toby Ord leaves hype behind and uses sound reasoning to lay out the case for existential risk reduction. He has very strong arguments and an enjoyable writing style. He makes the case that existential risks are some of the most important and neglected problems we face. He covers major risks from nuclear and biological weapons to climate change, pandemics, artificial intelligence, asteroid impacts, and more.

  10. 5 out of 5

    Kyran

    Solid, scholarly work that provides a helpful framework for thinking about 'existential risk' - major and irreversible damage to humanity's potential (i.e. extinction or permanent civilisational collapse). Covers topics like nuclear war, climate change, pandemics and unaligned artificial intelligence. The book's coverage of the last two is particularly interesting, and Ord believes the latter to be quite significantly the most dangerous of the risks presented (to an extent which surprised but mostly convinced me). This is an emerging cross-disciplinary field and The Precipice is a great introduction to the issues at stake.

  11. 5 out of 5

    Nilesh

    The Precipice nicely catalogs existential risks. It does a commendable job in separating the genuine extermination events from others that could be catastrophic but won't result in our complete annihilation. That said, neither of the two things that the author attempts results in a lasting impression or change for ordinary readers:

    The author lists about a dozen different things that could wipe out humanity. Expectedly, most readers are likely to be well aware of all these nightmares. The individual sections are brief, with hardly any new details compared to what one may know from general newspaper articles on those subjects or even from Hollywood movies (almost every one of those risks has had movies made about it).

    The section quantifying the risks is not only extraordinarily subjective but nihilistically pointless to a degree. The author is right in explaining why the subjective nature of the quantification should not become a reason for not doing the exercise. Yet this does not remove the fact that these probabilities are not of much use to ordinary readers. And it also does not eliminate the need for theoreticians to find a way to agree on the ranges, rather than each one espousing his or her own set.

    It is sensible for the author to appeal for far more resources to prepare us against the worst of these risks. Yet the fight against these is unlikely to be top-down, with some big global institution fighting all of them centrally. The work will be bottom-up, decentralized in various nations and regions, in different set-ups for different risks, and continually evolving. For example, it is fanciful to assume that those protecting against volcanic threats should work under the same umbrella as those highlighting the need for environmental clean-up or monitoring asteroids. When one looks at the bottom-up work being done in the fields mentioned, the resources devoted are nowhere near as pitiful as the author makes them out to be. Undoubtedly, a lot more needs to be done on the larger existential threats - most of which are anthropogenic - like climate change, pandemic threats, containing AI, and global disarmament.

    Like every individual human, every life form or species, just like the stars and the galaxies or even the universe, our race will end one day. We still need to ensure - as the author says - that most of our race's evolution is in the future and not behind us. The book should be commended for picking up on this critical point.

  12. 4 out of 5

    Lina

    I have never thought about the future in terms of millions of years ahead - now I do. Toby Ord's book makes a lucid, well-argued case that now is the time to think about the future of humanity, if we want it to stay on the right track. Yet I found myself struggling to accept Ord's idealistic view, which felt romantic and over-idealised to me. Bottom line: the only thing stopping us from achieving this glorious future is temporal discounting. Once we free ourselves from this bias and start considering, planning and adjusting our behaviour accordingly, all will be well. This fails to account for the fact that temporal discounting is one of the most powerful, deeply entrenched human cognitive biases. On average, we simply cannot think differently - what matters is here and now, and maybe a little bit in the future, but whatever is on the far horizon is less important (as Ord says, this is residue from our early days as a young species). Unfortunately, observing politics, economics, management, and other aspects of life (even science), I would be inclined to say that we cannot overcome that way of thinking in time - and that's before we even address the issue of multiple disciplines, theoretical and applied, working together for a common goal! Just see what happens annually during a Climate Summit - not exactly inspiring us with hope for humanity's ability to overcome the climate emergency, not to mention any other related emergency. This will make it hard, if not impossible, to safeguard thousands or more years of humanity. Ord's vision is glorious - I wish I could share it!
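A minimal sketch of the temporal discounting this review describes, with an arbitrary illustrative discount rate (not a figure from the book), shows why discounting makes the far future count for almost nothing:

```python
# Exponential discounting: the value today of one unit of welfare t years
# ahead is v(t) = 1 / (1 + r)**t. Even a modest annual rate drives the far
# future toward zero, which is why discounting and longtermism conflict.
r = 0.03  # illustrative 3% annual discount rate (not from the book)

for t in [10, 100, 1_000, 10_000]:
    v = 1 / (1 + r) ** t
    print(f"welfare {t:>6} years ahead is worth {v:.2e} today at r={r:.0%}")
```

At 3%, welfare a century out is already worth about a twentieth of present welfare, and welfare a millennium out is worth effectively nothing, which is the bias the reviewer doubts we can shed.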

  13. 4 out of 5

    TheBookWarren

    4.25 Stars - This really is a cracker of a book!!! From the moment I picked up Toby Ord's nonfiction on the state and future of humanity, I knew full well it was going to be something to sink your teeth into. But I was wrong, because it's more than that: it's something to behold, to intrigue, to ponder and cogitate with at night, and most of all it gives you the urge to immediately share it with as many people as you possibly can - because the more that read this modern literary stunner, the more that see the same "Precipice", the same signs that we've indeed reached that precipice. So what are we going to do about it? There are components of Ord's premise that I disagree with, no doubt, and I for one feel the world is in nowhere near as grim a state as many believe, but it's vital a guiding coalition is built toward correcting a number of environmental and social challenges before it is too late!

  14. 4 out of 5

    Kolumbina

    A well written book by Australian researcher Toby Ord (who lives and works in Oxford, England) about the potential existential risks which humanity could experience. Really interesting, a rich book, heaps of references and appendices, an easy read; I found it very disturbing. A useful book - everyone should read it, especially now, after the coronavirus pandemic.

  15. 4 out of 5

    Patrick Kelly

    The Precipice by Toby Ord
    - [ ] An effective altruism book
    - [ ] We have time but it is running out
    - [ ] Is there hope? Toby is optimistic but worried
    - [ ] In all areas - literacy, life expectancy, quality of life, standard of living, etc. - there have been dramatic improvements across the history of the human species. Each revolution (agricultural, scientific, industrial, technological) has brought about these improvements. Before the industrial revolution, 19 out of 20 people lived on less than $2/day; now it is only 1 out of 10. Before the scientific revolution, few people knew how to read; now there are few that don't. These revolutions have dramatically improved human life. (One could also make the argument that capitalism instead of communism/fascism has been a great driver of this improvement.) We are destroying the planet and destroying ourselves, but our lives have gotten better.
    - [ ] The world changed when the atomic bomb was created and dropped. We have these weapons, and the longer we live, the more likely it is that they will be used again.
    - [ ] I live in a post-Cold War world; nuclear weapons have never been a big threat to me. I always laughed at Iran/North Korea getting them. But the more I read Sagan/effective altruism/foreign policy stuff, the more seriously I take the issue of nuclear weapons and nuclear nonproliferation
    - [ ] He thinks that in the 20th century there was a 1:20 chance of extinction. In the 21st century there is a 1:6.
    - [ ] A moral argument/awareness of future generations. An obligation to survive. There are far more people with the potential to live than there are people that have lived. By destroying ourselves now, we are destroying the future. It is the future that we must think of, and we owe it to our ancestors to pass it on
    - [ ] It is very spiritual/aware/beautiful
    - [ ] His parents tell him that he does not repay them, he just passes it on
    - [ ] We have to think about the collective, of the whole, the group, and not individual survival
    - [ ] Existential event
    - [ ] Civilization collapse event: an event where civilization collapses, similar to the events described by Graham Hancock. Ord states that even with Europe losing 25-50% of its population due to the Black Death, civilization did not collapse and Europe recovered. It's unlikely that civilization could completely collapse and not recover; if it did, then extinction would follow. Based on what I have read from Hancock, I don't fully support Ord here
    - [ ] Extinction event: humanity is lost and all or almost all humans die. This could come in a single event (nuclear war) or a combination of events (pandemic/economic collapse/climate change)
    - [ ] Ord believes that we have the potential for millions of future generations and to live for billions of years
    - [ ] Sagan and two others are mentioned for their work on global catastrophic risks/nuclear war/our cosmic significance. (I have to check who the other two people are)
    - [ ] Arguments about why global catastrophic risks (GCRs) are not funded more/given enough attention/etc. Relevance, personal impact, politics, etc.
    - [ ] Now on to the issues
    - [ ] Asteroid/comet threats are the best addressed and studied issue. They are well funded, well understood, and given the proper attention. We have actually identified 95% of the 1km-10km objects and almost all of the 10km objects.
    - [ ] Supervolcanoes
    - [ ] The precipice began at the Trinity test
    - [ ] Another big section on nuclear war/nuclear winter
    - [ ] Climate change: he does not see climate change being a direct cause of existential risk, because he believes that even in the worst case some humans will survive, but believes that other factors around climate change could be a cause.
    - [ ] There is a thing about humans needing a proper ambient temperature to regulate our body temperature, with regards to sweat and our system. It has to do with a combination of humidity and temperature. There will be a point where certain parts of the world are uninhabitable without AC.
    - [ ] There is the possibility of a 9-13 degree rise by 2300. The climate models are highly unpredictable and there are many factors to consider.
    - [ ] Runaway greenhouse effect. Permafrost will partially melt and that will be a factor in climate change - producing at least double the emissions that we already have by 2100, with the possibility of producing 5-13 times more.
    - [ ] Biodiversity loss
    - [ ] Climate degradation
    - [ ] The climate stats and factors are extreme. I disagree with him and I believe it is an existential risk.
    - [ ] There is a big problem with climate change in that we could currently be locking ourselves into a situation we can't reverse, where the situation will be existential and we are powerless to stop it
    - [ ] Pandemics: the Black Death killed 25-50% of the European population. Europeans coming to the new world could have killed up to 90% of Native Americans and 10% of the world population.
    - [ ] Bioweapons have been used throughout human history
    - [ ] 15 countries have created biological weapons programs, with Russia being the biggest and at one point employing over 9,000 scientists on it. Why the fuck is this not talked about more?!?!
    - [ ] Biological weapons: trying to make diseases more deadly and more easily spread
    - [ ] Biosecurity is weak, and in the past 50 years there have been multiple instances of smallpox and anthrax getting out of the lab and infecting people. This is terrifying
    - [ ] There is little investment and weak accountability in biosecurity. The Biological Weapons Convention is weakly enforced and not well funded. This book has made me realize how big an issue biological weapons are and how underserved they are.
    - [ ] Norman Borlaug and his wheat may have saved billions of lives
    - [ ] AI: the people that are most worried about it are the AI experts. AI could effectively become immortal by hiding in computers around the world. It could access bank accounts, social media, government systems, and surveillance. It could then blackmail and pay off anyone in the world. This sounds like Ultron. Fuck, I don't understand AI, but every time I learn about it, it sounds terrifying.
    - [ ] Most experts think it is decades away, not years. Many think that there is a 5% chance of existential risk from AI. That is an insanely high number: a 5% chance that the end result of AI is an existential catastrophe
    - [ ] Anthropogenic risks
    - [ ] Totalitarian dystopian regimes
    - [ ] Unexpected risks caused by humans are a big threat, i.e. the next nuclear bomb or a yet undeveloped technology
    - [ ] Here are the numbers:
    - [ ] The five biggest threats over the next century: nuclear war, climate change, environmental degradation, unforeseen human-created risk, AI
    - [ ] He puts AI at 1:10
    - [ ] Overall possibility of an existential risk event by the end of this century: 1:6, but it can rise to 1:3
    - [ ] He does have hope and believes that humans can pull back from the brink, but it will take effort. Currently we are playing a game of Russian roulette
    - [ ] There is some simple math that he is using but I don't have the details for that
    - [ ] Some effective altruism principles, mentions of possible careers, and how to organize risk management/addressing risks
    - [ ] Our current goal is to prevent an existential event; once we do that, then we can secure our long term future. But between them is something called 'the great reflection', a time when humanity can reflect and chart a collective path forward. Once we are secure, then we choose where we go.
    - [ ] This seems like philosophical idealistic babble. It sounds like an ideal world where humanity can come together and take steps forward. I don't believe this will ever happen. I don't believe that we will ever get rid of existential risks, and I don't believe that we will ever come together and chart a collective path forward. I only believe this can happen after a massive cataclysmic event that forces the remaining bits of human civilization to reckon with what it has done and then, in solace, move forward, crippled. Basically when the United States decided to go on the offense in the Zombie War and eventually rallied most of the world to do the same.
    - [ ] Tragedy of the commons and game theory. Believes that the pace of technology is outpacing our ability to slow it down. Thus we must build altruistic/positive technology to prevent existential risks, instead of building the technology that causes them
    - [ ] Some 80,000 Hours career stuff
    - [ ] We are a young species. The horseshoe crab has had an unbroken line for 400 million years. Blue-green algae has been around for over two billion. Wait, when did life first appear and when did complex life appear?
    - [ ] He starts to talk about the cosmos, exploring the universe, the massive potential long term survival of our species on a hundreds-of-millions-of-years scale; he mentions Pioneer and Voyager; he sounds hopeful; he is writing like Sagan and even using similar phrases. This is the first time I have really seen Sagan's writing/professional/non-celebrity work directly influence another academic
    - [ ] The average lifetime of a species is 1-10 million years. Ord is saying we have the potential to live billions of years, almost as if our species can become effectively immortal (if we prevent existential risk). I have never heard these ideas seriously discussed
    - [ ] The observable universe is spread out over 46 billion light-years
    - [ ] We could explore our entire galaxy in 100 million years
    - [ ] This is all fun to imagine/play with, but the reality is that we have a 1:6 chance of an existential event happening in the 21st century. Fuck the stars and exploring the universe; we have to make our stand now and fix/save/prevent existential risk now, here on earth
    - [ ] He does end on a hopeful note. He is cautiously optimistic about our future. He loves humanity and loves our potential. He loves what we can do and could do. He speaks about the love he has for his daughter and all of the beautiful moments that life has. That there can be and will be more. That if we prevent existential risks, there should be nothing stopping us. He even talks about our evolution, the possibility of biotechnology, a cognitive evolution. He is very aware of animals, different experiences, things that we don't know but could; he thinks of the collective. In many ways he is a 'cosmic humanist,' similar to Sagan. I am a fan
    - [ ] I enjoyed this book and would recommend it to others. It is accessible, well written, direct, dire, clear, and hopeful. It is not arrogant, off-putting, or highly academically written, in ways that other EA literature is. I would like to see the EA movement follow Toby's lead

  16. 5 out of 5

    Max

    Interviewer: Suppose that your life's work ended up having a negative impact. What's the most likely scenario under which this might happen?

    Ord: [...] I think people underestimate how easily this can happen. [...] The easiest way this could happen is your work crowding out something that is better. I thought a lot about that when writing this book. I really wanted to make sure that there wasn't anyone else who should write this book instead of me; I talked to a lot of people about it. Because even though you produce something really good, if you crowd out something extremely good, your overall impact could be extremely negative.

    This should give you an idea of the level of care Toby Ord took with this phenomenal book. The only imaginable extremely good book that got crowded out by "The Precipice" is "The Precipice - Now with Improved Footnotes". There are so many of them. In the end I just read all the footnotes after finishing a chapter, so as to avoid the disappointment of another source citation or "I owe this point to Nick Bostrom". I'm deeply impressed with Toby Ord and the research team behind him. This book does a great job of making the importance of our long-term future palpable and sketching out the risks that we have to overcome in the coming centuries. I think I'll leave it at that. Just another quote that I just "loved", one that's emblematic of the book, and also of the grace with which humanity is handling existential risks so far:

    A technician [at a bioweapons lab in one of the Soviet Union's biggest cities, Sverdlovsk,] had removed a clogged air filter for cleaning. He left a note, but it didn't get entered in the main logbook. So they turned the anthrax drying machines on at the start of the next shift and blew anthrax out over the city for several hours before someone noticed[, killing at least 66 citizens]. In a report on the accident, US microbiologist Raymond Zilinskas (1983) remarked: 'No nation would be so stupid as to locate a biological warfare facility within an approachable distance from a major population center.'

    Humanity today showcases too much "No civilized species would be so stupid as to". Toby Ord did a great service to helping us make some smart corrections to our path going forward.

  17. 5 out of 5

    Peter

    This is one of my favorite books. Delightfully written, inspiring, and with lots to learn even for folks well-read on the incredibly important issue of longtermism - how civilisation can not only make it through the 21st century but go on to flourish.

  18. 5 out of 5

    Julian Schrittwieser

    Very timely book, chock full of appendices and notes if you want to dig deeper.

  19. 4 out of 5

    James Giammona

    A clear and compelling introduction to the philosophical arguments and scientific research on existential risks. I enjoyed the exploration of existential risks from various moral frameworks, and the calculations of asteroid risk, supervolcano risk, and the general natural existential risk implied by the longevity of mammalian species or the observation of mass extinction events. His examination of anthropogenic risks was also quick but good. I appreciate that Toby gives his current belief that total existential risk this century is 1 in 6. However, such an extremely high value (driven mainly by AGI risk) requires, I think, more support than he gives. While I am sympathetic to his viewpoint, I think his stated arguments are weak. Of course, he has a lot of hard-to-convey reasons to believe this, but the arguments here were certainly less clear and compelling than other sections. The Grand Strategy of Humanity section was again clear (Survive, Figure out what is worth doing, Thrive) but quite short, and the advice on ways individuals can contribute felt a bit trite. The last section on Grand Futures was quite clear and good, although it unfortunately shied away from fully embracing some of the more radical implications of using all that material and energy to run computation. The discussion spent a long time on preserving Earth, which I guess is the way to introduce these ideas to the conservative layperson. I'm looking forward to Anders Sandberg's deep dive into this topic. Overall, I'd rate it as of similar quality to Bostrom's Superintelligence. I think it succeeds in introducing all of these ideas to an audience that hasn't encountered them before. I'd recommend it to anyone wondering what's really worth doing!

  20. 5 out of 5

    Jonny

    It was a really welcome break reading something so optimistic about humanity’s ability to transcend its own current reality, and to plan sufficiently far ahead so as to create time & space for a more ambitious future. Ord makes a lot of really good, intuitive points (the world as it exists today would have seemed as unimaginable to people 800 years ago as the realities of interstellar travel would to us today - if not more so). But it’s still hard to avoid feeling that his premise is just too optimistic. When we make massive scientific and technological leaps, it isn’t normally because we’re following a plan - it’s because the opportunity presents itself and someone takes it. That may well be the best way of driving human progress - but it’s definitely not compatible with the sort of strategic approach to worldwide coordination of priorities that he has in mind.

  21. 5 out of 5

    Stephanie Guerra

    This book really changed me and how I think about the world and the future of humanity. During a time of unprecedented collective action, it made me consider what role I wanted to play in safeguarding against existential risk. It was an oddly comforting read, and an important one. It sparked many more questions than answers. Highly recommend - I don’t often give five stars.

  22. 5 out of 5

    Thomas Margot

    It was a difficult book to get through. Not because the topic is difficult: in fact, it's relatively simple to get the message. There are all kinds of possible disasters, natural disasters, man-made disasters, black swan disasters, the list goes on, and the central question is: what can "we" do to not get fucked? "We", also known as "humanity". The scope of the book is big. The author is not talking about what could fix my hay fever, but about what could screw humanity over in the coming centuries and millennia. The reason why this is important has to do with our "potential": what can humanity achieve in the massive periods of time that still remain for our species if we don't fuck it up? That's why the title of the book is about a "precipice".

    The author sees humanity as currently being in our puberty. We have some technological advancements, enough to ruin ourselves, and we don't yet fully know how to deal with them. We are apes with very dangerous sticks. In the coming centuries, according to the author, we will learn how to deal with these powers and navigate the risks, or we will fall off the narrow road of life into the "precipice" (the cliff), creating our own undoing. Worst case, there is an extinction event where we manage to completely ruin humanity's future potential. Best case, we get through puberty unscathed and become responsible adults. Should humanity ever reach the stage of responsible adulthood, our potential is unlimited: on a sufficiently large timescale, the galaxy will be ours for the taking, and humanity might become an immortal species, spread out across the planets. But to do that, we have to survive our puberty first.

    The author takes his time and discusses all currently known potential risks in detail. This made it quite a depressing book to read. I had to put it down multiple times because it just made me sad, all the things that can go wrong. The word "risk" was repeated again and again and again until I could read the word no more. But I suppose in a book like this, it's unavoidable. I was going to give the book 3 stars, because the question posed is interesting and it's good to be kept up to date with the things that can fuck us. However, I also felt it became quite repetitive after a while - you kind of "get the message" relatively early in the book. The book also counts 467 pages, but only 240 of them are the actual "book". The rest is appendices and notes. Good for further reading, but I had enough. I bumped it up to 4 stars, however, because in the last chapter the author sketches a future for humanity that could become reality if we don't screw ourselves over in puberty, and it inspired me. After all, while reading the book I thought a couple of times: "All right Toby, but if we fuck up and humanity dies, is it really that bad anyway? What is the inherent value of humanity? Why should I care?" And in the last chapter the author managed to explain to me why the survival of humanity is actually important, not just for us, but for all living things.
And that must not have been an easy task. So my final words are: Go humanity! Go life!

  23. 4 out of 5

    Talbot Hook

    You know, people often say things like: "this is an important book", and, half the time, it's not really an important book. But this is an important book. Objectively, even, because anything you think worthwhile in life depends on someone being around to experience it. If there are no humans, then what are Plato's Republic, Morrison's Song of Solomon, or Joyce's Ulysses? How can we strive for a more just society, more ravishing art, or even larger hi-def TVs if no one exists? All of these things that we care about are necessarily predicated on us being alive to enjoy and pursue them. There's no getting around it. So, as that is the subject of this book, you should probably take it rather seriously.

    The basic premise is this: humanity stands upon the Precipice, gazing out at the landscape of possible futures. How will we get there? Are we even capable of getting there? Well, that depends. In Ord's parsing of the issue, we have a few centuries to get our act together before we as a species bite the dust. Why is this time period so critical, as opposed to two thousand, or even two hundred, years ago? Well, in short, we didn't use to be able to destroy ourselves, and now we can. Humanity has harnessed so much destructive power that, if we're not exquisitely careful, we might end up being our own undoing. Ord also reminds us that as our power has grown, we haven't necessarily seen a commensurate increase in wisdom or prudence. In fine: we are angst-ridden adolescents with nuclear armaments.

    And this is not the only cause for alarm. Ord begins his cavalcade of calamities by discussing natural existential risks like asteroids, supervolcanic eruptions, and stellar explosions. He then goes on to discuss anthropogenic risks like climate change and environmental destruction. Finally, discussing future risks, the book details how engineered pandemics, unaligned artificial intelligence, and dystopian societal "lock-in" could spell our doom.

    The future sounds fairly bleak, no? But that isn't necessarily true. The risk of most of these things happening is quite small, at least individually. But the human risk-portfolio is wide, and the concomitant risk percentages run higher. And many of the more deadly risks are in front of us: nuclear winter, pandemic, or the runaway greenhouse effect. Without a great deal of thought on these issues, we could be in a very sorry state. Because Ord believes that humanity's potential is universally priceless, the safeguarding of that potential is among our more pressing moral issues. Unless we start to look ahead and plan for our planetary future, we are apt to be taken by surprise. It is absolutely critical that we begin to change our institutions, create new ones, foster international dialogue and research (and fund it), and bring people on board the common project of a truly united humanity. If we make it past the Precipice, the vastness of the universe awaits us.
The 10,000 years of civilization that we've experienced thus far could be the blink of a baby's eye in terms of how long humanity could persist -- and in terms of what we can create, experience, and become.

  24. 5 out of 5

    Andrei Khrapavitski

    The topic of existential risks is not foreign to moral philosophy. You can find articles and notable inclusions of the subject in books by Nick Bostrom, William MacAskill, Peter Singer and others. One of the most famous thought experiments related to this topic was formulated by Derek Parfit in one of my all-time favorite philosophy books, Reasons and Persons. Consider three outcomes:

    1) Peace.
    2) A nuclear war that kills 99% of the world’s existing population.
    3) A nuclear war that kills 100%.

    In Parfit’s view, Outcome #3 is the worst, and #1 is the best. The interesting part concerns the relative differences, in terms of badness, between the three outcomes. According to Parfit, the difference between Outcome #2 and Outcome #3 is greater than the difference between Outcome #1 and Outcome #2, because of the unique badness of extinction. To many people this may be counter-intuitive. And it depends greatly on whether you are an optimist or a pessimist. Toby Ord seems to belong to the group of philosophical optimists. In his view, human history is just beginning. Why does he think that? Humanity is about two hundred thousand years old. For all we know, the Earth will remain habitable for hundreds of millions more - enough time for millions of future generations; enough to end disease, poverty and injustice forever; enough to create heights of flourishing unimaginable today. So if we screw up and go extinct, we will deprive all those potential future people (and maybe posthuman beings) of presumably happy lives. This premise is seen in the works of other contemporary philosophers, especially of the utilitarian kind, and some theoretical physicists, but it is disputed by pessimists like David Benatar.

    The Precipice has three parts: The Stakes, The Risks, and The Path Forward. It also has really detailed appendices where some topics are expanded and argued further. In Part I, Ord warns that we are standing at the precipice. Our rapidly accelerating technological power has reached the threshold where we might be able to destroy ourselves. We are at the point in our evolution when the threat to humanity from within exceeds the threats from the natural world. He also introduces the reader to existential risks, what they are and why we should care. An existential risk is one that either causes the extinction of humanity or triggers a global collapse of civilization, reducing humanity to a pre-agricultural state. What is striking is that this field (is it even a field?) is greatly neglected. Consider the possibility of engineered pandemics, which is one of the largest risks facing humanity. The international body responsible for the continued prohibition of bioweapons (the Biological Weapons Convention) has an annual budget of just $1.4 million - less than the average McDonald’s restaurant, remarks Ord.

    In Part II, he goes into detail about what kinds of existential risks humanity is facing. He also provides estimates of how likely each risk is to occur within the next century.
    He splits these risks into the following groups: natural (asteroids & comets, supervolcanic eruptions, stellar explosions, etc.), anthropogenic (nuclear weapons, climate change, environmental damage) and future risks (pandemics, unaligned artificial intelligence, etc.). Some are very unlikely to happen. Others are much more worrying. But the bottom line is that all these risks are largely ignored. The current COVID-19 crisis is a vivid example of how unprepared we are to deal with a situation that endangers the lives of so many people around the world. No, this coronavirus pandemic is nothing like the themes described in this book, but a much more deadly pandemic, either engineered or natural, is not an unimaginable scenario. In fact, it is among the risks with the highest probability of happening within the next 100 years. How high is the probability? ∼ 1 in 30, according to Ord. Contrast that with an asteroid or comet impact: ∼ 1 in 1,000,000. So yeah, we should probably pay much more attention to pandemic prevention than we currently do. It is therefore unthinkable stupidity to cut WHO funding at this point in time. Will we learn our lessons from the current crisis? Will we pay more attention and fund efforts to prevent catastrophic risks?

    In Part III, Ord explains why we should. He says our strategy should be:

    1. Reaching Existential Security
    2. The Long Reflection
    3. Achieving Our Potential

    According to Ord, while each of our lives may be tossed about by external forces - a sudden illness, or an outbreak of war - humanity’s future is almost entirely within humanity’s control. Most existential risk comes from human action: from activities which we can choose to stop, or to govern effectively. Even the risks from nature come on sufficiently protracted timescales that we can protect ourselves long before the storm breaks. If you are an optimist, this philosophical view makes a lot of sense. Even if you’re a pessimist like Benatar, you would probably agree that it is out of your control to persuade billions of people not to procreate so as not to multiply suffering in this world, and to let humanity go extinct by conscious choice or some act of philosophical volition, assuming you believe in free will. So given that new lives are constantly being created anyway, it would be wise to make more efforts to ensure that these new and future generations, at least, have a chance to enjoy the time they are given and to realize this grand, even if somewhat speculative, potential.

  25. 4 out of 5

    Joeri

    This book convincingly makes the case that we are living in a unique time: one where we can help reduce our risk of extinction, or one where we can exacerbate it. It is especially the former that the author focuses on, and he optimistically argues that the existential risks that we have created can at the same time be reduced by us, if only we prioritize safeguarding our future more. He believes a future where we shall flourish and progress is feasible and realistic, and in arguing for it, he claims that there is strong reason to believe that our future can be exceptionally bright. I agree to the fullest extent with the author that we should help reduce any risk that might cause our extinction or collapse, and I think that great steps indeed need to be taken to alleviate or help prevent (future) suffering. Using science and evidence from multiple disciplines, Ord manages to show which existential risks are most likely to occur, and which of those threaten our existence most.

    While reading the book, I also ran into some doubts and issues. Firstly, when it comes to his use of science and data, I often found I lacked the numerical literacy to coherently follow all of his probabilistic reasoning and mathematical calculations. If this were the case with more readers, it would threaten the book's intelligibility for others like me. Secondly, I wonder if the existential risk posed by climate change is indeed as small as the author argues. Thirdly, I wonder if we indeed have reason to believe that our future will be very good. Another doubt I have is that I'm unsure whether all the risks created by humanity also lie within its control. I sometimes fear that we have set processes and developments in motion that perhaps already lie outside our sphere of control, hence I'm skeptical about man's ability to control everything it initiates or develops. This is made worse by the fact that there are not only unforeseen but also unforeseeable consequences of our technological developments. How can we ever oversee the (potential) consequences of everything we do? Apart from that, I think the case for protecting our future is compelling, and it would be selfish not to take future generations into account when we think, feel and act.

  26. 5 out of 5

    Sebastian

    Bostrom and Toby Ord are colleagues at Oxford's Future of Humanity Institute, and it's no coincidence that The Precipice sits right next to Superintelligence in my Mental Map of Books. I consider this to be a spiritual successor of sorts to Bostrom's work. We begin with the premise that there are all sorts of existential threats to humanity. Ord does a phenomenal job of describing a taxonomy and even estimating the probability of each basket of risks resulting in humanity's extinction in the next century. My takeaway here is that the most salient risks that require our attention are anthropogenic. If The Making of the Atomic Bomb wasn't enough, I am increasingly worried about nuclear terrorism and bio-weapons. Especially the latter. Ord keeps talking about how we monitor facilities synthesizing DNA to ensure nobody is making superbugs. News update: it wouldn't take a genius to avoid these big, monitored synthesis cores entirely, or to engineer something virulent and deadly WITHOUT synthesizing much DNA at all, or to federate out small orders to a bunch of different facilities.

    He goes on to talk about what we as humanity should do about all these risks, which I both like and don't like. I like it because there is actually cogent thinking here and he tries to solve the problem. I don't like it because a lot of the discussion is premised on humanity agreeing on an objective function for the species (the "Long Reflection"). What? That seems about as likely as all the atoms on planet Earth forming one giant molecule. A better discussion here would begin with the assumption that humanity will likely always have dissenters, some of them strong. Clever people who dissent strongly from consensus are probably going to mess up the Earth. If you believe humanity's survival as a species in some form is a high priority, then I think we should invest far more, far sooner, to hedge our bets: time to get off the rock.

    This is a very well-written and well-researched book. Major kudos to Ord for that.

  27. 4 out of 5

    Arturs Kanepajs

    A great overview and summary of the topic. The last chapter was perhaps meant to be inspiring, but seemed a bit naïve and very anthropocentric. If we don't destroy ourselves, then given the speed of technological progress it seems very unlikely that humans will be willing to confine themselves to their present physical and mental forms, or to adhere to the eclectic and inconsistent moral codes that we try to employ now. Some (e.g. the Voluntary Human Extinction Movement) might think extinction is not all bad. This book appeals to our current moral uncertainty: maybe in time humans will understand better whether extinction is good or bad, and act accordingly. So we had better not die now. Also, almost everyone can probably agree that S-risks (suffering risks) are bad, and these often have the same risk factors as extinction. Not that anyone needs reminders, but (to quote Edward O. Wilson) we have stone-age emotions, medieval institutions and god-like technology. With this equipment, I also hope we manage to muddle through somehow without getting unlucky in the Russian roulette that is this century.

  28. 5 out of 5

    Madeline Zimmerman

    An excellent introduction to the fledgling EA cause area of existential risk, although the appendices should have been included in the main text, as they contain the mathematical proofs and philosophical underpinnings core to truly understanding Ord's argument on why and how we should care about future generations. The Precipice just scratches the surface of existential risk research, but this summary of the existing literature is a helpful starting point for anyone looking to read more about a particular risk or to better understand how they can allocate their time towards mitigation.

    This book is worth reading just to see if you believe Ord's thesis: over the next 100 years, there is a 1 in 6 chance of existential catastrophe. If you do find his argument credible, it really calls into question how we ("we" being the world) can correct for the fact that we've allocated a pathetically small amount of resources towards understanding, let alone improving, these odds.

  29. 4 out of 5

    Emma McHugh

    I loved the energy with which the author envisages humanity's long-term potential. I had been skeptical of 'tin-foil' types who believe unaligned AI might take over the world, but this book explains that even very low probability existential risks should be taken extremely seriously because the stakes are so high. Even though the scope of this book is so ambitious, it isn't overly dramatic and appropriately acknowledges uncertainty. It's also very accessible. I got so much out of reading The Precipice and highly recommend it.

  30. 5 out of 5

    Willem

    Interesting plea to protect humanity (in the broadest sense, not restricted to Homo sapiens) from the 5 biggest risks to the 'flourishing' of moral beings: nuclear war, climate change, other environmental damage, engineered pandemics and unaligned AI. Though when it comes to minimizing suffering, I still feel we have more duties to actual people than to possible future people.
