Our final century? #
“So if we drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in a multitude of ways. We would fail to achieve the dreams they hoped for; we would betray the trust they placed in us, their heirs; and we would fail in any duty we had to pay forward the work they did for us. To neglect existential risk might thus be to wrong not only the people of the future, but the people of the past.”
- Toby Ord
Humanity appears to face existential risks: a chance that we’ll destroy our long-term potential. We’ll examine why existential risks might be a moral priority, and explore why they are so neglected by society. We’ll also look into one of the major risks that we might face: a human-made pandemic, worse than COVID-19.
Alongside this, we’ll introduce you to the concept of “expected value” and explore whether you could lose all of your impact by missing one crucial consideration.
Key concepts from this session include:
- Expected value: We’re often uncertain about how much something will help. In such circumstances, it may make sense to weigh each of the outcomes by the likelihood that they occur and pick the action that looks best in expectation.
- Crucial considerations: It can be extremely hard to figure out whether some action helps your goal or causes harm, particularly if you’re trying to influence complex social systems or the long term. This is part of why it can make sense to do a lot of analysis of the interventions you’re considering.
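The expected-value idea above can be made concrete with a tiny calculation. This is a minimal sketch with made-up numbers, not from any of the readings: it compares a "safe bet" action against a long shot by weighting each outcome's value by its probability.

```python
# Illustrative sketch of expected value; all numbers are invented.

def expected_value(outcomes):
    """Sum of probability * value over the possible outcomes of an action."""
    return sum(p * v for p, v in outcomes)

# Action A: a safe bet -- certainly helps 10 people.
action_a = [(1.0, 10)]

# Action B: a long shot -- a 1% chance of helping 5,000 people, else nothing.
action_b = [(0.01, 5000), (0.99, 0)]

print(expected_value(action_a))  # 10.0
print(expected_value(action_b))  # 50.0 -- better in expectation, despite usually failing
```

On these (hypothetical) numbers, the long shot is five times better in expectation even though it fails 99% of the time, which is the core intuition behind the "hits-based giving" reading later in this session.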
This work is licensed under a Creative Commons Attribution 4.0 International License.
The Precipice (To read: Chapter 2) #
This is a linkpost for https://play.google.com/store/books/details?id=W7rEDwAAQBAJ
The case for reducing existential risk #
This is a linkpost for https://80000hours.org/articles/existential-risks/
In 1939, Einstein wrote to Roosevelt:1
It may be possible to set up a nuclear chain reaction in a large mass of uranium…and it is conceivable — though much less certain — that extremely powerful bombs of a new type may thus be constructed.
Just a few years later, these bombs were created. In little more than a decade, enough had been produced that, for the first time in history, a handful of decision-makers could destroy civilisation.
Humanity had entered a new age, where we faced not only existential risks2 from our natural environment, but also those of our own creation.
In this new age, what should be our biggest priority as a civilisation? Improving technology? Helping the poor? Changing the political system?
Here’s a suggestion that’s not so often discussed: our first priority should be to survive.
So long as civilisation continues to exist, we’ll have the chance to solve all our other problems, and have a far better future. But if we go extinct, that’s it.
Why isn’t this priority more discussed? Here’s one reason: many people don’t yet appreciate the change in situation, and so don’t think our future is at risk.
Social science researcher Spencer Greenberg surveyed Americans on their estimates of the chance of human extinction within 50 years. The results showed that many think the chances are extremely low, with over 30% guessing they’re under one in ten million.3
We used to think the risks were extremely low as well, but when we looked into it, we changed our minds. As we’ll see, researchers who study these issues think the risks are over one thousand times higher, and are probably increasing.
These concerns have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.
In the rest of this article, we cover the greatest risks to civilisation, including some that might be bigger than nuclear war and climate change. We then make the case that reducing these risks could be the most important thing you do with your life, and explain exactly what you can do to help. If you would like to use your career to work on these issues, we can also give one-on-one support.
Continue reading on 80,000 Hours’ website #
This work is licensed under a Creative Commons Attribution 4.0 International License.
Why experts are terrified of a human-made pandemic — and what we can do to stop it #
This is a linkpost for https://www.vox.com/22937531/virus-lab-safety-pandemic-prevention
Concrete Biosecurity Projects (some of which could be big) #
Andrew Snyder-Beattie and Ethan Alley
This is a list of longtermist biosecurity projects. We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in relative4 terms). While we are confident there is important work to be done within each of these areas, our confidence in specific pathways varies widely and the particulars of each idea have not been investigated thoroughly.
Still, we see these areas as critical parts of biosecurity infrastructure, and would like to see progress building them out. If you’d like to be kept up to date about opportunities to get involved, please fill out this Google Form.
Early Detection Center #
Early detection of a biothreat increases the amount of time we have to respond (e.g. designing tailored countermeasures, using protective equipment, heading to bunkers, etc). The current approach for early warning of novel pathogens is severely lacking—it typically relies on a particularly astute doctor realizing that something is strange combined with negative tests for everything else. Existing systems are also almost exclusively focused on known pathogens, and we could do a lot better using pathogen-agnostic systems that can pick up unknown pathogens.
One concrete goal would be something simple where a small team of people collects samples from volunteer travelers around the world and then does a full metagenomic scan for anything that could be dangerous.5 Even collecting and analyzing only 100 random samples per day could make a big difference in some scenarios, since that would mean we would still expect to catch things before they infect too large a fraction of the global population. We think that with the right team, this could be done with close-to-existing technology for less than $50 million per year.6
There are a handful of bottlenecks and a number of ways to decompose this problem. To get started on subproblems, one of us (Ethan) is working on a list of suggestions, which we will backlink here.
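The claim that even ~100 random samples per day could catch an outbreak early can be sanity-checked with a simple probability sketch. This is not from the original post: it assumes samples are independent random draws from the population being screened, and that the infected fraction `f` equals the chance any one sample is positive (both simplifications).

```python
# Illustrative sketch only: probability that random metagenomic sampling
# detects at least one positive sample, under simplifying assumptions
# (independent draws, constant infected fraction f over the period).

def p_detect(f, samples_per_day, days):
    """Chance that at least one of the samples taken over the period is positive."""
    n = samples_per_day * days
    return 1 - (1 - f) ** n

# If 0.1% of the sampled population is infected, 100 samples/day for a week
# already gives roughly even odds of at least one hit:
print(p_detect(0.001, 100, 7))   # ~0.5
print(p_detect(0.001, 100, 30))  # considerably higher over a month
```

In reality the infected fraction grows over time during an outbreak, which works in the detector’s favour: the longer a pathogen spreads undetected, the faster the cumulative detection probability rises.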
Super PPE #
Most personal protective equipment (PPE) is not good enough. Things like masks and suits require training to fit properly, lack reusability, and are generally designed for routine uses rather than for the most extreme events. The small minority of PPE that is designed for extreme use cases (e.g. BSL4 suits or military-grade PPE) is bulky, highly restrictive, and insufficiently abundant—not the kind of thing you could easily put millions of healthcare/pharma/essential workers into if needed. It seems plausible that with good materials science and product design we could come up with next-generation PPE that is simultaneously highly effective in extreme cases, easy to use, reliable over long periods of time, and cheap/abundant.
One concrete commercial goal would be to produce a suit (and accompanying system) that is designed for severely immunocompromised people to lead relatively normal lives, at a cost low enough to convince the US government to acquire 100 million units for the Strategic National Stockpile.7 Another goal would be for the suit to simultaneously meet military-grade specifications, e.g. protecting against a direct hit of anthrax.
PPE has the advantage of being truly ‘pathogen-agnostic’—we can stockpile it in advance of knowing what the threat is, in contrast to vaccines or many medical countermeasures. It is also ‘defensively stable’ in that physical barriers can’t be easily bypassed using pathogen engineering techniques (whereas many medical countermeasures might be defeated with some creative tinkering). See Carl Shulman’s post here for more on this.
To get started on subproblems within PPE, one of us (Ethan) will publish a PPE deeper dive at some point in the future (backlink forthcoming).
Medical countermeasures #
Medical countermeasures (e.g. vaccines, antivirals, monoclonal antibodies) for use against catastrophic biorisks have a number of existing drawbacks. In most cases they are tailored to existing pathogens (e.g. smallpox vaccines) and wouldn’t help against a novel threat. Many countermeasures are also not robust against deliberate engineering (e.g. antibiotics are broad-spectrum but can be overcome ).
We think there could eventually be opportunities for radically improved medical countermeasures against GCBR-class threats, either by 1) producing targeted countermeasures against particularly concerning threats (or broad-spectrum countermeasures against a class of threats), or by 2) creating rapid response platforms that are reliable even against deliberate adversaries.
However, we are not yet ready to recommend medical countermeasures as a general focus area for large-scale projects, in part because many projects in this space have inadvertent downside risks (for example, platforms that use viral vectors may accelerate viral engineering technology). If you feel excited about working in this area, fill out the Google Form (here) and we might be able to provide you with some more tailored advice.
BWC Strengthening #
Right now, the Biological Weapons Convention (BWC)—the international treaty that bans biological weapons—is staffed by just four people and lacks any form of verification. We think there is more scope for creative ways of strengthening the treaty (e.g. whistleblowing prizes), or creating new bilateral agreements and avoiding bureaucratic gridlock. Moreover, a team of people scouring open sources (i.e. publication records, job specs, equipment supply chains) could potentially make it difficult for a lab to get away with doing bad research, and thereby strengthen the treaty.
Sterilization technology #
Sterilization techniques that rely on physical principles (e.g. ionizing radiation) or broadly antiseptic properties (e.g., hydrogen peroxide, bleach) rather than molecular details (e.g. gram-negative antibiotics) have the advantage of being broadly applicable, difficult to engineer around, and having little dual-use downside potential.
Existing technologies for physical sterilization (e.g. UV light , materials science for antimicrobial surfaces, etc.) have different limitations in terms of costs, convenience, and practicality, and we think this is an underexplored area for prevention and countermeasure development. We have a lot of remaining uncertainties in this area but think the value of investigating it is high.
Refuges #
Existing bunkers provide a fair amount of protection, but we think there could be room for specially designed refuges that safeguard against catastrophic pandemics (e.g. cycling teams of people in and out with extensive pathogen agnostic testing, adding a ‘civilization-reboot package’, and possibly even having the capability to develop and deploy biological countermeasures from the protected space). This way, some portion of the human population is always effectively in preemptive quarantine.
Another way of framing this: lots of people think we’d substantially reduce biorisk if we had a self-sustaining settlement on Mars (and we basically agree). If that’s the case, it would be a lot cheaper to put the exact same infrastructure on Earth, and it buys almost the same amount of protection.
One next step on this would be to build an org which specializes in the operations, logistics and contractor relationships needed to actually build a refuge with the necessary amenities (based on a shallow investigation, one of us, ASB, ballparked outsourcing at around $100-300M / bunker but did not have logistics expertise or time to dig deeper). We have some more ideas in the works we’ll backlink here later, but please fill out the form if you’re interested in the meantime.
Conclusions #
A few things we want to highlight:
- Collectively, these projects can potentially absorb a lot of aligned engineering and executive talent, and a lot of money. Executive talent might be the biggest constraint, as it’s needed for effective deployment of other talent and resources.
- Many of the most promising interventions are not bottlenecked by technical expertise in biology or bioengineering. Technically minded EAs who want to work in bio should consider training in other areas of engineering, and in general look to build generalist engineering and problem-solving skills rather than focusing only on getting biology knowledge.
- These projects have reasonably good feedback loops (at least compared to most longtermist interventions), making this area a promising proving ground for meta-EA interventions, especially around entrepreneurship.
Despite how promising and scalable we think some biosecurity interventions are, we don’t necessarily think that biosecurity should grow to be a substantially larger fraction of longtermist effort than it is currently. From a purely longtermist perspective, we think that AI might be 10-100x more important than biosecurity, even if solving biosecurity might be more tractable than solving AI (possibly by a large factor). Biosecurity is also attractive as a cause area for non-longtermist reasons, given the importance of preventing lesser catastrophes that fall short of truly civilization-ending but are still horrific (e.g. 10-100x COVID). We thus think it could be relatively even more appealing for those focused on impacts on the current generation(s).
Again, please fill in this coordination form to stay informed of developments and opportunities.
We thank Chris Bakerlee, Jamie Balsillie, Kevin Esvelt, Kyle Fish, Cate Hall, Holden Karnofsky, Grigory Khimulya, Mike Levine, and Carl Shulman for feedback on this post.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Expected Value #
This is a linkpost for https://non-trivial.org/lessons/making-decisions-under-uncertainty
Donating like a startup investor: Hits-based giving, explained #
This is a linkpost for https://www.givingwhatwecan.org/blog/donating-like-a-startup-investor-hits-based-giving-explained
Exercise for ‘Our Final Century?’ #
This session’s exercise is about doing some personal reflection. There are no right or wrong answers here; instead, this is an opportunity for you to take some time and think about your ethical values and beliefs.
What does it mean to be a good ancestor? (10 mins.)
In the exercise from last week, we asked you to write a letter to the past. This week, we’d like you to turn your focus forward, and think about the years, decades and centuries ahead of us.
When we think about doing good - such as by preventing malaria or restoring vision - we often already consider the future effects of our actions. We care about the fact that suffering is not just alleviated in the very moment we administer a vaccine or deliver some medication, but that recipients enjoy the benefits over the next days, weeks, months and hopefully years of their life. Similarly, it seems intuitive that parents have a moral responsibility for their children, and that safeguarding their well-being is a key priority, especially while they can’t take care of themselves. But what about our grandchildren and great-grandchildren? Or the generation after that?
Whether or not you think that your personal responsibility stops at a certain point (there are legitimate reasons why it might), embedded in this idea is the concept of being “a good ancestor”. Entire books have been written about the notion that a key priority is to “create a better tomorrow” for those who follow in our footsteps. But what does that mean in practice? How can we start thinking about what makes for a “good ancestor”? In this exercise, we ask you to collect your thoughts on this question.
You may start by describing character traits or attributes of a “good ancestor”, or by outlining actions they would or wouldn’t take.
E.g. A good ancestor…
Crucial consideration #
This is a linkpost for https://forum.effectivealtruism.org/topics/crucial-consideration
More to explore on ‘Our Final Century’ #
Existential risks #
- Policy and research ideas to reduce existential risk - 80,000 Hours (5 mins.)
- The Vulnerable World Hypothesis - Future of Humanity Institute - Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default. (45 mins.)
Criticism #
- Democratizing Risk: In Search of a Methodology to Study Existential Risk (50 mins.)
- A critical review of “The Precipice” (maybe come back to the “Unaligned Artificial Intelligence” section next week, once you’ve engaged with the argument for AI risk).
Biosecurity #
- “Future risks” chapter of The Precipice, introduction and “Pandemics” section. Stop at “Unaligned artificial intelligence” (Book - 25 mins.)
- Biosecurity needs engineers and material scientists (4 mins.)
- Reducing Global Catastrophic Biological Risks Problem Profile - 80,000 Hours (1 hour)
- The Apollo Report
- Global Catastrophic Risks, Chapter 20 - Biotechnology and Biosecurity - An overview from ~15 years ago of how biotechnological power is increasing exponentially, as measured by the time needed to synthesize a given sequence of DNA. This has important implications for biosecurity. (1 hour)
- Open until dangerous: the case for reforming research to reduce global catastrophic risk (Video - 50 mins.)
- Dr. Cassidy Nelson on the twelve best ways to stop the next pandemic (and limit COVID-19) 80k podcast interview (podcast - 2.5 hours)
- Andy Weber on rendering bioweapons obsolete and ending the new nuclear arms race 80k podcast interview (podcast - 2 hours)
- Article on information hazards in biotechnology (15 mins.)
- Using Export Controls to Reduce Biorisk (6 mins.)
- Lynn Klotz on improving the Biological and Toxin Weapons Convention (BWC) (10 mins.)
- Horsepox synthesis: A case of the unilateralist’s curse? (8 mins.)
Climate Change #
- Climate Change Problem Profile - 80,000 Hours - An analysis of the worst risks of climate change, some of the most promising ways to reduce them, and how to prioritize climate change against other problems. (30 mins.)
- Effective Environmentalism (Website)
Nuclear security #
- Daniel Ellsberg on the creation of nuclear doomsday machines - Daniel Ellsberg on the institutional insanity that maintains large nuclear arsenals, and a practical plan for dismantling them (Podcast - 2 hours 45 mins.)
- List of nuclear close calls - Wikipedia - A description of the thirteen events in human history so far that could have led to an unintended nuclear detonation (5 mins.)
- Risks from Nuclear weapons - A series of posts exploring the extent to which nuclear risk reduction is a top priority, and the most effective ways to reduce nuclear risk.
- Nuclear security - Brief summary + relevant articles on the EA Forum
Global governance and international peace #
- Ambassador Bonnie Jenkins on 8 years of combating WMD terrorism - an interview with Bonnie Jenkins, Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. (Podcast - 1 hour 40 mins.)
- Modeling Great Power conflict as an existential risk factor (40 mins.)
- Why effective altruists should care about global governance - Because global catastrophic risks transcend national borders, we need new global solutions that our current systems of global governance struggle to deliver. (Video - 20 mins.)
- Destined for War: Can America and China Escape Thucydides’s Trap? (Book)
-
“In the course of the last four months it has been made probable — through the work of Joliot in France as well as Fermi and Szilárd in America — that it may become possible to set up a nuclear chain reaction in a large mass of uranium, by which vast amounts of power and large quantities of new radium-like elements would be generated. Now it appears almost certain that this could be achieved in the immediate future. This new phenomenon would also lead to the construction of bombs, and it is conceivable — though much less certain — that extremely powerful bombs of a new type may thus be constructed. A single bomb of this type, carried by boat and exploded in a port, might very well destroy the whole port together with some of the surrounding territory. However, such bombs might very well prove to be too heavy for transportation by air.”
Einstein–Szilárd letter, Wikipedia, Archived link, retrieved 17 October 2017. ↩︎
-
Nick Bostrom defines an existential risk as an event that “could cause human extinction or permanently and drastically curtail humanity’s potential”. An existential risk is distinct from a global catastrophic risk (GCR) in its scope: a GCR is catastrophic at a global scale, but retains the possibility of recovery. The phrase “existential threat”, by contrast, is often used simply as a linguistic intensifier to make a threat sound more dire. ↩︎
-
Greenberg surveyed users of Mechanical Turk, who tend to be 20-40 and more educated than average, so the survey doesn’t represent the views of all Americans. See more detail in this video: Social Science as Lens on Effective Charity: results from four new studies – Spencer Greenberg .
The initial survey found a median estimate of the chance of extinction within 50 years of 1 in 10 million. Greenberg did three replication studies and these gave higher estimates of the chances. The highest found a median of 1 in 100 over 50 years. However, even in this case, 39% of respondents still guessed that the chances were under 1 in 10,000 (about the same as the chance of a 1km asteroid strike). In all cases, over 30% thought the chances were under 1 in 10 million. You can see a summary of all the surveys here.
Note that when we asked people about the chances of extinction with no timeframe, the estimates were much higher. One survey gave a median of 75%. This makes sense — humanity will eventually go extinct. This helps to explain the discrepancy with some other surveys. For instance, “Climate Change in the American Mind” (May 2017, archived link ), found that the median American thought the chance of extinction from climate change is around 1 in 3. This survey, however, didn’t ask about a specific timeframe. When Greenberg tried to replicate the result with the same question, he found a similar figure. But when Greenberg asked about the chance of extinction from climate change in the next 50 years, the median dropped to only 1%. Many other studies also don’t correctly sample low probability estimates — people won’t typically answer 0.00001% unless presented with the option explicitly.
However, as you can see, these types of surveys tend to give very unstable results. Answers seem to vary on exactly how the question is asked and on context. In part, this is because people are very bad at making estimates of tiny probabilities. This makes it hard to give a narrow estimate of what the population in general thinks, but none of what we’ve discovered refutes the idea that a significant number of people (say over 25%) think the chances of extinction in the short term are very, very low, and probably lower than the risk of an asteroid strike alone. Moreover, the instability of the estimates doesn’t seem like reason for confidence that humanity is rationally handling these risks. ↩︎
-
E.g. if biorisk were 1% in the next century, each of these interventions would cut the absolute risk of catastrophe by at least 0.01 percentage points. ↩︎
-
This version of a ‘sentinel system’ is going to be neglected by traditional public health authorities and governments because they won’t be searching for engineered threats designed to elude pathogen-specific detection tools. ↩︎
-
Another discussion of this idea, a ‘nucleic acid observatory,’ can be found here. ↩︎
-
One possible downside risk is that adversarial countries might interpret such a huge bulk purchase of PPE as being evidence or preparation for strategic biological warfare, leading to an arms race dynamic. The company should therefore be cautious about how it messages and should also liberally sell this equipment everywhere in the world to signal defensive intent. ↩︎