Existential Risk Fellowships

Today. Today… at the edge of our hope, at the end of our time, we have chosen not only to believe in ourselves, but in each other. Today there is not a man nor woman in here that shall stand alone. Not today. Today we face the monsters that are at our door, and bring the fight to them. Today, we are canceling the apocalypse!

Stacker Pentecost, Pacific Rim

In recent years, Effective Altruism has placed increasing emphasis on making sure the world doesn't end. In the 20th century, there were two main ways this seemed possible: nuclear war and extreme climate change. In the 21st century, advances in biology and artificial intelligence have introduced two new major "existential risks": maliciously designed pandemics and rogue AI. These risks are arguably highly neglected, given their plausibility and the fact that they imperil the very survival of humanity.

The interdisciplinary fields of biosecurity and AI safety are growing rapidly to address these risks. Biosecurity is well established, but it is having to adapt to the new threats posed by synthetic biology. AI safety, on the other hand, straight up didn't exist a decade ago, because neither did impressive AI. As such, there is a lot of exciting work to be done and a great need for young talent to enter these fields.

BlueDot Impact has compiled excellent reading lists on these subjects and regularly holds online classes using them as curricula. This fall, Williams EA will run fellowships based on three of these reading lists:

  1. Pandemic Preparedness
  2. AI Alignment
  3. AI Governance