Founders Pledge recently launched the new Global Catastrophic Risks Fund as the latest addition to our fund offerings. Our Funds, which serve as philanthropic co-funding vehicles that pool donors’ money for high-impact giving, now include:
- The Climate Change Fund (CCF);
- The Global Catastrophic Risks Fund (GCRF);
- The Global Health and Development Fund (GHDF); and
- The Patient Philanthropy Fund (PPF).
These Funds represent a diversity of causes and worldviews. The PPF and GCRF share the goal of mitigating large-scale risks to humanity. The two Funds differ, however, in their approach to this goal.
This blog post explains the differences — and, importantly, the complementarity — between the PPF and the GCRF.
Differences between the Funds
The first and most basic difference between the two Funds is their giving timelines. Whereas the GCRF will give on a rolling basis, the PPF is an experiment in investing to give — growing our resources to give at highly impactful times in the future.1 You can read more about this idea in our Investing to Give report.
There are a number of factors that will shape which timeline you prefer. The first factor is beliefs about future returns on investments. If you think these are likely to be especially high, then all else equal, you may prefer to give to the PPF over the GCRF — your money may have much more leverage in the future thanks to the compounding returns on investment.
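The compounding mechanism behind investing to give can be made concrete with a minimal sketch. All numbers here are hypothetical for illustration (a 5% real annual return over a 50-year horizon), not Founders Pledge projections or an endorsement of any particular return assumption:

```python
# Hypothetical illustration of investing to give.
# The return rate and horizon are illustrative assumptions only.
def future_giving_power(amount: float, annual_return: float, years: int) -> float:
    """Value of an invested donation after compounding for `years`."""
    return amount * (1 + annual_return) ** years

donation = 100_000  # give $100k today...
grown = future_giving_power(donation, 0.05, 50)  # ...or invest at 5% real for 50 years
print(f"${grown:,.0f}")  # → $1,146,740
```

Under these assumed parameters, a patient donor's dollar does more than ten times as much future work — which is precisely why beliefs about future investment returns matter for the choice between the Funds.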
The second factor is beliefs about the timelines of major risks and the near-term tractability of risk reduction measures. If you believe that most of the major threats to humanity — great power war, nuclear weapons, catastrophic biological risks, and risks from artificial intelligence (AI) — are likely near-term threats, and that we can and ought to reduce them today, you may prefer the GCRF.
The third factor is the moral weight one places on future generations relative to current generations. Longtermists generally believe that “future people matter,” and often that they matter just as much as people alive today. Others may place a heavier weight on the wellbeing of current generations. If you don’t think there’s a difference between helping someone today and helping someone 100 years from now who isn’t even alive yet, then this is not a factor in choosing between the Funds. If you have a strong preference for helping people alive today or in the near future, you may prefer the GCRF (although some of its interventions will also seek to shape the long-term risk landscape).
Another factor is the extent to which you think major global problems in the future may require enormous resources. If you think, for example, that there could be some unpredicted crisis at some point in the future, and that solving this problem will require hundreds of millions of dollars at once, then you may want to consider helping to give the world an “insurance policy” via the PPF.
This question also touches on a cluster of related ideas: the time of perils; the most important century; and the precipice. Some scholars of existential risk think that we are living in an unfortunate time — our science and technology are advanced enough that humanity can wipe itself out, but not advanced enough that we can develop effective countermeasures. If you think we are in such a time of perils, and that this time will not last long, you may prefer the GCRF. If you think the really dangerous time is further in the future, you may prefer the PPF. If, finally, you believe that we are in a time of perils, but that there may be many such times, or even that humanity is in a permanent Red Queen’s Race (“it takes all the running you can do, to keep in the same place”) with existential risks, you may want to split your allocation between the two Funds.
This is a non-exhaustive list of differences. Fundamentally, the differences come down to uncertainty — financial uncertainty, moral uncertainty, and uncertainty about how the threat landscape is changing. This is why we believe that the GCRF and the PPF are fundamentally complementary as part of a balanced portfolio approach to high-impact philanthropy.
Complementarity of the Funds
Despite these differences, we believe that the GCRF and PPF complement each other.
First, we believe that one Fund could provide important lessons for the other. The GCRF will take an active grant-making approach, with the ability to seed new organizations, respond to dynamic opportunities, and collaborate with partner organizations. This will not only allow the GCRF to move quickly, but will also generate insights the PPF can draw on to intervene at especially crucial points in humanity’s future.
Consider how the two Funds would respond if a pandemic emerged with the potential to cause a global catastrophe. Such a threat would provide sufficient grounds for the GCRF to take action. At the same time, such a pandemic may also pose an existential risk to humanity, and thus mark an especially influential moment in the human story. In such circumstances, the PPF would also be motivated to act. Insights from the GCRF’s active grant-making could then direct the PPF’s resources towards the most important actors in the crisis. This illustrates how one Fund can improve the quality of the other’s grant-making.
A second complementarity between the Funds rests on worldview diversification. As discussed above, we face many uncertainties when aiming to do the most good: empirical uncertainties (e.g. "Are we living in the most important century?"), methodological uncertainties (e.g. "How should we forecast transformative AI?"), and moral uncertainties (e.g. "Do future people matter morally?"), among others. Under such profound uncertainty, it may not be wise to "put all our eggs in one basket". Instead, diversifying between an "urgent longtermist" or "current generations" perspective (prioritizing threats today) and a "patient" perspective (focusing on future threats) may make sense when each worldview appears highly plausible.
Diversifying between the PPF and GCRF comes at the cost of not fully maximizing expected value. For example, if Alice believes that the GCRF has even 2% more expected impact than the PPF, then it appears logical that she should allocate 100% of her resources to the GCRF. However, such a view is unlikely to be robust given the high uncertainties within each bucket. Splitting one’s portfolio may therefore make more sense.2
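This trade-off can be sketched with a toy model. The impact figures and the 50% credence split below are made-up numbers for illustration only; the point is that a pure expected-value maximizer allocates everything to whichever option has the higher point estimate, however small the edge, while a donor who weights by credence in each worldview ends up with a split portfolio:

```python
# Toy model: expected-value maximization vs. credence-weighted diversification.
# All impact figures and credences are hypothetical.
ev_gcrf = 1.02  # Alice's point estimate: GCRF has 2% more expected impact
ev_ppf = 1.00

# Naive EV maximization: the entire budget follows the higher estimate,
# no matter how thin the margin.
if ev_gcrf > ev_ppf:
    naive_allocation = {"GCRF": 1.0, "PPF": 0.0}
else:
    naive_allocation = {"GCRF": 0.0, "PPF": 1.0}

# A robustness-minded donor instead weights by credence in each worldview,
# e.g. 50% credence in the "urgent" view and 50% in the "patient" view.
credence_urgent = 0.5
diversified_allocation = {"GCRF": credence_urgent, "PPF": 1 - credence_urgent}

print(naive_allocation)        # {'GCRF': 1.0, 'PPF': 0.0}
print(diversified_allocation)  # {'GCRF': 0.5, 'PPF': 0.5}
```

The all-or-nothing behaviour of the first allocation is exactly what makes it fragile: a 2% edge that sits well inside the error bars of the underlying estimates is driving 100% of the decision.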
An example: artificial intelligence (AI)
In short, we think many philanthropists who care about large-scale risks threatening humanity will likely want to allocate their giving between the two Funds. To make this more concrete, consider two startup founders who are looking to do the most good with their exit proceeds — let’s call them Alex and Bo. Both are concerned about potential risks from artificial intelligence (AI), but differ in their key uncertainties about the issue.
Alex is concerned about both the potential for transformative AI in the near future, and the applications of AI to military systems to create autonomous weapon systems that could destabilize great power relations. Moreover, they think that both of these issues will rely on roughly the same machine learning paradigms, with highly advanced AI systems just requiring higher levels of computational power for training and deployment. Alex is broadly optimistic about current approaches to AI safety, like the approaches at the Center for Human-Compatible AI, and moreover believes that the governance of narrow AI applications — e.g. through U.S.-China confidence-building measures — will have beneficial effects for the long-term governance of AI. They are confident in many of these beliefs, but open to being wrong about key parts. Alex therefore decides to allocate around 20% of their giving towards the PPF and 80% towards the GCRF.
Bo, on the other hand, thinks that future powerful AI systems are going to look fundamentally different from current machine learning models and may require different computational hardware, and that current approaches to AI safety may be of little value in the future. Moreover, Bo thinks that progress on transformative AI could surprise humans, leaving very little time to work on making sure such systems are safe — such work may require enormous resources. Bo is also confident about continuing returns on investments and values future people just as much as those alive today. Nonetheless, they think that avoiding a war between the U.S. and China is important in the near-term for future global cooperation on emerging technologies. Bo therefore allocates most of their money (80%) towards the PPF, but gives some (20%) to the GCRF.
Meanwhile, a third donor, Carol, may choose to diversify between the GCRF and the PPF based on deep uncertainty about what is best. For example, they may place 50% credence in the view that only existing people hold moral value and that the best way to save lives is by reducing catastrophic risks. At the same time, they might place 50% credence in the view that most existential threats lie in the coming centuries and that the long-term future is of overwhelming importance. These uncertainties may be extremely difficult to resolve, given the many crucial considerations involved. In such a state, Carol may give 50% of their resources to the GCRF for the benefit of present-day people, and invest the remaining 50% in the PPF to safeguard the long-term future.
In summary, we believe that both the GCRF and PPF are excellent giving options. While they differ in giving timelines and underlying philosophy, they are also complementary to one another, offering multiple opportunities to tackle humanity’s biggest challenges. More poetically: the GCRF will fight the many fires we see in the world today, while the PPF will fill up the tank to prepare for the fires of tomorrow.