In keeping with our vision to safeguard future generations, we must ensure there is a world in which those generations can flourish. The most robust actions we can take to protect humanity's long-term potential are mitigating serious global disasters and conducting fundamental research to understand what our highest priorities should be.

Our recommendations currently focus on catastrophic and existential risk reduction and on global priorities research. This includes the safe development of advanced artificial intelligence; biosecurity (including pandemic preparedness); nuclear security and the avoidance of great power conflict; climate change; and policy reform and improving institutional decision-making.

“If all goes well, humanity has a vast future ahead of it — but very little of our philanthropy takes the scale of this future seriously. That’s why I am so excited about Effective Giving. They really get it, and are finding opportunities for truly lasting impact.”

Toby Ord, Senior Research Fellow in Philosophy at the University of Oxford and author of The Precipice

Acting Faster Against Pandemics

A study led by researchers at the University of Oxford to validate a new diagnostic tool that uses nanopore sequencing to detect emerging infectious diseases early. If widely adopted, the tool would allow medical professionals to begin testing people much faster than was possible for COVID-19, and could therefore prove crucial in containing future pandemics.


Preventing Biological Catastrophes

A research project by the Johns Hopkins Center for Health Security into new approaches to preventing and mitigating global catastrophic biological risks, in collaboration with the Future of Humanity Institute, a multidisciplinary research institute at the University of Oxford focused on the analysis of existential risks.


Promoting Safe And Beneficial AI

A research project by the Center for Human-Compatible AI (CHAI) into provably beneficial AI and into increasing the emphasis on safety in the wider AI field. Led by Professor Stuart Russell, co-author of the most widely used textbook on AI, CHAI is one of the first academic research centres dedicated to the design of safe and reliably beneficial artificial intelligence systems.


Driving Global Priorities Research

Launching the Forethought Foundation for Global Priorities Research, which promotes philosophy and social science research into how best to positively influence the long-term future. Working closely with the University of Oxford’s Global Priorities Institute, the Foundation offers global priorities research scholarships, fellowships and grants to students and scholars.


Building A Longtermist Legal Framework

A Legal Priorities Research Network (LPRN) at Harvard Law School focused on developing a longtermist legal research agenda and building a community of legal scholars who care about safeguarding future generations. LPRN aims to positively influence laws and institutions in order to reduce existential risk and to build a long-term perspective into national policymaking.


For enquiries, please contact