Grants

OUR APPROACH

 

In keeping with our vision - to safeguard future generations - we must ensure that there is a world in which these generations can flourish. The most robust actions we can take to protect humanity's long-term potential are mitigating serious global disasters and conducting fundamental research to understand what our highest priorities should be.

Our recommendations currently focus on catastrophic and existential risk reduction and global priorities research. This includes the safe development of advanced artificial intelligence; biosecurity (including pandemic preparedness); nuclear security and the avoidance of great power conflict; climate change; and policy reform and improved institutional decision-making.

SELECTED GRANTS

 

01/

Study of COVID-19 in the UK Community 

If we face an even worse pandemic in the future, we need to be able to start testing much faster than we did for COVID-19. To tackle this problem, our advisee Ben Delo funded a study, led by researchers at the University of Oxford, to validate a new diagnostic tool - nanopore sequencing. Unlike more conventional testing methods, nanopore sequencing does not require prior knowledge of the pathogen, making it uniquely useful for new diseases: it allows researchers to sequence a pathogen's whole genome and detect emerging infectious diseases early. If widely adopted, this technology could prove crucial for containing future outbreaks.

02/

Johns Hopkins Center for Health Security 

We consider large-scale pandemics to be among the most likely threats to civilisation’s long-term progress. This grant supports the Center for Health Security’s research into new approaches to mitigate and prevent global catastrophic biological risks, in collaboration with the Future of Humanity Institute, a multidisciplinary research institute at the University of Oxford focused on the analysis of existential risks.

03/

Center for Human-Compatible AI (CHAI)

The advent of advanced artificial intelligence is likely to have a substantial, long-lasting impact on our society. Led by Professor Stuart Russell, co-author of the most widely used textbook on AI, CHAI is one of the first academic research centres dedicated to the design of safe and reliably beneficial artificial intelligence systems. This funding will help CHAI to further its research and to increase the emphasis on safety in the wider AI field.

04/

Forethought Foundation for Global Priorities Research

In his giving letter, Ben recognises that ‘while we have some insight into what the most pressing problems are and how to approach them, we urgently need to know more’. Ben provided seed funding to launch the Forethought Foundation, which aims to promote research within philosophy and the social sciences on how best to positively influence the long-term future. The Forethought Foundation works in close collaboration with the Global Priorities Institute at the University of Oxford and offers scholarships and fellowships to students in global priorities research, as well as research grants for established scholars.

05/

Legal Priorities Research Network

This grant provided seed funding to set up a Legal Priorities Research Network (LPRN) at Harvard Law School. LPRN plans to develop an agenda for legal research focused on longtermism and to build a community of legal scholars who care about safeguarding future generations. The network could produce legal research on longtermist issues that positively influences laws and institutions, with the aim of reducing existential risk. It could also encourage top legal talent to work in policy, bringing a longtermist perspective into national policy-making.

CONTACT US

For enquiries, please contact natalie@effectivegiving.org.