Our grant program funds projects that aim to reduce existential risks from AI, focusing on four underexplored areas that could have significant impact on AGI safety and the future of humanity. We provide funding to support high-risk, high-reward initiatives that develop human capabilities, strengthen AI safety frameworks, and improve the cooperation architectures between and among humans and AI systems.
Total Funding: $300,000
Submission Deadline: Jun 01, 2025
We are interested in funding projects that address one or more of the following four areas:
Automating Research and Forecasting
● Scaling AI-enabled research to support safe AGI development
● Scaling efficient forecasting methods relevant for safe AGI
● Other approaches in this area
Neurotech to Integrate with or Compete Against AGI
● Brain-Computer Interfaces (BCIs) to enhance human cognition or facilitate human-AGI collaboration
● Whole Brain Emulations (WBEs) as human-like intelligences that are interpretable and alignable
● Lo-fi emulations using behavioral and neural data with deep learning
● Other approaches in this area
Security Technologies for Securing (AI) Systems
● Implementations of computer security techniques (POLA, SeL4-inspired systems, hardened hardware)
● Automated red-teaming, vulnerability discovery
● Cryptographic techniques for trustworthy coordination architectures
● Other concrete approaches in this area
Safe Multi-Agent Scenarios
● Game theory addressing interactions between multiple humans, AIs, or AGIs
● Avoiding collusion and deception, encouraging positive-sum dynamics
● Principal-agent problems, Active Inference agents
● Other concrete approaches in this area
We encourage proposals that integrate approaches spanning multiple focus areas.
We do not fund projects outside the four focus areas outlined above. Common exclusions include:
● General AI alignment research
● Application-based AI technology proposals (e.g., AI for healthcare, medical research, education, or business tools)
● Projects in fields unrelated to AI safety
If you have questions, please contact grants@foresight.org.
Peer Review: All proposals will first undergo an initial peer review process to assess whether the application is within scope and meets the basic eligibility requirements. This ensures that the project aligns with the four focus areas of our grant program.
Technical Advisors & Screening Interview: If your proposal passes the peer review, it will be sent to at least three technical advisors for further evaluation. We will also invite you to a short, 15-minute screening interview to discuss your project. The advisor review and the interview take place concurrently.
Technical Discussions: Any technical questions or clarifications will be addressed via email during the review process.