AI safety
Advances in AI will carry serious risks. We aim to do our research safely, produce technologies that can be deployed safely, and influence others to do the same.

Advances in AI have the potential to bring huge benefits, and as a nonprofit we are committed to ensuring that those benefits are shared by all. However, we also take the risks very seriously, and members of our team have been contributing to AI safety for years:
- Our founder Jed McCaleb has been a major donor and advocate for AI safety research since at least 2015, and he started the Obelisk project largely to try to steer AGI research in a safe direction.
- Our chief scientist Randall O’Reilly has been writing about the safety implications of brain-like AGI since at least 2016.
- Steve Byrnes has been studying AGI safety since 2019. He continues that research as part of the Obelisk team.
We continually assess the risks of harm as our research progresses. We believe our current research carries very low risk of causing harm in the short term, since we are focused on producing AI with only a subset of the abilities of a non-human animal such as a rat.
Beyond just avoiding danger, we hope to contribute positively to safety. There are probably multiple possible paths to AGI, and some may be safer than others. We may discover that a particular brain-like AI is much safer, or much more dangerous, than the alternatives. This information could be incredibly valuable if it shifts humanity's AI progress in a safer direction.
Although we hope the benefits of our work accrue to all, we will carefully consider the safety implications before releasing source code or publishing results.

