AI safety

Advances in AI will carry serious risks. We aim to do our research safely, produce technologies that can be deployed safely, and influence others to do the same.

Advances in AI have the potential to bring huge benefits, and as a nonprofit we are committed to ensuring that those benefits are shared by all. However, we also take the risks seriously, and members of our team have been contributing to AI safety research for years.

We continually assess the risks of harm as our research progresses. We believe our current work carries very low risk of causing harm in the short term, since we are focused on producing AI with only a subset of the abilities of a non-human animal such as a rat.

Beyond simply avoiding danger, we hope to contribute positively to safety. There are probably multiple possible paths to AGI, and some may be safer than others. We may discover that a particular brain-like AI is much safer, or much more dangerous, than the alternatives. That information could be enormously valuable if it shifts humanity’s AI progress in a safer direction.

Although we hope the benefits of our work accrue to everyone, we will carefully consider the safety implications before releasing source code or publishing results.
