AGI at Astera
Artificial General Intelligence (AGI) will be a transformative technology in human history, one that can improve every other area of human investigation and endeavor. At Astera, we believe AGI can bring good to humanity, and that multiple approaches are needed to ensure its success and its full alignment with society's values.
Creating safe, socially aligned AGI will require diverse solutions. With that in mind, we promote alternative AI research and support AI work that falls outside the mainstream. There are many potential targets and complex obstacles on the path to safe and successful AGI; Astera seeks to complement existing research by supporting alternate pathways and new models. Astera provides a home for diverse AI research that would otherwise lack access to this technology because of the prohibitive resources required.
About Obelisk
Obelisk is the Artificial General Intelligence laboratory at Astera. We are focused on the following problems:
- How does an agent continuously adapt to a changing environment and incorporate new information?
- In a complicated stochastic environment with sparse rewards, how does an agent associate rewards with the correct set of actions that led to those rewards?
- How does higher-level planning arise?
Our approaches to these problems are heavily inspired by cognitive science and neuroscience. To measure our progress, we are implementing reinforcement learning tasks on which humans currently do much better than state-of-the-art AI.
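To make the credit-assignment problem above concrete, here is a minimal sketch (our own illustration, not one of Obelisk's actual tasks) of an episodic environment with a single sparse, delayed reward. Every name in it (`SparseChainEnv`, `run_episode`) is hypothetical: the agent only receives reward after a long sequence of correct actions, so learning requires assigning credit back through every earlier step.

```python
import random

class SparseChainEnv:
    """Toy chain environment with one delayed, sparse reward.

    The agent starts at state 0; action 1 advances it, any other
    action resets it to the start. Reward is 0 everywhere except on
    reaching the final state, so credit for that single reward must
    be propagated back through the entire action sequence.
    """

    def __init__(self, length=10):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        if action == 1:
            self.state += 1
        else:
            self.state = 0  # a wrong move sends the agent back to the start
        done = self.state == self.length
        reward = 1.0 if done else 0.0
        return self.state, reward, done

def run_episode(env, policy, max_steps=100):
    """Roll out one episode and return the total reward collected."""
    state, total = env.reset(), 0.0
    for _ in range(max_steps):
        state, reward, done = env.step(policy(state))
        total += reward
        if done:
            break
    return total

env = SparseChainEnv(length=10)
random.seed(0)
# A uniformly random policy almost never reaches the reward...
random_return = run_episode(env, lambda s: random.choice([0, 1]))
# ...while the optimal policy collects it every episode.
optimal_return = run_episode(env, lambda s: 1)
```

Humans solve tasks with this structure easily by forming a model of the environment; standard model-free RL agents need enormous numbers of episodes as the chain grows.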
Safety
Advances in AI have the potential to bring huge benefits, and as a nonprofit we are committed to ensuring that those benefits are shared by all. However, we also take the risks very seriously; members of our team have been contributing to AI safety for years:
- Our founder Jed McCaleb has been a major donor and advocate for AI safety research since at least 2015, and he started the Obelisk project largely to try to steer AGI research in a safe direction.
- Research scientist Randall O’Reilly has been writing about the safety implications of brain-like AGI since at least 2016.
- Steve Byrnes has been studying AGI safety since 2019. He continues that research as part of the Obelisk team.
We continually assess the risks of harm as our research progresses, but we believe our current work carries very low risk of causing harm in the short term, since we are focused on producing AI with a subset of the abilities of a non-human animal, such as a rat.
Beyond avoiding danger, we hope to contribute positively to safety. There are probably multiple possible paths to AGI, and some may be safer than others. We may discover that a particular brain-like AI is much safer, or much more dangerous, than some alternative. That information could be incredibly valuable if it shifts humanity's AI progress in a safer direction.
Although we hope any benefits of our work accrue to all, we will carefully consider the safety implications before releasing source code or publishing results.
Our Team Structure
Research Scientists
Research scientists generate hypotheses and lay out high-level approaches toward meeting our goals. Our research scientists come from diverse intellectual backgrounds.
Research Engineers
Research engineers move fluidly between helping our research scientists run experiments and generate new hypotheses, optimizing performance and resource utilization, and updating agent environments and training data.
Software Engineers
Our software engineers build the infrastructure for our research, including:
- Physical computing infrastructure (e.g., computers, GPUs, switches).
- Cluster management software.
- Various task automation, such as automating hyperparameter search and regression testing.
- Experiment tracking systems.
- Training and testing environments.
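As a small illustration of the hyperparameter-search automation mentioned above (a generic sketch, not Obelisk's actual tooling; `grid_search` and `fake_train` are hypothetical names), the simplest form of such automation is an exhaustive sweep over a parameter grid:

```python
import itertools

def grid_search(train_fn, param_grid):
    """Run train_fn once per combination in param_grid.

    train_fn maps a config dict to a scalar score (higher is better);
    returns the best (score, params) pair found.
    """
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_fn(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

# A stand-in objective: peaks at lr=0.1, batch_size=32 by construction.
def fake_train(params):
    return -abs(params["lr"] - 0.1) - abs(params["batch_size"] - 32) / 100

best_score, best_params = grid_search(
    fake_train,
    {"lr": [0.01, 0.1, 1.0], "batch_size": [16, 32, 64]},
)
```

Real experiment infrastructure layers scheduling, result tracking, and early stopping on top of this basic loop, but the core contract (a training function evaluated across a configuration space) is the same.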