AGI Program
Charting a path towards thinking machines
Artificial General Intelligence will be the most transformative technology in human history, one that can improve all other areas of investigation and endeavor.
We believe AGI can bring good to humanity and that many approaches are needed to ensure success and alignment with society’s values.
Creating AGI that doesn’t destroy the things humans care about will require diverse solutions. With that in mind, we promote alternative AI research and support AI work that falls outside the mainstream.
There are many potential targets and complex obstacles on the path to safe and successful AGI. Astera seeks to complement existing research by supporting alternative pathways and new models. By providing significant resources, Astera offers a home to diverse AI research that would otherwise lack the means to pursue this technology.
Obelisk: Astera’s AGI Laboratory
Obelisk is a team of researchers who are pursuing an exploratory, neuroscience-informed approach to engineering AGI.
Astera enables the Obelisk team to focus on basic research and take a long-term view. Obelisk is unconstrained by the need to secure funding, generate profit, or publish results. The team also has access to significant computational resources.
Our research and experiments are focused on the following problems:
1. How does an agent continuously adapt to a changing environment and incorporate new information?
2. In a complicated, stochastic environment with sparse rewards, how does an agent associate rewards with the correct set of actions that led to them? (See the sketch after this list.)
3. How does higher-level planning arise?
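To make the second question concrete, here is a minimal sketch of the credit assignment problem it describes. Everything in it (the discount factor, the toy episode, the action names) is invented for illustration and is not Obelisk code: a single sparse reward arrives only at the end of an episode, and a plain discounted return spreads credit backwards over every earlier step, whether or not that step actually mattered.

```python
# Illustrative only: a toy episode with a single sparse reward at the end.
# The discount factor and the episode itself are invented for this example.
gamma = 0.99

# (state, action, reward) tuples; the reward is zero everywhere except the final step.
episode = [
    ("s0", "explore", 0.0),
    ("s1", "pick_key", 0.0),   # the step that actually mattered
    ("s2", "wander", 0.0),
    ("s3", "open_door", 1.0),  # sparse reward arrives here
]

# Monte Carlo returns: every earlier action receives discounted credit for the
# final reward, whether or not it contributed -- the credit assignment problem.
returns = []
G = 0.0
for _, _, reward in reversed(episode):
    G = reward + gamma * G
    returns.append(G)
returns.reverse()

for (state, action, _), G in zip(episode, returns):
    print(f"{state} {action:<10} return = {G:.3f}")
```

Because the plain discounted return credits "wander" almost as much as "pick_key", an agent needs additional machinery, such as temporal-difference learning or eligibility traces, to attribute the reward to the actions that truly caused it.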
Our approaches are heavily inspired by cognitive science and neuroscience. To measure our progress, we implement reinforcement learning tasks where humans currently do much better than state-of-the-art AI.
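Obelisk's actual task suite is not described here, but the general shape of such an evaluation is a standard reinforcement learning loop. The toy environment, horizon, and random baseline below are assumptions made purely to illustrate how a gap between human and agent performance on a sparse-reward task might be measured.

```python
import random

class SparseRewardTask:
    """Toy episodic task: reward 1.0 only if the agent reaches the goal cell."""
    def __init__(self, size=10, horizon=50):
        self.size, self.horizon = size, horizon

    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos  # observation

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.size, self.pos + action))
        self.t += 1
        done = self.pos == self.size or self.t >= self.horizon
        reward = 1.0 if self.pos == self.size else 0.0
        return self.pos, reward, done

def evaluate(policy, episodes=100):
    """Average return of a policy over many episodes (here, its success rate)."""
    env, total = SparseRewardTask(), 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
    return total / episodes

def random_policy(obs):
    return random.choice([-1, 1])

print(f"random agent success rate: {evaluate(random_policy):.2f}")
```

The gap between a human's success rate and an agent's success rate on tasks like this is the kind of signal used to track progress.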
AGI Safety
A pillar of Astera’s philosophy is openness and sharing. That said, we take the risks associated with artificial intelligence very seriously.
We continually measure and assess the risks of harm as our research progresses to ensure that we avoid danger. Beyond that, however, we hope to contribute positively to safety. Some paths to AGI are probably safer than others. Discovering that a particular brain-like AI is much safer or more dangerous than an alternative could be incredibly valuable if it shifts humanity's AI progress in a safer direction.
With this in mind, we will carefully consider the safety implications before releasing source code or publishing results.
Obelisk Leadership
Jed McCaleb is the founder of the Astera Institute. His focus is on accelerating progress, expanding potential, and increasing net happiness. He is also co-founder and Chief Architect of the Stellar Development Foundation, the organization that maintains the open-source Stellar Network.
Gary leads engineering for Obelisk. Previously, at Microsoft, he led teams working on the Open Neural Network Exchange (ONNX) ecosystem. Prior to that, he worked at Google, where he led teams working on engineering productivity for the Fuchsia operating system and Google's search infrastructure.