AGI

Charting a new path towards thinking machines

AGI at Astera

Artificial General Intelligence will be a transformative technology in human history, one that can improve all other areas of human investigation and endeavor. At Astera, we believe AGI can bring good to humanity and that multiple approaches are needed to ensure its success and full alignment with society’s values.

Creating safe, socially aligned AGI will require diverse solutions, and with that in mind, we promote alternative AI research and support AI work that falls outside the mainstream. There are many potential targets and complex obstacles on the path to safe and successful AGI, and Astera seeks to complement existing research by supporting alternate pathways and new models. Astera provides a home for diverse AI research that could not otherwise be pursued because of the prohibitive computing resources required.

About Obelisk

Obelisk is the Artificial General Intelligence laboratory at Astera. We are focused on the following problems:

  • How does an agent continuously adapt to a changing environment and incorporate new information?
  • In a complex, stochastic environment with sparse rewards, how does an agent associate rewards with the actions that actually produced them?
  • How does higher-level planning arise?

Our approaches to these problems are heavily inspired by cognitive science and neuroscience. To measure our progress, we are implementing reinforcement learning tasks on which humans currently do much better than state-of-the-art AI.
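To make the sparse-reward credit-assignment problem concrete, here is a minimal sketch, assuming the standard Gymnasium API, of a toy gridworld where reward arrives only at the goal. It is an illustration, not one of our actual benchmark tasks; the environment name, sizes, and reward scheme are invented for this example.

```python
# Hypothetical sketch of a sparse-reward task in the Gymnasium API.
# The environment, sizes, and reward scheme are illustrative only.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SparseGridWorld(gym.Env):
    """Agent starts at (0, 0) and must reach (size-1, size-1).

    Reward is 0 on every step and 1 only upon reaching the goal,
    so credit must be assigned across the whole action sequence.
    """

    def __init__(self, size: int = 8, max_steps: int = 100):
        self.size = size
        self.max_steps = max_steps
        self.observation_space = spaces.Box(0, size - 1, shape=(2,), dtype=np.int64)
        self.action_space = spaces.Discrete(4)  # up, down, left, right

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.array([0, 0])
        self.steps = 0
        return self.pos.copy(), {}

    def step(self, action):
        moves = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
        self.pos = np.clip(self.pos + moves[action], 0, self.size - 1)
        self.steps += 1
        reached_goal = bool((self.pos == self.size - 1).all())
        reward = 1.0 if reached_goal else 0.0  # sparse: zero until success
        truncated = self.steps >= self.max_steps
        return self.pos.copy(), reward, reached_goal, truncated, {}
```

A random agent in this environment almost never encounters a nonzero reward, which is exactly what makes associating the eventual reward with the early actions that enabled it so difficult.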

Safety

Advances in AI have the potential to bring huge benefits, and as a nonprofit we are committed to ensuring that those benefits are shared by all. However, we also take the risks very seriously, and members of our team have been contributing to AI safety for years.

We continually assess the risks of harm as our research progresses, but we believe our current research carries very low risk of causing harm in the short term, since we are focused on producing AI with a subset of the abilities of a non-human animal such as a rat.

Beyond just avoiding danger, we hope to contribute positively to safety. There are probably multiple possible paths to AGI, and some may be safer than others. We may discover that a particular brain-like AI is much safer or more dangerous than some alternative. This information could be incredibly valuable if it shifts humanity’s AI progress in a safer direction.

Although we hope any benefits of our work accrue to all, we will carefully consider the safety implications before releasing source code or publishing results.

Our Team Structure

Research scientists generate hypotheses and lay out high-level approaches toward meeting our goals. Our research scientists have diverse intellectual backgrounds.

Research engineers fluidly move between helping our research scientists run experiments and generate new hypotheses, optimizing performance and resource utilization, and updating agent environments and training data.

Our software engineers build the infrastructure for our research, including:

  • Physical computing infrastructure (e.g., computers, GPUs, switches).
  • Cluster management software.
  • Task automation, such as automated hyperparameter search and regression testing (a minimal search sketch follows this list).
  • Experiment tracking systems.
  • Training and testing environments.
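As a sketch of the hyperparameter-search automation mentioned in the list above, the loop below draws random configurations and keeps the best-scoring one. Both the search space and the train_and_evaluate function are hypothetical stand-ins for a real training pipeline, not part of our actual tooling.

```python
# Minimal random hyperparameter search, sketched for illustration.
# SEARCH_SPACE and train_and_evaluate are hypothetical stand-ins.
import random


def train_and_evaluate(params: dict) -> float:
    """Stand-in: a real version would launch a training run with
    `params` and return its evaluation score."""
    # Toy objective so the sketch runs end to end; replace with real training.
    return -abs(params["learning_rate"] - 1e-3) - abs(params["discount"] - 0.99)


SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -2),
    "discount": lambda: random.uniform(0.9, 0.999),
    "batch_size": lambda: random.choice([32, 64, 128, 256]),
}


def random_search(n_trials: int = 20):
    """Sample configurations at random and keep the best-scoring one."""
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {name: sample() for name, sample in SEARCH_SPACE.items()}
        score = train_and_evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score


if __name__ == "__main__":
    params, score = random_search()
    print(f"best params: {params}, score: {score:.4f}")
```

Random search is a deliberately simple baseline; the same loop structure generalizes to more sophisticated strategies without changing how experiments are launched or tracked.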