We believe AGI could be enormously beneficial to humanity, but that it also poses a real risk of destroying the things humans care about. Even in the most optimistic scenarios, humans as we know them will have to change if they are to remain relevant in the coming decades. The goal of Obelisk is to facilitate these changes, with the hope of bringing humans along into this new world of ultra-capable machines.
Obelisk consists of three complementary efforts: Neuro, AGI, and Safety.
Our effort to illuminate how patterns of neural activity become conscious experience and engineer a predictive science of thought and perception.
With AI on the horizon it is becoming increasingly urgent to better understand how the human brain works. The neuro effort at Astera is designed to help answer the questions and address the technical challenges that we believe will provide the best chance for humans to flourish in this new world.
This is Astera’s ambitious initiative to decode how patterns of neural activity across 86 billion neurons become the thoughts, perceptions, and memories that constitute conscious experience. Built on the hypothesis that the brain uses compositional architecture—combining elemental units through neural syntax—we aim to transition neuroscience from observation to engineering by reading from and writing to neural circuits at scale.
From this program, we hope to understand:
- How the brain represents and manipulates objects and concepts. The more we understand how human intelligence works, the better our chances of creating machine intelligence, and of making it at least roughly aligned with human values.
- What neural circuitry gives rise to consciousness. We currently have only a shallow understanding of consciousness, but it will become extremely important once intelligent agents other than humans are out in the world. We will want to know whether we are enslaving conscious beings. We will want to know that if we are ceding the future, the entities we are ceding it to are also conscious. This understanding also matters for uploading and other brain-augmentation techniques, so we can determine whether an uploaded mind remains conscious.
- Better techniques for reading from and writing to the brain.
- Ways to merge and upload consciousness. Some form of brain augmentation or upload may be the only way for humans to stay relevant in the future.
There are many potential targets and complex obstacles on the path to safe and successful AGI. Astera seeks to complement existing research by supporting alternate pathways and new models. By committing significant resources, Astera offers a home to diverse AI research that would otherwise be neglected.
Our AGI effort is pursuing an exploratory, neuroscience-informed approach to engineering intelligence.
Astera enables the Obelisk team to focus on basic research and take a long-term view. Obelisk is unconstrained by the need to secure funding, turn a profit, or publish results. The team also has access to significant computational resources.
Our research and experiments are focused on the following problems:
- How does an agent continuously adapt to a changing environment and incorporate new information?
- In a complicated stochastic environment with sparse rewards, how does an agent associate rewards with the correct set of actions that led to those rewards?
- How does higher-level planning arise?
Our approaches are heavily inspired by cognitive science and neuroscience. To measure our progress, we implement reinforcement learning tasks on which humans currently far outperform state-of-the-art AI.
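The credit-assignment problem above can be made concrete with a toy sketch. Everything here is a hypothetical illustration, not one of Obelisk's actual benchmarks: a made-up `KeyDoorChain` environment in which reward arrives only at the end of an episode, and only if the agent took one specific action many steps earlier. A first-visit Monte Carlo learner propagates the episode's full return back to every state-action pair, so the pivotal early action receives credit despite the delay.

```python
import random

class KeyDoorChain:
    """Hypothetical sparse-reward environment: the agent walks a corridor
    of `length` steps; it receives +1 at the final door only if it picked
    up the key (action 1) on the very first step."""
    def __init__(self, length=5):
        self.length = length

    def reset(self):
        self.t = 0
        self.has_key = False
        return (self.t, self.has_key)

    def step(self, action):
        if self.t == 0 and action == 1:
            self.has_key = True
        self.t += 1
        done = self.t == self.length
        reward = 1.0 if (done and self.has_key) else 0.0
        return (self.t, self.has_key), reward, done

def train(episodes=2000, eps=0.2, alpha=0.1, seed=0):
    """Monte Carlo control with epsilon-greedy exploration: the full
    episode return is pushed back to every visited state-action pair,
    linking the delayed reward to the early key-pickup action."""
    random.seed(seed)
    env = KeyDoorChain()
    Q = {}
    for _ in range(episodes):
        s, done, traj = env.reset(), False, []
        while not done:
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max([0, 1], key=lambda x: Q.get((s, x), 0.0))
            s2, r, done = env.step(a)
            traj.append((s, a, r))
            s = s2
        G = 0.0  # undiscounted return, accumulated backwards
        for (s_, a_, r_) in reversed(traj):
            G = r_ + G
            old = Q.get((s_, a_), 0.0)
            Q[(s_, a_)] = old + alpha * (G - old)
    return Q

Q = train()
```

After training, the value of picking up the key at the start, `Q[((0, False), 1)]`, exceeds that of skipping it, even though the reward appears five steps later. A one-step method with no bootstrapping or traces would see zero reward at the key-pickup step and learn nothing there, which is the essence of the sparse-reward credit-assignment difficulty.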
A pillar of Astera’s philosophy is openness and sharing, grounded in a belief that scientific progress should benefit everyone. At the same time, we take the risks associated with artificial intelligence extremely seriously. As our research advances, we continually assess potential harms and carefully evaluate the implications of releasing code or publishing results.
One of the most important ways we hope to contribute to safety is by improving our scientific understanding of intelligence itself. Some paths to advanced AI are likely safer than others, and discovering that a particular brain-like approach is significantly more or less risky could meaningfully shift global AI progress toward safer directions. A deeper understanding of how intelligent systems work—biological or artificial—is essential for developing tools that can reliably monitor, interpret, and steer them.
Simplex is our largest effort in this direction. Its work aims to build a real science of intelligence: a rigorous theory that explains how networks, artificial and biological, organize information internally and how that structure drives thoughts, behaviors, and computation. Simplex has developed a geometric framework for uncovering the internal structures that produce model capabilities and is now scaling this approach to frontier systems, building tools for unsupervised discovery, and beginning to bridge insights to neuroscience. The long-term goal is a shared scientific foundation for understanding and directing intelligence toward better outcomes for humanity.