AI Safety
A pillar of Astera’s philosophy is openness and sharing, grounded in a belief that scientific progress should benefit everyone. At the same time, we take the risks associated with artificial intelligence extremely seriously. As our research advances, we continually assess potential harms and carefully evaluate the implications of releasing code or publishing results.

One of the most important ways we hope to contribute to safety is by improving our scientific understanding of intelligence itself. Some paths to advanced AI are likely safer than others, and discovering that a particular brain-like approach is significantly more or less risky could meaningfully shift global AI progress toward safer directions. A deeper understanding of how intelligent systems work—biological or artificial—is essential for developing tools that can reliably monitor, interpret, and steer them.

Simplex

Simplex is our largest effort in this direction. The team aims to build a real science of intelligence: a rigorous theory that explains how networks, both artificial and biological, organize information internally and how that structure drives thoughts, behaviors, and computation. Simplex has developed a geometric framework for uncovering the internal structures that produce model capabilities and is now scaling this approach to frontier systems, building tools for unsupervised discovery, and beginning to bridge its insights to neuroscience. The long-term goal is a shared scientific foundation for understanding and directing intelligence toward better outcomes for humanity.