
Obelisk

Obelisk is an Artificial General Intelligence laboratory that draws on neuroscience and brain architecture to create new, multi-system models of intelligence.

Our Goal

Our ultimate goal is to build an agent that can adapt to a changing environment, create and use tools, learn, and reason — without harming the things that humans care about. 

Along the road to general intelligence, we hope to develop concepts, techniques, and methods that advance machine learning.

Our Approach

Obelisk draws inspiration from neuroscience and the multi-system organization of the brain to design and build new multifunctional models of intelligence. We seek to abstract the underlying “brain architecture” that makes neurobiology so successful at such diverse processes.

Advances in neuroscience theory often translate into AI breakthroughs. Discoveries about the function of synapses led to the first artificial neural networks. Insights from the visual cortex gave us convolutional neural networks that excel at vision tasks. Insights from psychology helped lead to reinforcement learning systems that could master board, card, and video games. Ideas about attention led to transformers proficient in language tasks. Typically, each advance has accelerated one particular aspect of machine learning targeting one narrow task.

We believe that neuroscience theory has become sufficiently sophisticated to pursue brain-like artificial intelligence, with many subsystems working together, using shared attention and memory resources. Obelisk aims to create AI models inspired by neurobiological brain architecture, incorporating many different design patterns into a unified system. The potential benefits of this approach include agents that can quickly adapt their understanding to changing circumstances, generalize to out-of-domain scenarios with limited data, and train with greater sample efficiency.

How We Are Different

Deep-learning artificial neural networks have made tremendous advances in recent years, overcoming a host of hard problems. As a result, the majority of AI research and funding has focused on this area. Massive ANNs trained on large datasets with huge amounts of computational power have proven excellent at recognizing and expressing structural regularities. However, this approach still faces limitations and challenges on the path to AGI. Deep learning systems remain fragile and inflexible, easily “confused” by new data or situations. Whole classes of cognition remain elusive: contextual awareness, reasoning, and common sense. Many sensorimotor abilities that are basic for children or even simple animals have proven unattainable. It is not evident that more data, more compute, and larger models will necessarily break through these limitations.

At Obelisk we believe that the brain still has many powerful lessons to teach us. Much ML research ignores the world’s only working model of true intelligence: the human brain. The brain is astonishingly flexible, efficient, and fast. It makes highly successful decisions and actions with minimal data in infinitely variable environments. It has much more to offer to the field of artificial intelligence. 

By synthesizing the latest neuroscience discoveries as well as the results of our own basic research, Obelisk seeks to create multiple diverse algorithms that perform discrete functions analogous to the systems inside a living brain. We hope to connect these systems within a new AI architecture that combines the flexible adaptation of animal brains with the full power of machine learning.

While our approach is different from other major AI research projects, we view it as purely complementary, and hope to contribute to and learn from the incredible progress in mainstream deep learning projects.


Scientific Background

For over 20 years, Science Director Randall O’Reilly has been researching computational neuroscience, building algorithmic representations of discrete brain functions and determining the best balance of abstraction and fidelity within an artificial thinking system. Obelisk was created to bring a more directed, engineering-based approach to the research of Dr. O’Reilly and his colleagues, to scale and unify their models, and to demonstrate and improve those models’ capabilities.

As we build up our knowledge of neurological systems and build out our own artificial intelligence architecture, we hope to draw on theoretical advances from across the fields of neuroscience and AI. We collaborate with diverse science and engineering labs to answer critical questions, openly share our work, and incorporate all the best ideas on how to convert neuroscience systems into promising algorithmic models.


Why Now?

Attempts to convert theories of the brain into functional machines are nothing new. However, we believe we have reached a critical moment in both neuroscientific knowledge and computational power that provides an unprecedented opportunity for radical progress. Theoretical neuroscience has been supercharged by improvements in neuroimaging, and computational models now incorporate circuit-level understanding of the brain. Machine learning has successfully drawn on many of these breakthroughs to create highly successful models of perception, reward pursuit, language generation, and other narrow tasks. We now have remarkably effective individual modules for specific forms of intelligence. The task we face now is to integrate these discrete operations into a multi-system, multi-functional intelligence. Multiple disciplines have developed to the point that Obelisk can now begin designing and building this new model.

Our Work

In order to push the boundaries of artificial intelligence, we must test and develop new theories about how intelligence works at functional and computational levels. We have learned a great deal about neuroscience, cognitive science, and machine learning, but more basic research is necessary. 

Dr. Randall O’Reilly is our scientific lead and the foundational inspiration for much of our work, but we strive to incorporate ideas of theorists from across all relevant fields.

Neuroscience

Our knowledge of how the brain works is still incomplete. Each new discovery can inform our computational model and potentially accelerate the process of building a functional AGI system. Obelisk will work with various neuroscience labs to pose and answer these questions. For example, we have a fruitful collaboration with Karen Zito’s lab at UC Davis, testing a central hypothesis about how temporal differences in neural activity drive changes in synaptic strength — the mechanism that powers learning in our models.

Computational Model

Our model includes a coherent architecture combining error-driven, predictive, and associative learning; short-term memory; multiple kinds of highly specific, evolved training signals; novel forms of reinforcement learning that go beyond the standard temporal-difference paradigm; and specific affordances for operations like variable binding.
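To make one of these ingredients concrete, here is a minimal sketch of predictive, error-driven learning: a small network learns by predicting its next sensory input and adjusting its weights against the prediction error. This is purely illustrative — plain NumPy with arbitrary sizes and constants, not our actual model code.

```python
import numpy as np

# Minimal sketch of predictive, error-driven learning: the network
# predicts its next sensory input, and the prediction error drives
# the weight update. All names and sizes are illustrative only.

rng = np.random.default_rng(0)
n_in, n_hid = 16, 32
W_in = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden
W_out = rng.normal(0, 0.1, (n_in, n_hid))  # hidden -> predicted next input
lr = 0.01

def step(x_t, x_next):
    """One predictive-learning step: predict x_{t+1} from x_t."""
    h = np.tanh(W_in @ x_t)          # hidden representation
    x_pred = W_out @ h               # prediction of the next input
    err = x_next - x_pred            # error signal: actual minus predicted
    # Error-driven updates, one step per layer
    dW_out = lr * np.outer(err, h)
    dW_in = lr * np.outer((W_out.T @ err) * (1 - h**2), x_t)
    return err, dW_out, dW_in

# Drive the network with a predictable sequence: a drifting sine pattern.
xs = [np.sin(np.linspace(0, 2 * np.pi, n_in) + 0.3 * t) for t in range(200)]
for t in range(len(xs) - 1):
    err, dW_out, dW_in = step(xs[t], xs[t + 1])
    W_out += dW_out
    W_in += dW_in

print("final mean |prediction error|:", np.abs(err).mean())
```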

We will continue to research, experiment, and improve our existing model, filling in gaps in our understanding of how the brain actually produces intelligent behavior.

Basic research and modeling are only useful insofar as they can be applied to functional algorithms that perform useful computation. We aim to produce useful open source code and build applications that demonstrate these models can succeed at challenging real-world problems.

Unifying the Model

Dr. O’Reilly has previously developed many different models of brain subsystems such as attention, motivation, memory, and learning. (Code available here.) We need to make these systems work with each other and combine them into a large unified model in order to realize their full potential. This will hopefully create a system that is much more powerful than the sum of its parts — one that can solve problems that elude other AI approaches.

Training Environment

We are developing training environments that scale in complexity as our model becomes more capable. These environments are designed to provide metrics for true general intelligence — the ultimate goal of the project — and ensure that we are testing and optimizing towards general intelligence as opposed to simpler problems with narrower solutions. 
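As a purely hypothetical sketch of what such an environment could look like (the interface, task, and mechanics below are invented for illustration, not our actual environments), a task whose world size and noise scale with a single complexity knob keeps the benchmark moving as the agent improves:

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of a training environment whose difficulty
# scales with a single `complexity` knob; not Obelisk's actual API.

@dataclass
class StepResult:
    observation: list
    reward: float
    done: bool

class ScalableEnv:
    """Toy navigation task: find the goal on a line. World size and
    observation noise grow with `complexity`, so evaluation keeps
    pace with the agent instead of saturating on one fixed problem."""

    def __init__(self, complexity: int = 1, seed: int = 0):
        self.rng = random.Random(seed)
        self.complexity = complexity
        self.reset()

    def reset(self) -> list:
        self.size = 4 * self.complexity          # world grows with complexity
        self.goal = self.rng.randrange(self.size)
        self.pos = self.rng.randrange(self.size)
        self.steps = 0
        return self._obs()

    def _obs(self) -> list:
        # Observation noise also scales with complexity.
        noise = self.rng.gauss(0, 0.1 * self.complexity)
        return [self.pos, self.goal + noise]

    def step(self, action: int) -> StepResult:
        """action: -1 (left) or +1 (right)."""
        self.pos = max(0, min(self.size - 1, self.pos + action))
        self.steps += 1
        done = self.pos == self.goal or self.steps >= 10 * self.size
        reward = 1.0 if self.pos == self.goal else -0.01
        return StepResult(self._obs(), reward, done)

# A random agent across increasing complexity levels:
for level in (1, 2, 4):
    env = ScalableEnv(complexity=level)
    done, total = False, 0.0
    while not done:
        result = env.step(random.choice((-1, 1)))
        done = result.done
        total += result.reward
    print(f"complexity {level}: return {total:+.2f}")
```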

Evolution Framework

Our AI models will undergo an evolutionary process based on their performance in virtual environments. Selection will operate on the many hyperparameters that shape each model’s behavior over its “lifespan”, and the number of hyperparameters will dramatically increase as we unify the various modeled brain regions into a single system. We need to build an evolutionary framework that can automatically search this space of possibilities and reveal the most promising paths for new permutations and configurations.
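A minimal sketch of what such a search loop might look like follows. The hyperparameter names and the fitness function are stand-ins for illustration; in practice, fitness would come from a model’s performance over its lifespan in a virtual environment.

```python
import random

# Minimal sketch of evolutionary hyperparameter search. The toy
# fitness function stands in for "run the model's lifespan in the
# environment"; names and ranges are illustrative only.

random.seed(0)
PARAM_RANGES = {"lr": (1e-4, 1e-1), "decay": (0.0, 1.0), "gain": (0.5, 4.0)}

def random_genome():
    return {k: random.uniform(*r) for k, r in PARAM_RANGES.items()}

def mutate(genome, scale=0.1):
    """Perturb one hyperparameter, clipped to its allowed range."""
    child = dict(genome)
    key = random.choice(list(PARAM_RANGES))
    lo, hi = PARAM_RANGES[key]
    child[key] = min(hi, max(lo, child[key] + random.gauss(0, scale * (hi - lo))))
    return child

def fitness(genome):
    # Toy stand-in with an optimum inside the search space.
    return -((genome["lr"] - 0.01) ** 2 + (genome["decay"] - 0.5) ** 2
             + (genome["gain"] - 2.0) ** 2)

population = [random_genome() for _ in range(20)]
for generation in range(50):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]                      # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best hyperparameters found:", best)
```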

Scaling Computation

Our work needs to scale in multiple dimensions. As we run larger models, and as the models’ capabilities increase, we will require more complex and computationally expensive training environments. As the demands of the evolutionary framework increase, our computational needs will continuously grow. We will need a team to build and manage these systems.

Visualization

We currently have an effective system for visualizing smaller models. However, as our models grow in size and complexity, we will need to develop new tools to inspect and understand how these models are working and to identify problems.

Current Model Principles


The following are core principles behind our current model. Each has strong potential for transformative computational abilities beyond the scope of mainstream ML exploration.

  • New neuroscience research has developed models of synaptic spike timing and temporal-difference learning, distinct from the rate coding and backpropagation methods used in most machine learning systems. These neural-inspired models can produce robust improvements to learning systems (see the sketch after this list).
  • Multiple time scales within neural networks allow for more nuanced and dynamic learning processes. Within neurobiology, synaptic adaptations include structural spine-level processes that operate on time-scales of days and develop through even slower dynamics such as synaptic proliferation and pruning.
  • Spiking provides an “extra dimension” (time) for encoding information, which is critical for driving dynamic attentional focus across a brain architecture. This allows the system to flexibly couple and decouple information processing across distributed brain areas. Neuroscience provides ample evidence and theory of spike timing mechanics, and the challenge of incorporating these systems into a powerful learning model is now within reach.
  • Truly flexible, adaptive cognitive function requires learning an internal “self model” to drive goal-driven action selection and cognitive sequencing.
  • In effect, the human brain is a self-programming, general-purpose, information-processing system, with powerful parallel processing subsystems organized sequentially over time to achieve arbitrary cognitive functions.
  • A learning machine with self-programming capacity can be built on existing models of prefrontal cortex, basal ganglia, dopamine, and other motivational systems, together with powerful episodic memory capacity via the hippocampus.
  • Successful adaptive behavior and cognitive function derive from predictive learning concerning internal models of both the external environment and a “meta model” of the self. 
  • Neural network models struggle with multi-task learning — the ability of one system to solve multiple distinct problems using the same hardware — because learning on one task tends to catastrophically interfere with prior learning on other tasks, due to the opportunistic nature of error backpropagation learning. The multi-time-scale, spine-based plasticity mechanisms in our model (sketched below) have the potential to overcome these limitations and provide longer-term stability: a larger, slowly changing architecture within which faster learning operates.
  • The model should contain attentional capacity to engage, modify, and (of critical importance) leave undisturbed different elements within its own architecture.
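To ground the spike-timing and multi-time-scale principles above, here is a deliberately simplified sketch: a pair-based spike-timing-dependent plasticity (STDP) rule combined with fast and slow weight components. The constants and the update rule are illustrative assumptions, not our model’s actual learning mechanism.

```python
import numpy as np

# Illustrative sketch only: a pair-based STDP rule plus two weight
# components operating on different time scales. Constants are
# arbitrary; this is not Obelisk's actual learning rule.

rng = np.random.default_rng(0)
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant (ms)
FAST_DECAY = 0.99               # fast weights decay toward zero
CONSOLIDATE = 0.001             # slow weights absorb a trickle of fast ones

def stdp_dw(dt_ms):
    """Weight change from one pre/post spike pair.
    dt_ms = t_post - t_pre: pre-before-post potentiates,
    post-before-pre depresses."""
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU)
    return -A_MINUS * np.exp(dt_ms / TAU)

w_fast, w_slow = 0.0, 0.5
for step in range(1000):
    dt = rng.normal(5.0, 10.0)       # mostly causal pre->post spike pairs
    w_fast += stdp_dw(dt)            # rapid, labile change
    w_fast *= FAST_DECAY             # fast component fades...
    w_slow += CONSOLIDATE * w_fast   # ...unless slowly consolidated

print(f"fast component: {w_fast:+.4f}, slow component: {w_slow:+.4f}")
```

The two-component weight here is the point of contact with the multi-task bullet above: rapid learning lives in the fast, decaying component, while the slow component changes only through gradual consolidation, leaving prior structure largely undisturbed.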

Each of these model components represents a critical feature of Obelisk’s approach to building AI and AGI models — one that differentiates us from most mainstream ML projects. They reflect the cumulative results of 25 years of research. The recent breakthroughs in spiking-based learning mechanisms, coupled with advances in predictive learning and multi-system brain models, provide a unique opportunity to advance this AI framework and to test its potential for achieving AGI.

Join the Obelisk Program

Astera is seeking intrepid individuals from across cognitive science and AI who aren’t satisfied with the pace of progress in modern ML and want to build better tomorrows.