Obelisk is an Artificial General Intelligence laboratory that draws on neuroscience and brain architecture to create new models of adaptive intelligence.
Our ultimate goal is to build an agent that can learn, adapt to a changing environment, create and use tools, and reason, all without destroying the things humans care about.
In the quest to create more advanced, general forms of AI, we often pose simpler problems that seem to lie on the path to the goal. However, it is easy to fall into the trap of optimizing a solution for those problems and becoming sidetracked from the overarching goal. Astera is aware of these pitfalls and strives to always focus on the greater task: building a system that can learn and adapt to new environments. This is the essence of true general intelligence, as opposed to current narrow AI solutions.
Research in computational neuroscience and its intersection with AI falls into various “camps.” One camp argues the brain is mostly pre-wired and seeks detailed connectomic data to understand that wiring. Another takes high-level inductive biases from human cognition and uses them to shape learning. A third builds on particular interpretations of specific brain regions and uses these models to construct general-purpose learning systems. (Obelisk shares the most commonalities with this last camp, though it differs in key technical details.)
Finally, there is the “machine learning” camp, which has risen to dominance in recent years. This camp believes powerful learning algorithms are sufficient, and that we should heed the “bitter lesson” of modern AI: simple architectures (increasingly large-scale deep neural networks trained by error backpropagation), given enough computation and data, will scale to achieve any task.
We feel that the mainstream ML approach, although powerful, does not capture the full diversity of approaches that could lead to fully adaptive AI. Machine learning systems — primarily deep-learning artificial neural networks — have made tremendous advances in recent years, conquering a host of hard problems. However, they still face severe limitations and serious challenges on their path to truly adaptive ability. Massive ANNs trained on large datasets and using huge amounts of computational power have proved excellent at recognizing and expressing structural regularities, but they remain fragile and inflexible, easily “confused” by new data or situations. Whole classes of cognition remain out of reach: contextual awareness, reasoning, and common sense. Additionally, a vast range of sensorimotor abilities have proven unattainable, skills that are basic for children or even simple animals. It is not evident that more data, more compute, and larger models will ever break through these limitations.
Most ML research ignores the world’s only working model of true intelligence: the human brain. The brain is an incredibly complex, messy system and our knowledge of it remains incomplete, so we understand the tendency to look away. (Even nominally neuroscience-inspired approaches often take only high-level cognitive principles and try to encapsulate them in a mathematically crisp way.) However, we believe the brain remains a critical resource on our journey towards truly adaptive artificial intelligence. The brain is astonishingly flexible, efficient, and fast. It makes highly successful decisions and actions with minimal data in infinitely variable environments. It has much more to offer to the field of artificial intelligence.
How We Are Different
Obelisk seeks to abstract the underlying “brain architecture” that makes neurobiology so successful at such diverse processes. The challenge of neuromorphic AI is finding the right level of abstraction between whole-brain emulation and a simple mathematical model. We believe there is a tractable solution somewhere in the middle, one which leverages computational power while also incorporating much more of what we know about the diversity and structure of neuroanatomy.
AI inspired by “brain architecture” will use highly structured and diverse innate subsystems (memory systems and training signals of multiple kinds) to guide and train powerful general-purpose learning algorithms, combining strengths of many approaches in one model. These component parts must fit into a single (at least partially understandable) system at the neuroscience level, and then inform optimal implementations of the same training procedures, pre-built structures and cost functions at the engineering level. This approach is difficult because one has to follow two “masters” at once — raw task performance and fidelity to neuroscience. Most research follows just one. However, by curating the right kind of research environment, we believe it is possible to generate and test new ideas that leverage both computational power and detailed models of brain architecture.
For over 20 years, Obelisk Director of Science Randall O’Reilly has been researching computational neuroscience, building algorithmic representations of discrete brain functions, and determining the best balance of abstraction and fidelity within an artificial thinking system. He has made considerable progress. Obelisk was created to bring a more directed and engineering-based approach to his findings, to scale and unify his models, and to demonstrate and improve their capabilities.
We are seeking bold, creative computer engineers and scientists interested in tackling one of the most exciting and important challenges of our time.
Our model combines, in one coherent architecture: error-driven, predictive, and associative learning; short-term memory; multiple kinds of highly specific, evolved training signals; novel forms of reinforcement learning that go beyond the standard temporal-difference paradigm; and specific affordances for operations like variable binding.
We will continue to research, experiment with, and improve our existing model, filling in gaps in our understanding of how the brain actually produces intelligent behavior.
Our knowledge of how the brain works is still incomplete. There are certain questions that would inform our computational model and potentially accelerate the process of building a functional system. We will work with various neuroscience labs to pose and answer these questions. We already have a fruitful collaboration with Karen Zito’s lab at UC Davis, testing a central hypothesis about how temporal differences in neural activity drive changes in synaptic strength, providing the primary engine of learning in our models.
In addition to basic research, the lab will be working on the following large engineering projects. Engineering and research are tightly coupled, each feeding back and answering questions for the other.
Unifying the Model
Dr. O’Reilly has many different models of all kinds of subsystems in the brain — attention, motivation, memory, learning, etc. We need to make these systems work with each other and combine them into a large unified model in order to realize their full potential. This will hopefully create a system that is much more powerful than the sum of its parts — one which can solve many problems that aren’t covered by other AI approaches.
We need to create a training environment that scales in complexity as our model becomes more capable. It must keep the project’s overarching goal in view and ensure that we are always testing and optimizing toward fully adaptive intelligence, rather than getting sidetracked by simpler problems with narrow solutions.
In addition to within-lifetime learning, evolutionary computation can be used to set a network’s hyperparameters. There are already many hyperparameters, and their number will grow drastically as we unify the model. We need to build a framework that can automatically search this space of possibilities, testing candidate configurations in our training environment to determine which lead to the best outcomes.
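As a sketch of what such a framework might look like, the toy example below evolves a small population of hyperparameter settings. The search space, mutation scheme, and fitness function here are all hypothetical stand-ins; in practice, fitness would come from training the actual model in its environment.

```python
import random

random.seed(0)

# Hypothetical search space; the real model has many more hyperparameters.
SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "gain": (0.5, 4.0),
    "inhibition": (0.1, 2.0),
}

def random_genome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def mutate(genome, scale=0.1):
    # Perturb one hyperparameter, clipped back into its allowed range.
    child = dict(genome)
    key = random.choice(sorted(SPACE))
    lo, hi = SPACE[key]
    child[key] = min(hi, max(lo, child[key] + random.gauss(0, scale * (hi - lo))))
    return child

def evolve(fitness, pop_size=20, generations=15, n_elite=5):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:n_elite]  # keep the best performers unchanged
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - n_elite)]
    return max(pop, key=fitness)

# Stand-in for "train the model in the environment and score it".
def toy_fitness(g):
    return -abs(g["learning_rate"] - 0.01) - abs(g["gain"] - 2.0)

best = evolve(toy_fitness)
```

Real systems of this kind (e.g., population-based training) are more elaborate, but the loop of evaluate, select, and mutate is the core that the framework would automate at scale.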
We will have to scale along several dimensions. We will want to run larger models; as the model’s capabilities increase, we will need more complex and computationally expensive training environments; and the demands of the evolutionary framework will keep growing. Our computational needs will therefore increase continuously, and we need a team to build out and manage this infrastructure.
We currently have a good system for visualizing smaller models. However, as the model complexity and size increases, we will need to develop new tools to inspect and understand how the system is working and identify places where there are problems.
Current Model Principles
The following core principles of the current model have strong potential for transformative computational abilities, beyond the scope of what other frameworks currently explore:
General-purpose error-driven learning, approximating backpropagation, is realized through temporal differences in activation state throughout the network (O’Reilly, 1996), and is now fully functional using discrete spiking neurons. Previous limitations from intrinsic positive feedback loops in bidirectionally-connected networks have been overcome, enabling robust learning to progress over time. Biologically, this depends on multiple time scales of synaptic adaptation, including structural spine-level processes operating on day-level scales, extending over developmental time-scales through even slower dynamics of synaptic proliferation and pruning.
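To illustrate the phase-based idea (not the full spiking mechanism), here is a minimal rate-coded sketch in the CHL/GeneRec family: weights change according to the difference between a “minus phase” (the network’s own expectation) and a “plus phase” (the observed outcome). In this single-layer case the rule reduces to the delta rule; the actual model uses spiking neurons, bidirectional connectivity, and multiple time scales of adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)
lr = 0.1
W = rng.normal(0, 0.1, size=(3, 2))  # 3 input units -> 2 output units

def forward(x, W):
    return 1.0 / (1.0 + np.exp(-(x @ W)))  # logistic rate-coded activation

x = np.array([1.0, 0.0, 1.0])    # input pattern
target = np.array([1.0, 0.0])    # outcome that clamps the plus phase

for _ in range(200):
    y_minus = forward(x, W)  # minus phase: the network's expectation
    y_plus = target          # plus phase: the observed outcome
    # Learning follows the temporal difference between the two phases.
    W += lr * np.outer(x, y_plus - y_minus)

# After training, the network's expectation matches the outcome.
```

The key property is that the error signal is carried entirely by a difference in activation states over time, with no separate backward pass, which is what makes a biologically grounded spiking implementation plausible.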
Spiking provides an “extra dimension” (time) for encoding information, which should be critical for driving dynamic attentional focus across the brain architecture to flexibly couple and decouple information processing across distributed brain areas. This has long been theorized, and ample neuroscience evidence exists; capturing such dynamics in the context of powerful learning models is challenging but now within reach, and will be a major initial focus.
Truly flexible, adaptive cognitive function depends on learning being able to shape the engagement of different brain areas based on current task demands. The ability of learning and cognitive control areas to drive attentional focus is the “lever,” but controlling this lever requires learning an internal “self model” to drive goal-driven action selection and cognitive sequencing. In effect, the human brain is a self-programming general-purpose information processing system, with powerful parallel processing subsystems organized sequentially over time to achieve arbitrary cognitive functions.
Existing models of prefrontal cortex, basal ganglia, dopamine, and other motivational systems, together with powerful episodic memory capacity via the hippocampus, provide the foundation for this self-programming capacity. Predictive learning of internal models of the external environment and an internalized “meta model” of the self drives the development of representations and dynamics that bootstrap all of this machinery into adaptive, flexible, successful behavior and cognitive function.
A longstanding challenge for all neural network models has been multi-task learning: the ability of one system to solve multiple distinct problems using the same hardware. This is challenging in part because learning on one task tends to catastrophically interfere with prior learning on other tasks, due to the opportunistic nature of error backpropagation learning. The multi-time-scale spine-based plasticity mechanisms in our model have the potential to overcome these problems, imparting a longer-term stability in the form of a structural “skeleton” within which faster learning operates. Furthermore, the ability of attentional dynamics to flexibly engage and (of critical importance) leave undisturbed different elements of the overall architecture should be important as well.
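Catastrophic interference itself is easy to demonstrate in a toy error-driven network (this sketch is purely illustrative, not our model): training sequentially on a second task that shares an input unit with the first sharply degrades performance on the first.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W, x, t, steps=300, lr=0.1):
    # Simple delta-rule training of a single-layer logistic network.
    for _ in range(steps):
        W += lr * np.outer(x, t - sigmoid(x @ W))
    return W

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(3, 2))

# Two tasks share the middle input unit but demand opposite outputs.
x_a, t_a = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0])
x_b, t_b = np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0])

W = train(W, x_a, t_a)
err_before = np.mean((sigmoid(x_a @ W) - t_a) ** 2)

W = train(W, x_b, t_b)  # sequential training on task B...
err_after = np.mean((sigmoid(x_a @ W) - t_a) ** 2)
# ...overwrites the shared weights, so error on task A rises sharply.
```

The structural “skeleton” and attentional mechanisms described above are hypothesized to prevent exactly this kind of overwriting of shared weights.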
Each of these model components represents a unique feature of our approach relative to other existing frameworks, and together they reflect the cumulative results of 25 years of research. The recent breakthroughs in spiking-based learning mechanisms, coupled with advances in predictive learning and several component brain models, provide a unique opportunity to advance this framework and truly test its potential for achieving AGI.
Differences from Other Camps
The following technical points describe how the Obelisk model is different from other AI approaches:
Most existing AI models are trained by “end-to-end backpropagation,” which means the modeler externally drives all of the learning through high-level target outputs and objective functions, rather than the model driving its own learning. A key goal of ours, by contrast, is autonomous AGI, based on the concept of an adaptive, self-programming cognitive architecture.
Many existing models that address higher-level cognitive function employ some form of symbol processing based on role-filler bindings, for example through Plate’s holographic reduced representations via circular convolution over tensor products (e.g., Eliasmith, Smolensky). These are largely neural implementations of existing symbol processing capacities that can be manually configured to solve challenging problems, but fail to exhibit the critical self-programming autonomy that characterizes human intelligence. Furthermore, in most such models, it is not clear that re-implementing symbol processing in neurons gains much over the existing purely symbolic implementation.
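For concreteness, Plate’s HRR binding operation is simple to sketch with NumPy: circular convolution binds a role to a filler, circular correlation approximately unbinds it, and several bindings can be superposed in one fixed-width vector (the role and filler names here are just illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024  # high dimensionality keeps the unbinding noise small

def vec():
    # Random HRR vector with expected unit length.
    return rng.normal(0, 1 / np.sqrt(n), size=n)

def bind(a, b):
    # Circular convolution via FFT: Plate's role-filler binding.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Circular correlation: approximate inverse of binding with `a`.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

agent, patient = vec(), vec()  # roles
alice, bob = vec(), vec()      # fillers

# Superpose two role-filler bindings into a single vector.
sentence = bind(agent, alice) + bind(patient, bob)

# Unbinding with a role recovers a noisy version of its filler:
# similar to `alice`, nearly orthogonal to `bob`.
recovered = unbind(sentence, agent)
```

Note how the machinery must be configured by hand: which roles exist, what gets bound, and when to unbind are all specified by the modeler, which is precisely the self-programming gap discussed above.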
Eliasmith’s spiking models are perhaps the most prominent examples of spiking networks that achieve significant cognitive function. However, these models are engineered based on rate-code-like continuous-valued ODE equations, which are then translated into an equivalent spiking form, and no significant backpropagation-level form of on-line learning takes place within the spiking network. This eliminates the ability of the learning mechanism to take advantage of the unique signaling properties of spiking relative to rate codes. By contrast, our framework learns directly using spiking signals in an on-line manner, enabling the models to shape complex dynamics in an opportunistic fashion to take full advantage of the extra time dimension afforded by spikes.
See our page on AI Safety.