So what did we learn from surveying a bunch of modeling software? First, we learned about some important areas where the modeling tools fall short - most notably network geometry, where the definition of modules and their interconnections is cumbersome at best. We also learned about the relationship between attention, memory, phase encoding, and graphs - which is a big deal in current research, and fertile ground for publication. We also saw the state of the art in "presentation graphics" for neural networks - and compared to the AI-enhanced images of fruit flies, the presentation graphics are sadly lacking. They are mundane when they could be slick. If you're a graphics expert or even a reactive GUI designer, there are some very cool (and easy) projects available in this space.
We were unable to model a full timeline embedding, because we couldn't find a neural network simulator that can handle linear timelines, compactified geometries, and thermodynamic networks at the same time. Even trying to implement a basic attention system was difficult for us, because we didn't want to use 90 layers of multi-headed transformers to accomplish what should be a very simple task - selecting a subset from a set according to some criteria.
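To make the complaint concrete: the "very simple task" is just scoring the members of a set and keeping the best few. Here is a minimal sketch of attention-as-selection in plain Python with NumPy - softmax weights over scores, then take the top k. The function name and the example scores are mine, purely illustrative; this is not code from any of the simulators discussed above.

```python
import numpy as np

def select_subset(items, scores, k):
    """Toy attention: softmax the scores, return the k items
    with the largest weights (strongest first)."""
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    top = np.argsort(weights)[::-1][:k]       # indices of the k largest
    return [(items[i], float(weights[i])) for i in top]

picked = select_subset(["a", "b", "c", "d"], [0.1, 2.0, 0.5, 1.5], k=2)
print([name for name, _ in picked])  # ['b', 'd']
```

A transformer head does essentially this, with the scores computed as query-key dot products - but for many modeling tasks the scoring rule is known up front, and ten lines suffice.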
I'll let you in on a secret. It's advance, unpublished information. I wrote a simulator myself that handles what we need. It does thermodynamic networks at the same time it solves differential equations. It's very, very fast; the core engine is written in x86 assembler. It does its own memory management, utilizing pages of input and output events. It interfaces with Annie, and I'm currently working on getting it to interface with other tools. It's not mature like Nest; it's bleeding edge. There are still some bugs, but it does stuff no other simulator can do. Best of all, it handles geometry. Intrinsically, naturally, from the ground up. It builds very realistic neural nuclei that have all the basic motifs associated with a connectome. And I'm starting to use this tool to do exactly what we're talking about on these pages - inform the neuroscience and machine learning communities about new ways of doing things that show promise but aren't perhaps directly and immediately related to the relentless push for performance. Such areas include, for example, the confluence between dynamics and predictive coding. To begin with, such an investigation will necessarily be slow, because we don't know what we're doing yet. But we know there are certain motifs that are better suited to predictive architectures than others, and it's much faster and easier to investigate with simulators than it is with living creatures. When the code is ready I'll post it on GitHub as open source. In the meantime, let me show you what I'm doing with it.
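To give a flavor of what "network structure plus differential equations in the same loop" means, here is a generic toy in Python - emphatically not the author's engine, just the standard leaky-integrator idea: each unit's state obeys a small ODE driven by the weighted activity of the others, integrated with forward Euler. The connectivity matrix, time constants, and drive are all made up for illustration.

```python
import numpy as np

# Toy recurrent network with continuous dynamics:
#   tau * dv/dt = -v + W @ r + I,  with rates r = max(v, 0).
# The graph (W) and the differential equation are advanced together.
rng = np.random.default_rng(0)
n = 8
W = rng.normal(0.0, 0.3, (n, n))   # hypothetical connectivity motif
v = np.zeros(n)                    # membrane-like state per unit
I = np.full(n, 0.5)                # constant external drive
tau, dt = 10.0, 0.1                # time constant and Euler step

for _ in range(1000):
    r = np.maximum(v, 0.0)         # rectified firing rates
    v += (dt / tau) * (-v + W @ r + I)

print(np.round(v, 3))              # settled state after integration
```

A real simulator replaces the dense matrix with structured nuclei and the Euler step with a proper event-driven or adaptive integrator, but the coupling of graph and dynamics is the same shape.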
Visual System Results
Oculomotor System Results
Visually Guided Reflexes
Attention
Visual Memory
Visual Cognition
Scene Mapping
Navigation