Now that we've surveyed the current literature and some of the modeling tools available to us mere mortals, we can begin casting what we know about phase encoding and attention in a more rigorous light. Among the relevant concepts are wavelet transforms for phase encoding, and graph representations for the short-term selective attention involved in scene processing and episodic memory. By considering concepts like these in engineering terms rather than neuroscience vocabulary, we get closer to the goal of neuroscience and machine learning informing each other.
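To make the wavelet idea concrete, here is a minimal sketch of phase extraction with a complex Morlet wavelet, the standard tool for pulling oscillatory phase out of a noisy signal. The function name `morlet_phase` and the parameter choices (10 Hz target, 7 cycles) are illustrative, not from any particular neuroscience library.

```python
import numpy as np

def morlet_phase(signal, fs, freq, n_cycles=7):
    """Estimate the instantaneous phase of `signal` at `freq` Hz
    by convolving with a complex Morlet wavelet."""
    sigma = n_cycles / (2 * np.pi * freq)          # Gaussian envelope width (s)
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)   # wavelet support
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.angle(analytic)                      # phase in radians

fs = 500.0                                         # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
lfp = np.sin(2 * np.pi * 10 * t)                   # a clean 10 Hz test oscillation
phase = morlet_phase(lfp, fs, freq=10)
```

Differentiating the unwrapped phase recovers the oscillation frequency, which is one quick sanity check that the phase estimate is meaningful.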
In this section we'll highlight the importance of dynamics in brain function. One of the most promising areas of current research is the unraveling of brain electrical activity. We know from roughly 200 years of research that understanding this piece will take much more than putting electrodes in the brain. Invasive brain monitoring has a difficult history and raises many ethical issues, and the truth is, many scientists simply don't have the mathematical background to make full use of the results. Advanced mathematical concepts are needed to understand the relationships between structure and data, especially in areas like signal processing and information representation. One should bear in mind that the Hopfield network is only about 40 years old, and the free-energy formalism only about 20. (That's barely enough time for some grant money. ;)
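Since the Hopfield network came up, a minimal sketch helps show why it matters for dynamics: recall is literally a dynamical process, with the state descending an energy function until it settles in a stored attractor. This is a textbook binary Hopfield network in numpy; the sizes and the amount of corruption are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian outer-product learning; self-connections zeroed."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, max_steps=10):
    """Synchronous updates; each step lowers the energy E = -0.5 * s.T @ W @ s
    until the state stops changing (a fixed point of the dynamics)."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

n = 64
pattern = rng.choice([-1, 1], size=n)      # one stored memory
W = train(pattern[None, :])
noisy = pattern.copy()
flipped = rng.choice(n, size=8, replace=False)
noisy[flipped] *= -1                       # corrupt 8 of 64 bits
recovered = recall(W, noisy)
```

With a single stored pattern and modest corruption, the dynamics snap back to the memory in one or two steps; the interesting (and mathematically demanding) questions start when many correlated patterns compete.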
In this last section we'll introduce information geometry, a topic vital for any serious study of neural networks. It's no joke, though: you can't be a slouch if you want to tackle it. It is arguably the most complete model to date of the function and behavior of the human retina, down to the smallest perturbations of receptive-field directionality with moving images (or during eye movements). It requires a thorough knowledge of differential geometry, as well as statistics and stochastic processes. Information arrives in real time, and it's noisy. Somehow it has to be mapped onto computational manifolds that are useful for neural networks. There is a great deal of information, and it's all interrelated. We need a consistent encoding in which objects and events (vertices) are represented the same way as relationships (edges). Let's begin.
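Before diving in, the encoding requirement above can be sketched in a few lines: treat relationships as first-class nodes, so that an edge is just another vertex that happens to point at its endpoints. The `Node` class and `link` helper here are hypothetical names for illustration, not an established library API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One uniform representation for objects, events, and relationships."""
    label: str
    attrs: dict = field(default_factory=dict)
    endpoints: tuple = ()   # empty for objects/events; (a, b) for relations

def link(a, b, label):
    """Create a relationship node connecting two existing nodes."""
    return Node(label, endpoints=(a, b))

cup = Node("cup")
table = Node("table")
on = link(cup, table, "on")                   # the relationship is itself a node
# Because `on` is an ordinary Node, relations can participate in relations:
saw = link(Node("observer"), on, "perceived")
```

The payoff of the uniform encoding is visible in the last line: since edges are nodes, relationships can themselves be related to, which is exactly what episodic memory seems to demand.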