This page contains a list of questions that in-depth knowledge of neuroscience – and related areas such as psychology and computational neuroscience – may help to answer.
These are areas where expert knowledge (at any scale, from microscopic to behavioural or even group dynamics) could reveal clues about micro-scale computational implementation in the brain. For example, the Bellman equation has been shown to agree well with the influence of reward in animal and human behavioural studies.
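As a concrete illustration (a minimal sketch only, not a claim about neural implementation), the Bellman equation for a deterministic Markov Decision Process can be written as a value-iteration sweep. The toy chain world below is invented for the example:

```python
def bellman_sweep(V, states, actions, transition, reward, gamma=0.9):
    """One value-iteration sweep: V(s) <- max_a [ R(s,a) + gamma * V(s') ]."""
    return {s: max(reward(s, a) + gamma * V[transition(s, a)] for a in actions)
            for s in states}

# Toy chain world: states 0..3, move left or right, reward for reaching state 3.
states = range(4)
actions = (-1, +1)
transition = lambda s, a: min(max(s + a, 0), 3)
reward = lambda s, a: 1.0 if transition(s, a) == 3 else 0.0

V = {s: 0.0 for s in states}
for _ in range(200):
    V = bellman_sweep(V, states, actions, transition, reward)
# V converges to approximately {0: 8.1, 1: 9.0, 2: 10.0, 3: 10.0}
```

Repeated sweeps converge because future reward is discounted by gamma; the same recursive structure is what behavioural reward studies have been compared against.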
We believe that thinking or talking about these issues is likely to lead to useful insight. In particular, exceptional cases – such as disease, surgery, or abnormalities of any kind – are very likely to reveal clues.
Of course, it may be that some or all issues can’t be approached in this way. These can then be quickly dismissed, allowing us to focus on the issues where neuroscience is relevant.
These questions are motivated by some background assumptions. For an overview of these, see the start of our ‘how to build a general intelligence’ series.
1. Integration of bi-directional information
This can also be conceived as an inference task, implying an ongoing integration of prior knowledge and new observations. It can also be thought of as a combination of top-down information (e.g. historical evidence, learned models, and memory) and bottom-up perception (i.e. information from sensors). A possible theoretical basis is Loopy Belief Propagation, but we don’t have to be very specific about how it happens. There are two key objectives:
- Improve robustness of perception and handle uncertainty and partial observability
- Incrementally and practicably integrate larger perceptual fields (more data) in a coherent, hierarchical model, thus avoiding the curse of dimensionality
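To make the bi-directional idea concrete, here is a minimal sketch (invented example, not a model of cortex) of one sum-product step on a two-node factor graph: a top-down prior over a hidden cause is fused with a bottom-up likelihood message from a sensor. On a tree this single pass is exact; in loopy graphs these messages must be iterated with no convergence guarantee, which is where catastrophic feedback effects can arise:

```python
import numpy as np

def fuse(prior, likelihood, compat):
    """Combine a top-down prior over a hidden cause with a bottom-up
    likelihood message from a sensor, via one sum-product step on a
    two-node factor graph. compat[i, j] is the compatibility between
    cause state i and sensor state j."""
    bottom_up = compat @ likelihood   # message from the evidence towards the cause
    belief = prior * bottom_up        # integrate with the top-down prior
    return belief / belief.sum()      # normalise to a posterior

# Weak (uninformative) prior; fairly reliable sensor favouring state 0.
prior = np.array([0.5, 0.5])
likelihood = np.array([0.8, 0.2])
compat = np.array([[0.9, 0.1],
                   [0.1, 0.9]])
posterior = fuse(prior, likelihood, compat)   # -> [0.74, 0.26]
```

Note how a sharper prior would pull the posterior back towards internal expectations, and a flat likelihood would leave the prior unchanged; dominance of either direction is exactly the imbalance described below.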
This process is inherently prone to catastrophic feedback effects, in which data flow breaks down in one or more of the following ways. Human intelligence is remarkably robust to these failure modes, although there may be conditions, such as depression, that are analogous:
- Small cycles (loops) in sub-graphs, or disconnected components that don’t influence each other or take account of external, real-world events
- Dominant data flow in one direction only (e.g. dominant feedback implies that external evidence is ignored in favour of internal models; conversely, dominant feedforward implies “living in the moment”, failing to learn from or consider prior experiences – c.f. repeating failed strategies)
- Local models that are incoherent with other local models – perhaps analogous to cognitive dissonance
- Which established neuroscience models describe this integration process?
- Are there theories that explain how integration is balanced and made robust?
- Are there diseases that disturb or unbalance this process?
- What are likely symptoms or effects of failure modes – e.g. behavioural?
- Are there any therapies for these conditions? How do they actually work?
- Is anything known about the micro-scale implementation (i.e. connectionist, or biochemical signalling models)?
2. Variable-order prediction capabilities
Assume a world modelled as a Markov Chain, or (assuming also an agent) Markov Decision Process (MDP). Predict t+1 given a potentially unbounded history of observations from t-0, t-1, … , t-n. In practice recalled history is bounded by memory capacity. The trick is to prioritize the retention of observations or hidden states that have optimum predictive power or other utility.
A simple first-order model would only use the state at time t to predict t+1. A variable order model adapts history content to optimize prediction accuracy (and other objectives). In the case of an agent, better prediction allows more optimal action selection.
Given a sequence [X,A,B,C,Y,A,B,C] (repeating), predict the next letter given prior letters.
This is a task that humans can do easily with practice and repetition. However, as patterns become more complex, it becomes difficult for humans to do well. Perhaps it is easier with sequences of sounds rather than images?
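A minimal sketch of variable-order prediction (an invented illustration, not a proposed neural mechanism) is a longest-matching-context predictor over the example sequence above. It also shows why a first-order model fails here: the context C alone is ambiguous (followed sometimes by Y, sometimes by X), whereas a longer context disambiguates:

```python
from collections import Counter, defaultdict

def train(seq, max_order=4):
    """Count next-symbol frequencies for every context of length <= max_order."""
    model = defaultdict(Counter)
    for i in range(len(seq)):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            model[tuple(seq[i - k:i])][seq[i]] += 1
    return model

def predict(model, history, max_order=4):
    """Predict the next symbol using the longest context seen in training."""
    for k in range(min(max_order, len(history)), 0, -1):
        context = tuple(history[-k:])
        if context in model:
            return model[context].most_common(1)[0][0]
    return None

seq = list("XABCYABC" * 4)   # the repeating example sequence
model = train(seq)
# First-order context ('C',) has two possible successors (Y and X), but the
# order-4 contexts are unambiguous: after X,A,B,C comes Y; after Y,A,B,C comes X.
```

The memory cost grows with the number of contexts retained, which is the prioritization problem described above: keep only the contexts with real predictive power.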
- What is known about human capabilities of this sort? Obviously we can handle simple tasks such as the example above. What are the limits?
- Is this a universal feature of the Neocortex? If not, which brain structures are involved? (Perhaps the Neocortex has some rudimentary ability that improves with practice / exposure, while rapid learning of patterns also uses other brain structures.)
- Are we capable of learning to do this better? Are there known limits (e.g. working memory’s capacity of roughly 4-7 similar items, depending on the estimate)?
- How fast do we learn complex variable-order sequences? To what extent does repetition or other training help? How do we learn best?
- When we are told the clues to look for, how does this change our behaviour and abilities?
- Does this ability make use of working memory? Is it possible to perform these tasks without working memory?
- Does ability improve with practice? Under what conditions?
- Are our abilities dependent on sensory modality?
3. Mental-Simulation Control Behaviours
There is evidence that many sophisticated animal and human behaviours involve the use of mental simulation for planning and action selection. Further, the evidence seems to suggest that alternative trajectories – plans – are sequentially explored and then evaluated, prior to selection and execution of a simulated plan in the real world. We would like to know more about the control of mental simulations so that we can build an artificial version.
When planning actions, a number of control functions must be performed. A non-exhaustive list of open questions includes:
- When does a plan-sim start?
- How are the initial conditions (start state) defined?
- How do you deal with forks? Are trees of potential plans explored?
- How is the anticipated reward stored for comparison with other plans?
- How do you remember the details of explored plans?
- When does the exploration-simulation end? How is this determined? What events can trigger the end?
- How are temporary goals defined and satisfied?
- What brain structures are involved?
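The questions above can be grounded in a toy engineering analogue (a deliberately naive sketch; every name and the toy world are invented): sequentially explore candidate plans in simulation, score each by anticipated reward, and select the best before any real execution:

```python
import itertools

def plan_by_simulation(start, actions, simulate, reward, depth=3):
    """Enumerate candidate action sequences (explore the 'forks'), roll each
    one out in mental simulation from the start state, and keep the plan
    with the highest anticipated total reward."""
    best_plan, best_total = None, float('-inf')
    for plan in itertools.product(actions, repeat=depth):
        state, total = start, 0.0
        for action in plan:
            state = simulate(state, action)   # imagined transition, not executed
            total += reward(state)
        if total > best_total:
            best_plan, best_total = plan, total
    return best_plan, best_total

# Toy world: a position on a number line, with a goal at 5.
simulate = lambda s, a: s + a
reward = lambda s: -abs(5 - s)   # closer to the goal is better
plan, anticipated = plan_by_simulation(0, (1, 2), simulate, reward)
# plan == (2, 2, 1), anticipated == -4.0
```

Even this toy version makes the open questions vivid: the start state, the termination depth, and the storage of per-plan rewards are all explicit parameters here, whereas in the brain each must be implemented and controlled somehow.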
4. Supervised execution of simulated behaviour
Extending the notion of mental simulation for the purpose of planning, when trying to execute a selected, simulated plan some additional questions arise:
- How is a plan selected, and how is that selection maintained during execution?
- How are execution-failures or loss-of-relevance detected?
- How is the simulated plan tied to the real-world equivalent?
- How do we compare actual rewards against simulated rewards, and update expectations?
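A minimal engineering sketch of supervised execution (invented names and toy world, offered only as an analogue): carry the simulated trajectory along as a set of expectations, compare each observed outcome against it, and abort to trigger re-planning when they diverge:

```python
def execute_with_monitoring(plan, expected, step, observe, tolerance=0.5):
    """Execute a selected plan while supervising it: after each action,
    compare the observed state with the state the simulation predicted,
    and abort (to trigger re-planning) if they diverge."""
    for action, predicted in zip(plan, expected):
        step(action)                                  # act in the real world
        if abs(observe() - predicted) > tolerance:
            return False   # execution failure / loss of relevance detected
    return True            # plan completed as simulated

# Toy world on a number line, matching the simulated expectations.
world = {'pos': 0}
step = lambda a: world.update(pos=world['pos'] + a)
observe = lambda: world['pos']
ok = execute_with_monitoring((2, 2, 1), (2, 4, 5), step, observe)   # True
```

This makes the binding question explicit: the simulated plan and the real-world rollout are tied together here by a shared state representation, which is exactly what is unclear in the hierarchical, abstract-to-concrete case discussed below.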
In particular, there are representational questions about the ‘unfolding’ of abstract plans into more detailed, concrete form during execution. If we assume the Neocortical representation is hierarchical, and the details are not fully defined at the time of plan selection, then there must be a process for turning abstract sequences of behaviour into specific, detailed actions that satisfy the abstract objectives. Errors in the translation process may be revealing. For example, I managed to take a picture of a time when I performed a task correctly, but some of the details got lost in translation (see picture).
5. Innovation
An innovation is a new behaviour that is produced by an agent (including a person or animal) either in response to external pressures, or due to an internal drive to vary behaviour.
What are the most established and highly regarded computational models of innovation in humans? What do they assume about the underlying computational models of Neocortex and other brain structures?
Possible sources of innovation:
- Re-combination of novel experiences (only?)
- Is generation of unobserved action-sequences possible?
- The role of simulation and imitation (transfer of actions from externals)?
- Are innovations simulated first?
- The role of randomness – how much, how often, how instrumental?
- The exploration-exploitation dilemma