This post asks some questions about the agency of hierarchical action selection. We assume various pieces of HTM / MPF canon, such as a cortical hierarchy.
Agency
The concept of agency has various meanings in psychology, neuroscience, artificial intelligence and philosophy. The common element is control over a system, with varying qualifiers about which entities are aware that control is available or being exercised. Although “agency” has several definitions, let’s use this one I made up:
An agent has agency over a state S, if its actions affect the probability that S occurs.
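To phrase that more concretely (again my own formalisation, not a standard one): an agent with an available action a has agency over S if P(S | a) ≠ P(S | not a), i.e. whether or not the agent acts changes the probability that S occurs.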
Hierarchical Selection thought experiment
Now let’s consider a hierarchical representation of action-states (actions and states encoded together). Candidate actions are therefore synonymous with predictions of future states. Let’s assume that action-states can be selected as objectives anywhere in the hierarchy. More complex actions are represented as combinations or sequences of simpler action-states defined in lower levels of the hierarchy.
Let’s say an “abstract” action-state at a high level in the hierarchy is selected. How is the action-state executed? In other words, how is the abstract state made to occur?
To exploit the structure of the hierarchy, let’s assume each vertex of the hierarchy re-interprets selected actions. This translates a compound action into its constituent parts.
How much control does higher-level selection exert over lower-level execution? For simplicity let’s assume there are two alternatives:
1. High level selection biases or influences lower level (weak control)
2. Lower levels try to interpret high level selections as faithfully as possible (strong control)
We exclude the possibility that higher levels directly control or subsume all lower levels due to the difficulty and complexity of performing such a task without the benefit of hierarchical problem decomposition.
If high levels do not exert strong control over lower levels, the probability of faithfully executing an abstract plan should be small due to compound uncertainty at each level. For example, let’s say the probability of each hierarchy level correctly interpreting a selected action is x. The height of the hierarchy h determines the number of interpretations between selection of the abstract action and execution of relevant concrete actions. The probability of an abstract action a being correctly executed is:
P(a) = x^h
So for example, if h = 10 and x = 0.9, P(a) = 0.9^10 ≈ 0.35.
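This compounding is easy to verify numerically. Here is a minimal sketch in Python (the values of x and h are illustrative only):

```python
def p_executed(x: float, h: int) -> float:
    """Probability that an abstract action survives h independent
    re-interpretations, each faithful with probability x."""
    return x ** h

# Weak control: each level only loosely follows the level above.
for x in (0.9, 0.99):
    for h in (2, 5, 10, 20):
        print(f"x={x}, h={h}: P(a) = {p_executed(x, h):.3f}")

# x=0.9,  h=10 -> ~0.349: deep hierarchies fail under weak control.
# x=0.99, h=10 -> ~0.904: near-faithful interpretation scales much better.
```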
We can see that in a hierarchy with a very large number of levels, the probability of executing any top-level strategy will be very small unless each level interprets higher-level objectives faithfully. However, “weak control” may suffice in a very shallow hierarchy.
Are abstract actions easy to execute?
Introspectively I observe that highly abstract plans are frequently and faithfully executed without difficulty (e.g. it is easy to drive a car to the shops for groceries, something I consider a fairly abstract plan). Given the apparent ease with which I select and execute tasks with rewards delayed by hours, days or months, it seems I have good agency over abstract tasks.
According to the thought experiment above, my cortical hierarchy must either be very shallow or higher levels must exert “strong control” over lower levels.
Local Optimisation
Local processes may have greater biological plausibility, because they avoid the difficulty of routing highly specific signals to the right places across the whole hierarchy. Hopefully they also reduce the amount of wiring required.
What would a local implementation of a strong control architecture look like? Each vertex of the hierarchy would receive one or more objective action-states as input (when no input is received, no output is produced). Each vertex would produce objective action-states as output, expressed in terms of action-states in the level below. The hierarchical encoding of the world would thus be undone incrementally, level by level.
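As a very rough sketch of this idea (hypothetical names throughout; a hand-written decode table stands in for whatever learned mapping a real vertex would use):

```python
from typing import Dict, List, Optional

class Vertex:
    """One vertex of the hierarchy: translates objective action-states
    from the level above into action-states of the level below."""

    def __init__(self, decode: Dict[str, List[str]]):
        # Learned mapping from a compound action-state to its parts.
        self.decode = decode

    def interpret(self, objective: Optional[str]) -> List[str]:
        # No input => no output: the vertex is idle unless given an objective.
        if objective is None:
            return []
        # Strong control: unpack the objective as faithfully as possible,
        # passing it through unchanged if it is already concrete.
        return self.decode.get(objective, [objective])

# Toy two-level example: an abstract plan is undone incrementally.
top = Vertex({"drive_to_shops": ["start_car", "navigate", "park"]})
mid = Vertex({"start_car": ["insert_key", "turn_key"],
              "navigate": ["steer", "brake", "accelerate"]})

plan = top.interpret("drive_to_shops")
concrete = [a for step in plan for a in mid.interpret(step)]
print(concrete)  # ['insert_key', 'turn_key', 'steer', 'brake', 'accelerate', 'park']
```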
Cortex Layer 6
In the cortex, feedback from higher regions arrives partly via layer 6, which makes it a natural candidate for propagating selected action-states down the hierarchy. However, since cortex layer 5 seems to be the direct driver of motor actions, it may be that layer 6 somehow controls cortex layer 5 in the same or lower regions, perhaps via some negotiation with the Thalamus.
[Figure: Adapted from Numenta CLA Whitepaper by Gideon Kowadlo]
There is some evidence that dopaminergic neurones in the Striatum are involved in agency learning, but this doesn’t necessarily contradict this post, because that process may modulate cortical activity via the Thalamus. Cortex layer 6 may still require some form of optimisation to ensure that higher hierarchy levels have agency over future action-states.
To conclude: This is all speculation – comments welcome!