(Image © chiaink, http://chiarina.com/)
Imagine you have a very simple “knowledge memory” that stores knowledge as an associative array (or map) of “key => value” pairs. This memory supports two operations:
* get: retrieve the value for a given key.
* keyIterator: iterate over all keys present in the memory.
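A minimal sketch of such a memory, assuming nothing beyond the two operations above (the class and method names are just illustrative):

```python
class KnowledgeMemory:
    """A minimal key => value knowledge memory (sketch)."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        """Retrieve the value for a given key."""
        return self._store[key]

    def keyIterator(self):
        """Iterate over all keys present in the memory."""
        return iter(self._store)


memory = KnowledgeMemory()
memory.put("Alice", 34)
memory.put("Bob", 71)
print(memory.get("Alice"))           # -> 34
print(sorted(memory.keyIterator()))  # -> ['Alice', 'Bob']
```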
If we want to learn new things from this data (i.e. to generalize), we need to inspect keys and values and try to find interesting correlations, verify certain hypotheses, etc.
Example: we can store many person names and their ages (name => age), and draw conclusions (or at least hypotheses) such as: long names belong to older people, or names starting with “A” have fallen out of fashion in the last 10 years, or whatever.
In order to do this we need to choose certain keys, check their values, come up with models or hypotheses, then get more keys and values, etc.
Doing this with an iterator seems awfully wasteful: we need to iterate over and over, disregarding most of the keys, just to get to the ones that seem interesting for the current hypothesis being verified.
It seems that in order to learn from associative arrays we first need a good key sampler, one that can be biased in configurable ways so that it yields interesting keys with high probability. What counts as interesting will depend on the aspect we are trying to learn at a given moment, the hypothesis we are trying to check, etc.
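One way to sketch such a configurable sampler (the function names and the particular “interestingness” weighting are my own assumptions, not anything prescribed by the argument): weight each key by a score reflecting the current hypothesis, then sample keys in proportion to that score.

```python
import random

def biased_key_sampler(keys, interest, n=5, rng=None):
    """Sample n keys with probability proportional to interest(key).

    `interest` encodes the current hypothesis: keys that look relevant
    to it get higher weight and are therefore yielded more often.
    """
    rng = rng or random.Random()
    keys = list(keys)
    weights = [interest(k) for k in keys]
    return rng.choices(keys, weights=weights, k=n)

# Current hypothesis: names starting with "A" are the interesting ones.
names = ["Alice", "Anna", "Bob", "Carol", "Andrew", "Dave"]
sample = biased_key_sampler(
    names,
    interest=lambda k: 10.0 if k.startswith("A") else 1.0,
    n=4,
    rng=random.Random(0),
)
print(sample)  # mostly A-names, with high probability
```

Swapping in a different `interest` function re-biases the sampler toward whatever we are trying to learn at the moment, without touching the memory itself.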
This sampler strikes me as similar to the human process of dreaming. By this I mean that we browse (i.e. sample) the space of possible events of interest, wandering randomly back and forth before jumping to a new area and wandering some more. As we do this we keep retrieving values, checking how the stored memories behave at each location… Of course dreaming goes well beyond this, but it seems like an interesting, if crude, model.
In the previous example, we would need to “dream up” names, initially at random, check their ages, find some interesting hypothesis, and then dream up more names in the vicinity of that hypothesis or correlation to check it.
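The loop above can be sketched as follows; a minimal version assuming a plain dict as the memory, with a hypothetical scorer that counts how often the dreamed-up samples agree with the “long names belong to older people” hunch (all names and thresholds here are made up for illustration):

```python
import random

def dream_and_check(memory, sample_keys, hypothesis, rounds=3):
    """Repeatedly sample ('dream up') keys, retrieve their values, and
    measure how often the hypothesis holds on the sampled pairs."""
    hits, total = 0, 0
    for _ in range(rounds):
        for key in sample_keys(memory):
            value = memory[key]
            hits += hypothesis(key, value)
            total += 1
    return hits / total

ages = {"Alexandrina": 82, "Bartholomew": 75, "Bo": 21, "Ann": 19, "Maximiliano": 68}
rng = random.Random(1)
score = dream_and_check(
    ages,
    sample_keys=lambda m: rng.choices(list(m), k=3),
    hypothesis=lambda name, age: (len(name) > 6) == (age > 50),
)
print(round(score, 2))  # fraction of dreamed-up samples consistent with the hypothesis
```

A smarter version would feed the running score back into `sample_keys`, biasing the next round of dreaming toward the vicinity of the hypothesis.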
Note that without this ability to browse the space of keys in an intelligent way, it seems hard to think of an even remotely efficient learning algorithm…
If we replace the associative array with an associative memory, some things get better, but I think we still require dreaming. Without explicitly defining an associative memory, note that many instances of associative memories have no way to iterate over keys:
* a self-organizing (Kohonen) map
* a traditional map in which keys are fancy hashes of an original key: hashes that are locality-preserving in some clever way but cannot be reversed.
* some human memory capacities also seem to lack a way to iterate over keys (e.g. try listing all words that you know in a given language).
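The second case can be sketched as follows. For brevity this uses an ordinary irreversible hash rather than a locality-preserving one, so it only illustrates the “cannot be reversed, hence no key iteration” property:

```python
import hashlib

class HashedMemory:
    """Associative memory whose stored keys are one-way hashes of the
    originals: get(key) works, but there is no keyIterator, because
    the original keys cannot be recovered from the hashes."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _hash(key):
        return hashlib.sha256(key.encode()).hexdigest()

    def put(self, key, value):
        self._store[self._hash(key)] = value

    def get(self, key):
        return self._store[self._hash(key)]


mem = HashedMemory()
mem.put("Alice", 34)
print(mem.get("Alice"))  # -> 34
# Internally only hashes are stored: "Alice" cannot be listed back out.
```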
In this case learning and generalization happen automatically by simply adding new patterns, so no “dreaming” is necessary.
However, intuitively it seems that dreaming is still necessary here to re-learn the representation itself:
* prune and derive new features in a Kohonen map
* add new hashes, or modify the hashing
* “make things click” in a human memory :)