Dreaming is Important to Generalize Efficiently

(Image © chiaink, http://chiarina.com/)

 

Imagine you have a very simple “knowledge memory” that stores knowledge as an associative array (or map) of “key => value” pairs. This memory supports two operators (a minimal sketch in code follows this list):
get: retrieve the “value” for a given “key”.
keyIterator: iterate over all keys present in the memory.
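
To make the setup concrete, here is a minimal Python sketch of such a memory, pre-filled with the name => age data used in the example below. The class and method names (KnowledgeMemory, get, key_iterator) are purely illustrative, not anything standard.

```python
# Minimal sketch of the "knowledge memory": an associative array exposing only
# get() and a key iterator. All names here are hypothetical, for illustration.
class KnowledgeMemory:
    def __init__(self, pairs):
        self._store = dict(pairs)

    def get(self, key):
        """Retrieve the value stored for `key`."""
        return self._store[key]

    def key_iterator(self):
        """Iterate over all keys present in the memory."""
        return iter(self._store)

# A toy name => age memory, as in the example below.
memory = KnowledgeMemory({"Agnes": 71, "Aiden": 6, "Bob": 34, "Margaret": 83})
```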

If we want to learn new things from this data (i.e. to generalize), we need to inspect keys and values, try to find interesting correlations, verify certain hypotheses, etc.

Example: we can store many people’s names and their ages (name => age), and draw conclusions (or at least hypotheses) such as: long names belong to older people, names starting with “A” have gone out of fashion in the last 10 years, or whatever.

In order to do this we need to choose certain keys, check their values, come up with models or hypotheses, then get more keys and values, etc.

Doing this with an iterator seems awfully wasteful: we have to iterate over and over, discarding most of the keys, just to reach the ones that are interesting for the hypothesis currently being verified.

It seems that in order to learn from associative arrays we first need a good key sampler, one that can be biased in configurable ways so that it yields interesting keys with high probability. What counts as interesting will depend on the aspect we are trying to learn at a given moment, the hypothesis we are trying to check, etc.
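
As a rough sketch of what such a biased sampler could look like (reusing the hypothetical KnowledgeMemory from above; the interestingness function stands in for whatever the current hypothesis cares about):

```python
import random

# Hypothetical biased key sampler: draw keys with probability proportional to an
# "interestingness" score supplied by the hypothesis currently being checked.
def sample_keys(memory, interestingness, n=10):
    keys = list(memory.key_iterator())            # a real system would avoid this
    weights = [interestingness(k) for k in keys]  # full enumeration and keep some
    return random.choices(keys, weights=weights, k=n)  # kind of index instead

# Example bias: while checking "names starting with A belong to young people",
# prefer keys in the vicinity of that hypothesis.
names = sample_keys(memory, lambda name: 5.0 if name.startswith("A") else 1.0)
ages = [memory.get(name) for name in names]
```

The point is only the interface: a sampler whose bias can be reconfigured as hypotheses come and go.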

This sampler strikes me as similar to the human process of dreaming. By this I mean that we browse (i.e. sample) the space of possible events of interest, wandering randomly back and forth before jumping to a new area and wandering some more. As we do this we keep retrieving values, checking how the stored memories behave at each location… Of course dreaming goes beyond this, but it seems like an interesting, crude model.

In the previous example, we would need to “dream up” names, initially at random, check their ages, find some interesting hypothesis, and then dream up more names in the vicinity of the hypothesis or correlation we want to check.

Note that without this ability to browse the space of keys in an intelligent way, it seems hard to think of even a remotely efficient learning algorithm…

Associative Memories

If we replace the associative array with an associative memory, some things get better, but I think we still need dreaming. Without explicitly defining an associative memory, note that many instances of associative memories do not even have a way to iterate over keys:
* a self-organizing (Kohonen) map
* a traditional map in which keys are fancy hashes of an original key, hashes that are locality preserving in some clever way but cannot be reversed.
* some human memory capacities also seem to lack a way to iterate over keys (e.g. try listing all words that you know in a given language).

In this case learning and generalization happen automatically, simply by adding new patterns, so no “dreaming” is necessary.

However, intuitively it seems that dreaming is still necessary here to re-learn the representation itself:
* prune and derive new features in a Kohonen map
* add new hashes, modify the hashing
* “make things click” in a human memory :)

A Thousand Exponentially Smaller Hyper-Sausages: a Toy Model for Learning Certain Complex Problems


(WARNING: this is a technical text about Machine Learning, not meant for the general public.)

I am presenting a problem here for which I don’t have a reasonable solution; if you do, please comment. I have run into this problem in several forms when applying machine learning in real web search and NLP applications, but only recently was I able to formulate it clearly as a toy data model. I hope this simplification can draw the attention of more theoretically inclined machine learning people who can help…

A Toy Data Model

First consider the following data model:

  • the input data distribution is a mixture of very many components (e.g. Gaussians) whose relative sizes decrease very fast.
  • the output is some non-linear function of the input data (a regression problem), smooth within each component but using a different feature subset in each component.

In other words we are tackling a single regression problem in a single input space, but with a rich structure: many loosely coupled sub-problems, with very uneven relative sizes. We can picture this as:

  • a thousand hyper-sausages, each one exponentially smaller than the one before it…
  • within each hyper-sausage the function to be learnt is nice and smooth and uses a small subset of features (i.e. dimensions), but these subsets differ from component to component
  • (and of course we don’t know a priori how many sausages there are, or their shapes or positions, and they may intersect; a small data-generator sketch follows this list)
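
A minimal generator for this toy model might look as follows; every name and constant here is an illustrative choice, not part of the original formulation. It builds K Gaussian components with exponentially decaying weights, each with its own small feature subset and its own smooth target function.

```python
import numpy as np

def make_toy_data(n=10_000, d=50, K=20, decay=0.5, subset_size=3, seed=0):
    rng = np.random.default_rng(seed)
    weights = decay ** np.arange(K)
    weights /= weights.sum()                     # exponentially shrinking component sizes
    centers = rng.normal(0.0, 5.0, size=(K, d))  # one "hyper-sausage" centre per component
    subsets = [rng.choice(d, subset_size, replace=False) for _ in range(K)]
    coefs = [rng.normal(size=subset_size) for _ in range(K)]

    comp = rng.choice(K, size=n, p=weights)      # which component each point comes from
    X = centers[comp] + rng.normal(size=(n, d))  # Gaussian blob around the centre
    # Smooth (linear + sine) target, using only the component's own feature subset.
    y = np.array([X[i, subsets[k]] @ coefs[k] + np.sin(X[i, subsets[k]]).sum()
                  for i, k in enumerate(comp)])
    return X, y, comp

X, y, comp = make_toy_data()  # comp is returned only to inspect the problem; a learner never sees it
```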

Example: In Web Search Ranking learning, over 50% of queries are simple navigational queries; for these you basically need features of in-link-ness and URL match-ness. But then there is a smaller but important number of informational queries, for which you need fielded matching. A small fraction of these are proper-name queries, for which you really need location or n-grams. A small fraction of those are proper-name queries with an ambiguous surname matching a geographical location… etc.

The Problem

Now consider the error of an approximation to this regression problem. This cost decomposes linearly across the components. But because the smaller components are so much smaller than the largest ones, their effect on the overall error is negligible, even with a large training set. Larger and larger amounts of labelled data will marginally improve overall performance on the largest component, but nothing will be learnt about the smaller ones.
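
In symbols (the mixture-weight notation is mine, not the post’s): if component k has relative size πₖ, the expected loss of a model f decomposes as

$$
\mathbb{E}\big[\ell(f(x), y)\big] \;=\; \sum_{k=1}^{K} \pi_k \,\mathbb{E}\big[\ell(f(x), y) \,\big|\, \text{component } k\big],
\qquad \pi_1 \gg \pi_2 \gg \cdots \gg \pi_K,
$$

so a component whose weight is one in a thousand can be fitted arbitrarily badly while moving the overall error only in the third decimal place.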

All your “learning bandwidth” is spent improving the precision of the model on the first component, where little new can be learnt. Intuitively you need to sample in some stratified manner to obtain data about the smaller components. However, you run into several practical problems:

  • you’ll need to discover the stratification scheme as you go,
  • you’ll need to deal with the fact that you will no longer have an i.i.d. sample (a rough sketch of the standard importance re-weighting fix follows this list),
  • you’ll need to re-define your cost function to promote learning on small components in some reasonable way…
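
For the second point, the standard fix is importance re-weighting: sample stratum k with some probability q_k instead of its true weight π_k, then weight each drawn example by π_k / q_k so that the weighted loss is still an unbiased estimate of the original expected loss. A rough sketch, assuming (unrealistically, as the first point notes) that the strata are already known:

```python
import numpy as np

def stratified_sample(X, y, comp, pi, q, n, seed=0):
    """Sample ~q[k]*n points from stratum k; weight each by pi[k]/q[k]."""
    rng = np.random.default_rng(seed)
    rows, weights = [], []
    for k in range(len(pi)):
        idx = np.flatnonzero(comp == k)
        n_k = int(round(q[k] * n))
        if len(idx) and n_k:
            rows.append(rng.choice(idx, size=n_k, replace=True))
            weights.append(np.full(n_k, pi[k] / q[k]))
    rows, weights = np.concatenate(rows), np.concatenate(weights)
    return X[rows], y[rows], weights  # pass weights as sample weights to the learner

# e.g. flatten the sampling distribution so small components are seen at all:
# q = np.full(K, 1.0 / K)
```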

An alternative is perhaps to use an exponentially increasing loss, so that large mistakes dominate the learning no matter how rare they are; however, this is also tricky in practice for very many reasons, such as convergence rate, outliers…
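
For concreteness, one made-up instance of such a loss (not something the post proposes) is exp(|residual|) − 1: it grows fast enough that rare large mistakes dominate the average, and it also illustrates the fragility just mentioned, since a single outlier can swamp the gradient.

```python
import numpy as np

def exp_loss(y_true, y_pred, scale=1.0):
    # exp(|r|) - 1: roughly |r| for small residuals, explodes for large ones,
    # so rare-but-large errors dominate the mean loss (and so do outliers).
    r = np.abs(y_true - y_pred) / scale
    return np.expm1(r)
```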

Why does this matter?

When you are in this type of situation, and you don’t tackle it head-on, you become entangled in a web of confusion, partial solutions and endless discussions.

First, it kills creativity: whatever you do to improve an interesting sub-problem (new interesting features, new cost functions, whatever) will never improve things “on average” (in fact, it will worsen things for the first component).

Second, it leads to a hundred ad-hoc solutions full of dragons. Each engineer and researcher will go in a different direction, making all results incomparable and all discussions extremely long:

  • adding handpicked “difficult cases” to their training and testing samples,
  • over-sampling known problem cases (calling this bootstrapping, active learning, triage…)
  • inventing special loss functions (error metrics) that weight different things differently,
  • adding a battery of “validation” sets with different characteristics and choosing models that behave reasonably well on them.

Worse, it becomes a political problem. What is the relative importance of improving 10% on a small sub-problem vs. improving 0.05% on average for all? Endless, meaningless philosophical discussions follow, which in turn will spawn endless projects about sampling, validation, metrics…

 

 


 
