A post I just made on Slashdot in the context of an article about improving computer “Go” opponents:
Intuition is something a successful AI (and a successful human Go player) will require, and while we can model it on a computer, most people haven’t thought of doing so. Most systems are based on symbolic logic, statistics, or reinforcement learning, all of which rely on deductive A->B style rules. You can build an intelligent system on that sort of reasoning, but not ONLY on that sort of reasoning (besides, that’s not the way humans normally think either).
I suspect that what we need is something more akin to “clustering” of concepts, in which retrieval of one concept invokes others that are nearby in “thought-space”. The system should then try to merge the clusters of different concepts it thinks of, resulting in the sort of fusion of ideas that characterizes intuition (in other words, the clusters are constantly growing). Since there is such a thing as statistical clustering, that may form a good foundation. Couple it with deductive logic and you should actually get a very powerful system.
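A toy sketch of what that “clustering” of concepts might look like, where retrieving one concept pulls in others nearby in thought-space (all the concepts, features, and the similarity threshold here are made up for illustration):

```python
# Concepts as feature sets; "distance" in thought-space is feature overlap.
concepts = {
    "strawberry": {"red", "seeded", "sweet", "plant"},
    "tomato":     {"red", "seeded", "plant"},
    "apple":      {"red", "seeded", "sweet", "plant"},
    "fire truck": {"red", "vehicle", "loud"},
}

def similarity(a, b):
    """Jaccard overlap: shared features over all features."""
    return len(a & b) / len(a | b)

def retrieve(name, threshold=0.5):
    """Retrieving one concept also pulls in its neighbours in thought-space."""
    target = concepts[name]
    return {other for other, feats in concepts.items()
            if similarity(target, feats) >= threshold}
```

Retrieving “strawberry” here also surfaces “tomato” and “apple”, while “fire truck”, which shares only “red”, stays out; merging would then union overlapping retrieved sets, which is what makes the clusters keep growing.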
I also suspect that some of the recent manifold learning techniques, particularly those involving kernel PCA, may play a part, as they replicate the concept of abstraction, another component of intuition, fairly well using statistics. Unfortunately, they tend to be computationally intense.
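To make the expense concrete, here is a pure-Python sketch of kernel PCA with an RBF kernel (real implementations such as scikit-learn’s KernelPCA are far more careful; the points and parameters below are made up for illustration). Both costly parts are visible: building the n-by-n kernel matrix and solving an eigenproblem over it.

```python
import math
import random

def rbf(x, y, gamma=1.0):
    """RBF similarity: near 1 for close points, near 0 for distant ones."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_pca_first_component(points, gamma=1.0, iters=300):
    n = len(points)
    # O(n^2) kernel matrix -- the first source of computational expense
    K = [[rbf(points[i], points[j], gamma) for j in range(n)] for i in range(n)]
    # centre the data implicitly in feature space
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]
    # power iteration for the leading eigenvector -- the second expense
    random.seed(0)  # seeded only so the sketch is reproducible
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v  # each point's coordinate on the first "abstract" axis

# Two tight groups of points: the first component abstracts away the
# within-group detail and keeps only the group-level distinction.
points = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
coords = kernel_pca_first_component(points)
```

The leading component assigns same-signed coordinates within each group and opposite signs across them, which is the “abstraction” being replicated statistically.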
There are many steps involved, none of them trivial, but no one said AI was easy:
1. Sense data.
2. Collect that data in a manageable form (categorize it using an ontology, maybe?)
3. Retrieve the x most recently accessed clusters pertaining to other properties of the concept you are reasoning about, as well as the cluster corresponding to the property being reasoned about itself (remembering everything is intractable, so the agent will primarily consider what it has been “mulling over” recently). For example, if we are trying to figure out whether a strawberry is a fruit, we would need to pull in clusters corresponding to “red things” and “seeded things” as well as the cluster corresponding to “fruits”.
4. Once a decision is made, grow the clusters. For example, if we decide that strawberries are fruits, we would look at other properties of strawberries and extend the “fruit” cluster to other things that have these properties. We might end up with the nonsymbolic equivalent of “all red objects with seeds are fruit” from doing that.
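Step 4 in particular can be sketched in a few lines of Python (the clusters and items below are invented for illustration, and the “decision” from step 3 is simply assumed):

```python
# Toy property clusters; "fruits" is the cluster being reasoned about.
clusters = {
    "red things":    {"strawberry", "tomato", "cherry", "fire truck"},
    "seeded things": {"strawberry", "tomato", "cherry", "sunflower"},
    "fruits":        {"apple", "banana"},
}

def grow(target, decided_item):
    """After deciding decided_item belongs to target, extend the target
    cluster to everything sharing ALL of decided_item's property clusters."""
    shared = [members for members in clusters.values()
              if decided_item in members and members is not clusters[target]]
    if not shared:
        return
    # the nonsymbolic analogue of "all red objects with seeds are fruit"
    clusters[target] |= set.intersection(*shared)

clusters["fruits"].add("strawberry")   # the decision made in step 3
grow("fruits", "strawberry")
```

After growing, “tomato” and “cherry” (red AND seeded) join the fruit cluster, while “fire truck” and “sunflower”, which each share only one property with strawberry, stay out.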
What I’ve described is an attempt to model what Jung calls “extroverted intuition” – intuition concerned with external concepts. Attempting to model introverted intuition – intuition concerned with internal models and ideas – is much harder, as it would require clustering the properties of the model itself, forming a “relation between relations” – a way that ideas are connected in the agent’s mental model.
But that’s for general AI, which I’m still not completely sure we’re ready for anyway. If you just want a stronger Go player, wait a bit longer and it’ll be brute-forced.