Manifold Learning to derive symbolic rules

As I mentioned before, I see manifold learning as a statistical model of the concept of abstraction. Feed a manifold learning algorithm a dataset of rotating teapots and it will hand back a manifold with a single parameter: the angle of rotation. It no longer compares voxels; it compares teapots, which is really cool.
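To make the teapot example concrete, here is a minimal sketch, assuming scikit-learn and NumPy are available. It substitutes a synthetic rotating bar for the teapot renderings, and Isomap stands in for whichever manifold learner you prefer; the dataset and parameters are illustrative only, not from any published experiment.

```python
import numpy as np
from sklearn.manifold import Isomap

def render_rotated_bar(angle, size=32):
    """Rasterize a bar through the image center, rotated by `angle` radians."""
    ys, xs = np.mgrid[0:size, 0:size] - size / 2.0
    # Perpendicular distance of each pixel from the bar's axis.
    dist = np.abs(xs * np.sin(angle) - ys * np.cos(angle))
    return (dist < 2.0).astype(float).ravel()

# Sample an open arc of rotations (stopping short of pi, where the bar
# repeats itself and the manifold would close into a loop).
angles = np.linspace(0.0, 2.5, 200)
images = np.vstack([render_rotated_bar(a) for a in angles])

# Ask for a one-dimensional embedding: if rotation is the only thing
# varying in the dataset, the recovered coordinate should track the angle.
coord = Isomap(n_neighbors=10, n_components=1).fit_transform(images)
print(abs(np.corrcoef(coord.ravel(), angles)[0, 1]))  # should be near 1
```

The algorithm never sees the word "angle"; it recovers the one degree of freedom purely from how the images relate to each other.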

So what would happen if we gave it symbolic rules? Encoding them meaningfully is a non-trivial problem in itself – I’m not certain it’s enough to use, say, Gödel numbering – but if we could accomplish this, what would the output be? What do the rules describe, exactly? Mere symbols? Or is there some underlying concept behind the rules? Could the algorithm abstract away the symbols to get at the heart of those concepts?
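Here is a rough sketch of the worry about Gödel numbering, assuming Python’s arbitrary-precision integers; the rule strings and the small prime table are purely illustrative. The classic encoding is injective, but it is wildly discontinuous: two rules that differ in a single symbol land astronomically far apart, while a manifold learner needs nearby inputs to mean similar things.

```python
# Hypothetical example rules; the prime table only needs to cover
# strings up to ten symbols long.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(rule: str) -> int:
    """Classic encoding: the product of p_i ** code(symbol_i)."""
    n = 1
    for p, ch in zip(PRIMES, rule):
        n *= p ** ord(ch)
    return n

a = godel_number("P->Q")
b = godel_number("P->R")     # differs from the first rule in one symbol
print(len(str(b - a)))       # the gap runs to roughly 160 digits
```

Any geometry a manifold learner could exploit is destroyed by such an encoding, which is why the representation question seems to come first.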

3 thoughts on “Manifold Learning to derive symbolic rules”

  1. Michael Barnathan

    Monica,

    You have a very interesting approach to the problem on your site. I know that true optimality is an impossibility, but I think the pursuit of it is laudable in any case. Those are my own personal views, though; attempting to create an agent that will always produce the right answer is probably the most common mistake AI researchers make.

    I actually agree with you that symbolic logic will not work (alone, anyway) to produce a strong AI, particularly because the rules are often input by humans. I’m not really after a strong AI at this point, since I think it would, in the best-case scenario, turn the entire human race into couch potatoes. Why think when you have machines that can do a better job of it for you?

    It is still a fascinating field, though, so I’m considering the problem in isolation. The question I was pondering (and one I really should get back to studying) was simply whether it is possible to abstract away a collection of logical rules to discover some sort of commonality between them, and, if so, how that solution could be represented (it almost certainly could not be integrated into the existing system, since it occupies a higher level within that system).

    You mention that you don’t think statistics is necessary, but your site mentions nesting predictors. The only types of predictors I’m familiar with are symbolic, statistical, reinforcement, and evolutionary. What sorts of predictors are you using, if they don’t use symbolic logic or statistics? It looks like you’re considering a neural approach, but the underlying basis of a neural network is statistical (each neuron traditionally represents the solution to a logistic regression problem).
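    A minimal sketch of that parenthetical, assuming NumPy (the inputs and weights below are hypothetical): a single sigmoid unit computes exactly the logistic regression model p(y = 1 | x) = sigma(w·x + b).

    ```python
    import numpy as np

    def neuron(x, w, b):
        """One unit: a weighted sum of inputs passed through the logistic sigmoid."""
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

    # Fitting w and b under cross-entropy loss is precisely the logistic
    # regression problem; a network is many such regressions nested together.
    x = np.array([0.5, -1.2, 3.0])   # hypothetical input
    w = np.array([0.8, 0.1, -0.4])   # hypothetical weights
    print(neuron(x, w, b=0.2))       # a probability in (0, 1)
    ```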

    P.S.: This sentence on your website is spot on: “This traditional hiding of the intuitive part and the joy of discovery makes Science look boring to outsiders.” I never understood why hiding the intuition behind research was considered not only acceptable but mandatory. The failed attempts have the potential to teach us more about the science than the end result does.

    Thanks,
    Michael

  2. Monica Anderson

    Well, I’ve been working on Artificial Intuition since about 2001. Of course, our definitions differ a bit :-). See my website or my video on Google Videos. But considering your tag line, “Eradicating Suboptimality,” you probably won’t like what you see.

    I agree with your first paragraph above, but not the second. If you want intuition, avoid symbols and formalisms, including logic. Oh, and you don’t need statistics either.

    – Monica

    PS: Congratulations on the incorporation of the Polymath Foundation. I approve and applaud, and I signed up for the mailing list.

