Category Archives: Psychology

Not Cold at All

My peer group largely consists of people who are deemed “cold” or “aloof” in social interaction. I too have received this label several times. But what I am beginning to find is that these people, most of whom type INTP or INTJ on the MBTI, are actually the most sensitive, emotional people I’ve ever met. And, almost universally, they have been hurt again and again by society.

Why they confide in me is a bit of a mystery – I’ve been knocked around by society quite a bit myself – but for someone who is called “cold”, I too have an uncanny ability to empathize, so perhaps others are drawn to me for my ability to listen, support, and offer pragmatic advice at the same time.

I’m beginning to wonder whether the “cold” label really indicates distance, or whether it indicates indifference to irrelevant social mannerisms in favor of authentic interactions. It seems almost universally leveled at those least deserving of it.

And that is utterly insufferable.

MBTI types of generalists?

I’m a member of several groups on polymathy and on harnessing talent in multiple areas (it’s something I’m interested in myself, after all). One of the more interesting things I’ve noticed is that when asked about their MBTI types:

1. Everyone knows them already.

2. Almost everyone is an INTJ. The next most common type is INTP, then ENTP.

3. Most historical polymaths are thought to be INFJs. (What happened to cause the shift? Did thinking types suddenly become more in-tune with their artistic sides recently, or was there something cultural to it?)

How much faith one can put in MBTI types is questionable, but it does support my hypothesis that nonlinear intuitive thought, not straight logic, is required to see the connections between disciplines.

Crisis-free disintegration?

Here’s an interesting thought:

Why are developmental crises necessary for positive disintegration to occur? Is it intrinsic to the abandonment of stability, of securely held and long-reinforced beliefs?

In a supportive enough environment, would it be possible to abandon such beliefs and ease into those created on one’s own with little to no conflict? Or must the whole process be fraught with internal anguish?

I wonder…

Aiming too low.

I just read about that “Stand Up to Cancer” initiative in Time magazine and realized that, while it introduces some things that people should have been doing from the beginning, other aspects are essentially more of the same in another guise.

The “dream team” concept is fine, although it again raises the issue of selection accuracy – the goals, as well as the abilities, of the people you’re selecting matter. Putting these people together is a phenomenal idea and could produce some exciting collaborations.

Going for cancers such as GBM and pancreatic is also a great idea, as these have been neglected and currently have very poor survival rates. These forms of cancer are essentially death sentences today, and treatments that can raise survival rates are long overdue.

The problem is one of metrics. Everyone wants only the best scientists to work on their projects. But how are these scientists going to be selected? Publication counts? Approval of their peers? “h-index”?

Whatever it is, it won’t be directly on the strength of their ideas. This is a pity, because the existing methods don’t work on the cancers you’re targeting (and they don’t work very well in general). Even the notion of a survival rate is absurd. Do people speak about survival rates for influenza? For the common cold? Even for the Black Plague these days? No – because these diseases are either innately harmless or have been rendered harmless. It is highly unusual for people to die of them. Cancer isn’t like that – it’s innately harmful, and only very few cancers have actually been rendered harmless by medicine.

That brings me to a bigger mistake – one that Stand Up to Cancer makes in the same way that existing research programs do. Research scientists don’t get funding unless they have results. SUtC scientists won’t get funding unless they have a treatment. You’re calling it something else, but the bottom line is: you want to see an immediate return on your investment.

Cancer is a big problem, like energy independence. There is no immediate return on the investment, and if you try to make one, you’ll end up with “publish or perish” in a new form – tons of simple incremental advances which do nothing to revolutionize the field.

And that is tied in with the third, and largest, problem with this endeavor: no one is speaking of a “cure”. You all want to “increase” survival rates, not to render the concept obsolete. If you can get the five-year survival rate of pancreatic cancer up from 3% to 6%, you’ll call it a victory and tout how much progress you’re making.

True, the extra 3% will appreciate it. It’s worth doing. But it falls short of the goal you need to aim for.

If anyone over the age of 8, even a world-renowned oncologist, were to speak of “curing cancer”, you would laugh at him. The entire scientific community would laugh at him. He wouldn’t find funding. His research endeavors would be doomed from the start.

And the bottom line is this: you have set the bar too low because you are collectively afraid of failure. You ridicule anyone who attempts to make an audacious advance, because it’s far easier to tout a string of minor successes.

But in the end, it’s that major advance that’s required to do away with this disease. And you’ll never find it if you’re averse to the very idea that a cure could exist.

One might be right under your nose, and you’d miss it. Imagine if Fleming had discovered penicillin and, instead of remarking on its properties, shouted “preposterous!” and dumped it in the trash. (And that brings up another point: Fleming was another of what I call “near misses”, because, were it not for Chain, his work might never have attained publicity.)

So if you want earnest results, start making earnest attempts. Be committed. Be bold. Give it 100% and don’t accept anything short of 100% as an end goal.

Cognitive dissonance is a root of poor decisions.

I’ve observed that the people I’ve interacted with tend to make poor decisions (defined as decisions with harmful outcomes that could have been foreseen with the knowledge the person possessed at the time) at a rate proportional to the amount of cognitive dissonance they exhibit. The ones who have the most consistent beliefs and behaviors make the fewest mistakes. I haven’t yet run into anyone who was both consistent and a screwup, but that could be a result of my peer group (pretty much all of whom have completed a college education, which is hard to do if you consistently make poor decisions).

This fits in well with my theory of personal development as minimization of cognitive dissonance.

There is another important aspect of this that I am beginning to discover as well (and indeed, it springs from the only decision I’ve ever made that I consider objectively wrong, given my available options and knowledge): one cannot hope to understand another who has not undergone a similar degree of development. Their internal dissonance will manifest as poor decisions and attitudes which are utterly irrational to one who lacks such dissonance.

It took years for me to sort it out, and very few people are as introspective as I am. Perhaps this is why I’m very slow to make friends but greatly cherish the ones I have: none of them suffer from the burdens of irrationality that seem to plague the majority of the population to one degree or another. At a fundamental level, we understand each other because we are capable of constructing rational models of each other’s behavior – and we know that these models will be correct because they are consistent with our values.

This is the root of empathy. One cannot “see things through another’s eyes” unless an understanding exists. Any perceived understanding that lacks the root of such a model, conscious or unconscious, is a mere shadow of the true bond. Words are not a substitute and will not create an understanding where none can exist. In order for the phrase “I understand you” to be true, one must first understand “I” and “you” – in that order, for if your own behavior is irrational, you have no hope of ever constructing a rational model of another’s behavior.

It's working.

Even though I’ve been trying to hold back, the students are still learning at about twice the speed of a normal class. They’re retaining it, too.

Breadth-first teaching and “stretching” work.

I overestimated them? Ha! The rest of you underestimated them!

There is vast potential inherent in all of us. Teaching is merely tapping it.

On not being able to work: yes, it's extraordinarily painful.

“Perhaps the most difficult thing for creative individuals to bear is the sense of loss and emptiness they experience when, for some reason, they cannot work. This is especially painful when a person feels his or her creativity drying out.” –From http://www.psychologytoday.com/articles/index.php?term=19960701-000033&page=4

This is so incredibly accurate that I had to post a link to the article based on that alone. Being prevented from following our ideas causes us an enormous amount of pain. It took me two years to get over it, and I still feel a bit resentful.

The rest of the article is rubbish because it consistently embraces both sides of each personality dichotomy. I’m all for eliminating false dichotomies, but I fail to see how one can be both introverted and extroverted, for example – the two are opposing conditions. Introverts gain energy from being alone and lose it with others. Extroverts gain it from being with others and lose it alone.

More artificial intuition ideas…

A post I just made on Slashdot in the context of an article about improving computer “Go” opponents:

Intuition is something a successful AI (and a successful human Go player) will require, and while we can model it on a computer, most people haven’t thought of doing so. Most systems are based on symbolic logic, statistics, or reinforcement learning, all of which rely on deductive A->B-style rules. You can build an intelligent system on that sort of reasoning, but not ONLY on that sort of reasoning (besides, that’s not the way humans normally think either).

I suspect that what we need is something more akin to “clustering” of concepts, in which retrieval of one concept invokes others that are nearby in “thought-space”. The system should then try to merge the clusters of different concepts it thinks of, resulting in the sort of fusion of ideas that characterizes intuition (in other words, the clusters are constantly growing). Since there is such a thing as statistical clustering, that may form a good foundation. Couple it with deductive logic and you should actually get a very powerful system.
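To make that concrete, here is a toy sketch of the associative half of the idea. Everything in it is an illustrative stand-in (the made-up feature vectors, the choice of scikit-learn’s k-means, the two-cluster setting); the point is only that retrieving one concept can cheaply “invoke” its neighbors in thought-space:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical concept features: [is_red, has_seeds, is_edible, is_round]
concepts = {
    "strawberry":  [1.0, 1.0, 1.0, 0.0],
    "apple":       [1.0, 1.0, 1.0, 1.0],
    "cherry":      [1.0, 1.0, 1.0, 1.0],
    "fire_truck":  [1.0, 0.0, 0.0, 0.0],
    "tennis_ball": [0.0, 0.0, 0.0, 1.0],
}
names = list(concepts)
X = np.array([concepts[n] for n in names])

# Statistical clustering as the "thought-space" layer
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def invoke(concept):
    """Retrieving one concept pulls in the other members of its cluster."""
    label = kmeans.labels_[names.index(concept)]
    return [n for i, n in enumerate(names)
            if n != concept and kmeans.labels_[i] == label]

print(invoke("strawberry"))  # e.g. ['apple', 'cherry'] -- its nearby concepts

A real system would obviously need far richer representations and an incremental clustering method rather than batch k-means, but the retrieval step would look broadly like this.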

I also suspect that some of the recent manifold learning techniques, particularly those involving kernel PCA, may play a part, as they replicate the concept of abstraction, another component of intuition, fairly well using statistics. Unfortunately, they tend to be computationally intense.
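As a rough illustration of what I mean by abstraction here (the synthetic data and the RBF kernel below are my own assumptions, nothing more): kernel PCA can compress many correlated surface features down to a couple of nonlinear “abstract” coordinates.

import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# 200 synthetic "concepts": 2 hidden abstract traits expressed through
# 20 correlated, nonlinearly mixed surface features
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 20))
X = np.tanh(latent @ mixing) + 0.05 * rng.normal(size=(200, 20))

# Kernel PCA recovers a low-dimensional, nonlinear summary -- "abstraction"
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
Z = kpca.fit_transform(X)
print(Z.shape)  # (200, 2): each concept reduced to two abstract coordinates

The kernel matrix here is 200 x 200, which is also why these methods get computationally intense as the number of concepts grows.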

There are many steps that would need to be involved, none of them trivial, but no one said AI was easy (a rough sketch of steps 3 and 4 follows the list):

1. Sense data.
2. Collect that data in a manageable form (categorize it using an ontology, maybe?)
3. Retrieve the x most recently accessed clusters pertaining to other properties of the concept you are reasoning about, as well as the cluster corresponding to the property being reasoned about itself (remembering everything is intractable, so the agent will primarily consider what it has been “mulling over” recently). For example, if we are trying to figure out whether a strawberry is a fruit, we would need to pull in clusters corresponding to “red things” and “seeded things” as well as the cluster corresponding to “fruits”.
4. Once a decision is made, grow the clusters. For example, if we decide that strawberries are fruits, we would look at other properties of strawberries and extend the “fruit” cluster to other things that have these properties. We might end up with the nonsymbolic equivalent of “all red objects with seeds are fruit” from doing that.
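Purely as a sketch of steps 3 and 4 (the cluster contents, the set representation, and the voting threshold are stand-ins I’ve made up for illustration), the strawberry example might look like:

# Property clusters the agent already holds (toy data)
clusters = {
    "red_things":    {"strawberry", "cherry", "tomato", "fire_truck"},
    "seeded_things": {"strawberry", "apple", "tomato", "sunflower"},
    "fruits":        {"apple", "cherry"},
}
recently_accessed = ["red_things", "seeded_things"]  # what it has been "mulling over"

def decide(concept, target, evidence, threshold=0.5):
    # Step 3: judge membership from how many of the concept's recently
    # accessed property clusters overlap with the target cluster.
    relevant = [clusters[n] for n in evidence if concept in clusters[n]]
    if not relevant:
        return False
    votes = sum(1 for members in relevant if members & clusters[target])
    return votes / len(relevant) >= threshold

if decide("strawberry", "fruits", recently_accessed):
    clusters["fruits"].add("strawberry")
    # Step 4: grow the cluster -- extend "fruits" to everything sharing the
    # same evidence properties (red AND seeded), the nonsymbolic analogue of
    # "all red objects with seeds are fruit". Here 'tomato' gets swept in.
    shared = set.intersection(*(clusters[n] for n in recently_accessed
                                if "strawberry" in clusters[n]))
    clusters["fruits"] |= shared
    print(sorted(clusters["fruits"]))  # ['apple', 'cherry', 'strawberry', 'tomato']

A real agent would grow its clusters nonsymbolically, by shifting statistical cluster boundaries rather than merging explicit sets, but the flow of retrieve, decide, and grow would be the same.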

What I’ve described is an attempt to model what Jung calls “extroverted intuition” – intuition concerned with external concepts. Attempting to model introverted intuition – intuition concerned with internal models and ideas – is much harder, as it would require clustering the properties of the model itself, forming a “relation between relations” – a way that ideas are connected in the agent’s mental model.

But that’s for general AI, which I’m still not completely sure we’re ready for anyway. If you just want a stronger Go player, wait just a bit longer and it’ll be brute-forced.