Category Archives: Ideas

Defiant: Free-verse

I think I may have just come up with the very distilled essence of secondary integration in the process of sorting my own feelings out:

Defiant

I swim against a prodigious current,
and I know that I must falter.
It draws me towards the fall,
the inexorable fatal plunge.

But each stroke I take,
I count a small victory,
a stand.
For my right to exist.
For the betterment of the world.
For those who came before me.

And for everyone,
who has ever screamed,
defiantly at the heavens:
“No! There is a better way!”

For them,
For us,
I swim against the current,
because I am right.

Ultra-modernism?

I’m a bit surprised no mainstream trend has taken music off the 12-tone scale and into continuous frequency space yet. I guess even Schoenberg couldn’t stomach that 🙂
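(For reference, since it shows how constrained the usual pitch set really is: in twelve-tone equal temperament every pitch sits on a discrete ladder, f_n = 440 · 2^(n/12) Hz for integer n, giving exactly twelve notes per octave; “continuous frequency space” would simply mean letting that exponent take any real value.)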

Don’t look at me; I’m not advocating it. I can’t even stand atonal music.

Rationalism vs. Intuition in the Context of Academia

I think I understand now why Einstein worked as a patent clerk (or perhaps why he discovered what he did while working as a patent clerk, since I think his reason for taking the job was simply an inability to find another one). It took me a while, and some time away from my own research, because it was really something I needed to work out on my own, but I think I have it now.

This is aside from the issues I’ve already identified with modern academia (“hot fields”, funding, bureaucracy, “publish-or-perish”, closed-mindedness, etc., ad nauseam), which I’m not going into any further here.

Academia is dominated by a single paradigm: look at problems for a very long time and eventually find a solution. This is great, except that it’s a novelty-seeking approach with no novelty of its own. Everyone solves problems this way, which means everyone likely follows similar thought paths. I’m willing to bet Einstein himself was aware of this, because he supposedly said that insanity is doing the same thing and expecting a different result (Update: people attribute this quote to him, but I don’t know whether it’s actually his).

Anyway, the emphasis is always on logic, never on imagination, but imagination and creativity, not rigid deduction or calculation, are the driving forces behind the best science. Something truly revolutionary seems to have more in common with a work of art or music than with a mathematical proof. You can’t simply deduce something like relativity, even if you could deduce something like the full relativistic energy equation E = mc^2 / sqrt(1 - v^2/c^2) (the E = mc^2 everyone learns to recognize is just the special case of an object at rest, v = 0) once the framework was in place. It says a lot that the actual framework was built by people who framed their thoughts intuitively, like Einstein and Poincaré. And I think we can say it’s axiomatic that there’s always a better mathematician than you*, so however much talent you have at making those deductions, someone else will almost certainly have more.

*Unless you’re the best, in which case, carry on.
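(A quick worked aside, standard special relativity rather than part of the argument above: setting v = 0 in E = mc^2 / sqrt(1 - v^2/c^2) recovers the familiar E = mc^2, and expanding for small v gives E ≈ mc^2 + (1/2)mv^2, i.e. the rest energy plus the ordinary classical kinetic energy.)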

Your discoveries then become a matter of luck and priority: can you discover something before someone else does? If you even have to ask this, you’re probably not doing truly revolutionary science. It also tends to make academics “idea misers”, zealously guarding their ideas lest someone steal them or independently arrive at the same discovery. Sometimes very revolutionary things do pop up independently at around the same time, but if you’re always racing for credit, your only accomplishments are the instances in which you finish first.

Now, I’m not saying incremental advances are bad. The majority of progress is made incrementally. But my guess is that most scientists do not start out aspiring to mediocrity. They seem to lose their ideals and acquire a drive towards small discoveries during their training, and since this is essentially what their training requires of them in order to graduate, it’s not too surprising. To be honest, it’s a lesson I refuse to learn, and it sums up a great deal of the conflict of ideals between myself and academia. The work I do for Temple is an example of such incremental work, but it is merely a temporary compromise for the purposes of finishing my degree. My ultimate scientific goals remain unaltered.

Anyway, this is the fundamental clash between the schools of rational and intuitive thought, with academia fairly far in the rationalist camp. Einstein found a job outside of academia, however, which probably made a big difference in his discoveries. There’s nothing special about being a patent clerk. It’s probably a fairly mind-numbing job for someone of Einstein’s talent.

And I think that may be the idea. By freeing his own mind to simply wander, Einstein wandered onto something big. Actually, he wandered onto quite a few big things, because his mind was working differently. He wasn’t calculating; he was daydreaming about riding on light beams. The ideas all bubbled up to the surface in 1905, but that many revolutionary ideas don’t simply strike at once – they almost certainly existed beforehand as subconscious notions which had yet to be fully developed, and thus the “Annus Mirabilis” was probably just the year when Einstein decided to formally write down everything he had already figured out, possibly months or years earlier.

That’s not to discount academia entirely – one has to differentiate problem-solving activities from actual training, and surely Einstein could not have derived his formulas knowing nothing of physics. But equating training with rigidity is the mistake many in academia seem to make.

Low to low, high to high

There’s an interesting philosophical theme emerging from my subconscious with each additional explanation of why polymathy is good: those with little vision, if able to succeed at everything they try, will find themselves becoming more and more entrenched within the system, bound to servitude. They wish nothing more than to have others decide for them, and so they are bound to that fate! Conversely, those with greater vision, if successful at their goals, will not only find themselves uplifted to a state of personal freedom, but will lift others to new heights, with the scope of the group determined by the extent of the vision.

It starts with the self, extends to other individuals, then to larger and larger groups, and finally culminates in a universal sense of… a sort of benevolent paternity, I suppose… for society itself. The only pitfall for those who have attained such heights is letting that benevolent paternity slide into outright responsibility for the direction society takes.

I hinted at this in “In Defense of Arrogance”, but I only discussed self-efficacy there, which I now think may be just one part of a larger set of true leadership traits. A desire for individual freedom and a strong moral system predicated on the existence of universal ethical principles seem important too (you can’t be truly free unless you can appreciate what “freedom” means independent of social norms). Self-efficacy is important for persistence during the “doing” stage, but for successful leadership, “seeing” must accompany “doing”!

A corollary of this “propagation” from vision to freedom is that a system with high social mobility is desirable – otherwise, visionaries may never achieve the freedom to realize their visions. For leadership to develop to its fullest potential, people should stay out of the way.

Electrical stimulation of diaphragm for MND breathing problems?

Why stick tubes in people with ALS and other motor neurone diseases when you can use the equivalent of a pacemaker to cause the diaphragm to contract? The problem is with the nerves – the muscle still works fine (until it starts to waste for lack of innervation, anyway). For that matter, why don’t ventilators in general attempt to use the natural machinery of the body? I would think making use of the body itself would always be preferable to using an artificial alternative.

An extension to that previous cancer treatment idea…

I don’t know enough biology to know how feasible these ideas are (how I wish I could find training), but here goes anyway:

I had previously come up with an idea involving selecting out the cells believed to be most sensitive to chemotherapy or other treatments and strategically placing them in the center of existing tumors (in the hope that the chemosensitive cells would displace the more resistant ones), then delivering a high dose of chemotherapy (or whatever treatment is being used). I have no idea whether this would actually work, but from what little I know of cancer biology, it seems reasonable. Unfortunately, it wouldn’t be completely effective, because it depends on infiltrating specific tumors – which could also just be removed surgically, for that matter (though this approach would be less invasive).

I just thought of a particularly nasty extension that we may or may not have the technology for today.

Here’s the basic premise: anything that outright kills cancer cells or prevents them from dividing is going to have a hard time succeeding because it’s fighting evolution (even the most effective treatment, surgical excision, does not outright kill cancer cells; it merely removes them from the body). This is presumably why tumors become resistant to chemotherapy following treatment.

First, the plausible idea: what if we could “tag” the cells we extracted with antigens before placing them? (Is this how traditional immunotherapy works? I was under the impression it relied more on chemical signals such as IL-2.) We could then insert them into tumors, perhaps in conjunction with IL-2 to stimulate the immune system, and let it learn that the cells are not to be tolerated.

Ideally, the tag could be made genetic, and the patient’s immune system could be suppressed for a time to allow the tagged cells to infiltrate the tumor. Then the immunosuppressants would be withdrawn. Same principle as the first idea, except we’re actually inducing a weakness.

Next, the less plausible but seemingly more promising idea: what if we could infect the extracted cells with a lysogenic virus that targets cancer cells? The infected cells would silently continue replicating, carrying the dormant viral genome with them. Using a lysogenic virus would be helpful here because there would be no chance for the other cells to adapt to counter it; upon initiation of the virus’s lytic cycle, the tumor would be inundated from within by its own infected cells, “taken by surprise”. Moreover, this need not be left to chance; we could manually initiate the lytic cycle when most convenient (say, as a neoadjuvant therapy). This should theoretically cause a substantial die-off of the tumor, while sparing normal cells if the virus were specific enough.

Manifold Learning to derive symbolic rules

As I mentioned before, the way I see manifold learning is as a statistical model of the concept of abstraction. Give a manifold learning algorithm a dataset with rotating teapots and it’ll give you back a manifold with one parameter: the angle of rotation. It no longer compares voxels; it compares teapots, which is really cool.
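Here’s a minimal sketch of what I mean, in Python (my own toy example, not the actual teapot dataset: it just rotates a synthetic bar image and asks scikit-learn’s Isomap for a one-dimensional embedding):

```python
import numpy as np
from scipy.ndimage import rotate
from sklearn.manifold import Isomap

# Synthetic stand-in for the teapot images: a bright bar on a 32x32
# canvas, rotated through 0-80 degrees so the underlying parameter
# space is a simple interval (no wrap-around or symmetry issues).
base = np.zeros((32, 32))
base[14:18, 4:28] = 1.0

angles = np.linspace(0, 80, 200)
images = np.array([rotate(base, a, reshape=False).ravel() for a in angles])

# Ask for a one-dimensional embedding. Isomap only ever sees raw pixel
# vectors; if the "abstraction" view holds, the single recovered
# coordinate should vary monotonically with the rotation angle.
coord = Isomap(n_neighbors=10, n_components=1).fit_transform(images)[:, 0]

# Close to +1 or -1 if the one-parameter structure was recovered.
print(np.corrcoef(coord, angles)[0, 1])
```

Nothing in the input tells the algorithm that angles exist; it has to find that single degree of freedom in the pixel data on its own.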

So what would happen if we gave it symbolic rules? Encoding them meaningfully would be a non-trivial issue – I’m not certain it’s enough to use, say, Gödel numbering – but if we could accomplish this, what would the output be? What do the rules describe, exactly? Mere symbols? Or is there some underlying concept behind the rules? Could the algorithm abstract away the symbols to get at the heart of these concepts?

Self-efficacy and range of effect

Today I came up with an interesting hypothesis: that self-efficacy (one’s perceived ability to accomplish tasks) determines the scale of what an individual will attempt to change. The reasoning is simple: when one perceives a clash between his own values and those of others, he invariably spends a certain amount of time, however brief, wondering “is this a problem with me or is this a problem with them?” Those with low self-efficacy (or low self-esteem in general) will conclude that the problem lies with themselves, since their belief in their own ability is very weak. Those with higher self-efficacy will blame the practitioners within the system, believing that they are inaccurately expressing a concept that is fundamentally correct (the “if I did this, it would be better” effect). Those with the highest self-efficacy have such confidence in their own ability that they frame the clash as a problem with the system itself and set out to change it.

Therefore, lest you condemn those with high self-efficacy as being arrogant or pretentious, realize that this is the only way that society can ever advance. Were the world left only to those with low self-efficacy, humanity would no longer exist. What you deem arrogance is therefore virtuous behavior.

There is, however, a danger in having too much self-efficacy: this danger emerges when one begins to believe that the laws of reality, of logic, of causality, no longer apply to oneself. This results in attempting tasks that are inherently impossible, not because of the way society functions, but because their completion would cause a logical contradiction. Of course, we all undertake tasks such as these unknowingly; it becomes pathological when one begins to ignore the logical evidence against what one is attempting to do.

Certain levels of self-efficacy prompt introspection and/or criticism, which must be somehow resolved before one feels capable of taking on larger challenges. Successful resolution of these challenges (success being defined as resolution in a way that does not threaten the integrity of one’s self) causes self-efficacy to increase, as it provides evidence that one is doing “the right thing”.

Essentially, I believe that we can summarize this effect with the following figure:

[Figure: Change and self-efficacy (also available in SVG)]