My position on biofuel was just vindicated:
http://seattletimes.nwsource.com/html/nationworld/2004171188_ethanol08.html
Hopefully now we’ll see more use of truly renewable energy, such as solar, rather than a search for the next new thing to burn.
As I mentioned before, the way I see manifold learning is as a statistical model of the concept of abstraction. Give a manifold learning algorithm a dataset with rotating teapots and it’ll give you back a manifold with one parameter: the angle of rotation. It no longer compares voxels; it compares teapots, which is really cool.
So what would happen if we gave it symbolic rules? Encoding them meaningfully would be a non-trivial issue – I’m not certain it’s enough to use, say, Gödel numbering – but if we could accomplish this, what would the output be? What do the rules describe, exactly? Mere symbols? Or is there some underlying concept behind the rules? Could the algorithm abstract away the symbols to get at the heart of these concepts?
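The teapot example can be sketched on a toy dataset. Below is a minimal, numpy-only Isomap (k-NN graph, graph geodesics, classical MDS) applied to high-dimensional samples that are secretly governed by a single rotation angle. The "teapot" here is just a rotated 2-D pattern pushed through a random linear map – a hypothetical stand-in for real images, with all parameter choices (60 samples, k = 6, 10 ambient dimensions) picked for illustration only.

```python
import numpy as np

# Toy stand-in for the rotating-teapot dataset: each observation is a fixed
# 2-D pattern rotated by some angle, then embedded in 10 dimensions by a
# random linear map (a crude proxy for "voxels"). Every sample is therefore
# determined by one hidden parameter: the rotation angle.
rng = np.random.default_rng(0)
angles = np.linspace(0.0, np.pi, 60)   # half-turn, so the manifold is an arc
base = rng.normal(size=2)
embed = rng.normal(size=(2, 10))
X = np.stack([np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]]) @ base for a in angles]) @ embed

def isomap_1d(X, k=6):
    """Minimal Isomap: k-NN graph -> graph geodesics -> classical MDS (1-D)."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    geo = np.full((n, n), np.inf)
    np.fill_diagonal(geo, 0.0)
    for i in range(n):                          # symmetric k-NN graph
        nbrs = np.argsort(d[i])[1:k + 1]
        geo[i, nbrs] = d[i, nbrs]
        geo[nbrs, i] = d[i, nbrs]
    for m in range(n):                          # Floyd-Warshall shortest paths
        geo = np.minimum(geo, geo[:, m:m + 1] + geo[m:m + 1, :])
    J = np.eye(n) - 1.0 / n                     # double-centering matrix
    B = -0.5 * J @ (geo ** 2) @ J               # classical MDS inner products
    w, v = np.linalg.eigh(B)
    return v[:, -1] * np.sqrt(max(w[-1], 0.0))  # top embedding coordinate

coord = isomap_1d(X)
# `coord` tracks the hidden rotation angle: the algorithm no longer
# compares "voxels", it compares rotations.
```

The recovered one-dimensional coordinate is a (near-)monotone function of the true angle, which is exactly the "one-parameter manifold" behavior described above.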
It’s funny how something as simple as overexcitability can shape one’s life so profoundly. It’s both a blessing and a curse. Life is harsh, and being able to experience it more keenly opens one up to a great deal of pain that most people will never need to go through. What makes it all worth it is the intensity of the very struggle and of the heights one can strive for and attain. Even now, at the certifiably adult age of 23, this feeling has not dissipated: every day is an opportunity, to be ascribed a significance all its own – to learn, to experience, to create, to improve – or to waste if the day’s activities are meaningless.
Today I resumed my mathematical research. As I thought, I’m a much stronger mathematician now than I was just a few years ago, thanks to my exposure to complex mathematics as a Ph.D. student (so I guess it wasn’t a complete waste). But how I had forgotten the landscape of possibilities that opens up before me with every new stroke of the pencil!
Today’s idea: using ICA (a blind source separation technique) for polyphonic musical analysis.
I’ll probably pursue it.
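As a first sanity check of the idea (on a toy far simpler than real polyphonic audio), one can mix two synthetic "voices" and recover them with a small FastICA implementation. Everything below is an illustrative assumption, not a claim about real music: the sine and sawtooth sources stand in for instrument lines, and the mixing matrix, seed, and iteration count are arbitrary.

```python
import numpy as np

# Two synthetic "voices" mixed into two observed channels by an unknown
# matrix A: the classic blind-source-separation setup.
t = np.linspace(0.0, 1.0, 4000)
s1 = np.sin(2 * np.pi * 7 * t)      # voice 1: sine
s2 = 2.0 * ((5 * t) % 1.0) - 1.0    # voice 2: sawtooth
S = np.stack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # (unknown) mixing matrix
X = A @ S                           # what we actually observe

def fastica(X, n_iter=200, seed=42):
    """Symmetric FastICA with a tanh nonlinearity: whiten, iterate, decorrelate."""
    X = X - X.mean(axis=1, keepdims=True)
    w, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Xw = (E / np.sqrt(w)) @ E.T @ X              # whitened data
    n = X.shape[0]
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Xw)
        # Fixed-point update: E[g(Wx) x^T] - diag(E[g'(Wx)]) W
        W_new = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        wl, wv = np.linalg.eigh(W_new @ W_new.T)  # symmetric decorrelation:
        wl = np.maximum(wl, 1e-12)                # W <- (W W^T)^(-1/2) W
        W = (wv / np.sqrt(wl)) @ wv.T @ W_new
    return W @ Xw   # estimated sources (up to order, sign, and scale)

Y = fastica(X)
```

Each recovered component should correlate strongly with one of the original voices (ICA leaves permutation, sign, and scale undetermined). Real recordings would of course need time-frequency features and more than two channels of modeling, so this is only a proof of concept.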
Today I came up with an interesting hypothesis: that self-efficacy (one’s perceived ability to accomplish tasks) determines the scale of what an individual will attempt to change. The reasoning is simple: when one perceives a clash between his own values and those of others, he invariably spends a certain amount of time, however brief, wondering “is this a problem with me or is this a problem with them?” Those with low self-efficacy (or low self-esteem in general) will conclude that the problem is with themselves, since their belief in their own ability is very weak. Those with higher self-efficacy will blame the practitioners within the system, believing that they are inaccurately expressing a concept that is fundamentally correct (the “if I did this, it would be better” effect). Those with the highest self-efficacy have such confidence in their own ability that they frame the clash as a problem with the system itself and set out to change it.
Therefore, lest you condemn those with high self-efficacy as being arrogant or pretentious, realize that this is the only way that society can ever advance. Were the world left only to those with low self-efficacy, humanity would no longer exist. What you deem arrogance is therefore virtuous behavior.
There is, however, a danger in having too much self-efficacy: this danger emerges when one begins to believe that the laws of reality, of logic, of causality, no longer apply to oneself. This results in attempting tasks that are inherently impossible, not because of the way society functions, but because their completion would cause a logical contradiction. Of course, we all undertake tasks such as these unknowingly; it becomes pathological when one begins to ignore the logical evidence against what one is attempting to do.
Certain levels of self-efficacy prompt introspection and/or criticism, which must be somehow resolved before one feels capable of taking on larger challenges. Successful resolution of these challenges (success being defined as resolution in a way that does not threaten the integrity of one’s self) causes self-efficacy to increase, as it provides evidence that one is doing “the right thing”.
Essentially, I believe that we can summarize this effect with the following figure:

(Also available in SVG).
Upon hearing first of BCI (brain-computer interface) technology several years ago, my first thought was not “wouldn’t this be useful for medical patients?”, but rather “this holds an amazing amount of promise in the arts”. Everyone seems to be thinking about low level spatial movements – things on the order of controlling limbs with thoughts or – if they’re ambitious – moving pointers across a screen. None of them seem to be thinking thoughts such as “Can we decode artistic or musical ideas and represent them instantly on a computer?” Now that would be amazing.
It’s a significant research challenge, but the idea needs to be present before the research can begin.
Upon hearing the expression “I just want my piece of the pie”, I just thought how inappropriate this statement is. No one has the right to give me a piece of a pie if I’m the one who baked it.
Two of the biologists we’re collaborating with at the lab brought up a good point today: we’re submitting papers that are written from the perspective of computer science to biomedical informatics conferences. No wonder we’re getting so many reviews in the neighborhood of “Why does this work?” when discussing topics that are assumed to be background! (Although I still don’t consider this a valid excuse to reject a paper; another’s lack of understanding is not a problem with my idea). I suppose we really should be submitting to different conferences, or including different sorts of information in the paper.
No. 21 is the Waldstein. No. 23 is the Appassionata. No. 26 is Les Adieux. No. 29 is the Hammerklavier. And no. 27 just sounds good 🙂
Beethoven had some very nice earlier sonatas (no. 1, no. 8, no. 14, no. 17, to name a few), but nothing matches the depth of the 20s.
Using k-means clustering on spatial data requires linearizing the dataset. In a traditional row-major linearization, pixels at the right end of one row and the left end of the next are treated as neighbors, which is inaccurate (using a Hilbert curve might be a better idea). This stems from the fact that observations are stored as a matrix: observations in the rows, features in the columns. Using tensors, it seems we can store the spatial information as well as any constituent features – the dimensionality of the dataset would then be the rank of the tensor minus one, with the final dimension reserved for features.
That would seem more efficient. Maybe I should see whether it has been done already.
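The Hilbert-curve alternative is easy to sketch. Below is the standard iterative index-to-coordinate conversion for a 2^k × 2^k grid, compared against row-major order: consecutive Hilbert indices are always grid neighbors, while row-major order teleports from the end of one row to the start of the next. The 8 × 8 grid size is just for illustration.

```python
# Locality of Hilbert-curve linearization vs. row-major ("raster") order.

def hilbert_d2xy(n, d):
    """Map index d (0 <= d < n*n) to (x, y) on the n x n Hilbert curve.

    n must be a power of two. Standard iterative algorithm: consume two
    bits of d per level, rotating/reflecting the quadrant as needed.
    """
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                  # rotate/reflect this quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def rowmajor_d2xy(n, d):
    return d % n, d // n

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

n = 8
# Every step along the Hilbert curve moves to an adjacent pixel...
hilbert_steps = [manhattan(hilbert_d2xy(n, d), hilbert_d2xy(n, d + 1))
                 for d in range(n * n - 1)]
# ...whereas row-major order jumps across the whole image at row boundaries.
raster_jump = manhattan(rowmajor_d2xy(n, n - 1), rowmajor_d2xy(n, n))
```

So a Hilbert-curve linearization at least keeps consecutive observations spatially adjacent; whether k-means on that ordering, or the tensor formulation, serves better is exactly the question worth checking against prior work.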