My position on biofuel was just vindicated:
http://seattletimes.nwsource.com/html/nationworld/2004171188_ethanol08.html
Hopefully now we’ll see more use of truly renewable energy, such as solar, rather than a search for the next new thing to burn.
As I mentioned before, the way I see manifold learning is as a statistical model of the concept of abstraction. Give a manifold learning algorithm a dataset with rotating teapots and it’ll give you back a manifold with one parameter: the angle of rotation. It no longer compares voxels; it compares teapots, which is really cool.
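A minimal sketch of what that looks like in practice, assuming scikit-learn’s Isomap as the manifold learner and a synthetic rotating bar as a stand-in for the teapot renderings (both substitutions are mine, for illustration only):

```python
# A minimal sketch: one base image rotated through a range of angles, so the only
# thing varying across the dataset is the rotation angle itself.
import numpy as np
from scipy.ndimage import rotate
from sklearn.manifold import Isomap

base = np.zeros((32, 32))
base[14:18, 4:28] = 1.0                              # a horizontal bar
angles = np.linspace(0, 90, 60)
images = [rotate(base, a, reshape=False, order=1) for a in angles]
X = np.array([im.ravel() for im in images])          # one flattened image per row

# Ask for a one-dimensional embedding; ideally the single recovered coordinate
# varies monotonically with the rotation angle (its sign is arbitrary).
coord = Isomap(n_neighbors=6, n_components=1).fit_transform(X)[:, 0]
print(abs(np.corrcoef(angles, coord)[0, 1]))         # close to 1 if the angle was recovered
```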
So what would happen if we gave it symbolic rules? Encoding them meaningfully would be a non-trivial issue – I’m not certain it’s enough to use, say, Gödel numbering – but if we could accomplish this, what would the output be? What do the rules describe, exactly? Mere symbols? Or is there some underlying concept behind the rules? Could the algorithm abstract away the symbols to get at the heart of these concepts?
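To make the encoding question concrete, here’s a toy Gödel-style numbering (using sympy for the n-th prime; the symbol assignments are arbitrary). One obvious worry is that numerically close encodings need not correspond to similar rules, which is exactly the kind of local structure a manifold learner depends on:

```python
# A toy Gödel-style encoding, purely illustrative: a rule is a sequence of symbol
# indices, packed into a single integer via prime powers.
from sympy import prime  # prime(i) returns the i-th prime: 2, 3, 5, ...

def godel_number(symbol_indices):
    """Encode a sequence of positive integers as prod_i prime(i) ** s_i."""
    n = 1
    for i, s in enumerate(symbol_indices, start=1):
        n *= prime(i) ** s
    return n

# e.g. a rule "A -> B C" with A=1, B=2, C=3 becomes 2**1 * 3**2 * 5**3 = 2250
print(godel_number([1, 2, 3]))
```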
Today’s idea is to use ICA, as a blind source separation technique, for polyphonic music analysis.
I’ll probably pursue it.
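A minimal sketch of the separation step, assuming scikit-learn’s FastICA and two synthetic tones in place of real recordings; note that standard ICA formulations assume at least as many mixture channels as sources, which is an open question for single-channel polyphonic audio:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)                        # one second at 8 kHz
s1 = np.sin(2 * np.pi * 440 * t)                   # a 440 Hz sine ("instrument" 1)
s2 = np.sign(np.sin(2 * np.pi * 220 * t))          # a 220 Hz square wave ("instrument" 2)
S = np.c_[s1, s2]

# Mix the sources as if picked up by two microphones with different gains.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# ICA recovers the independent sources up to permutation and scaling.
S_est = FastICA(n_components=2, random_state=0).fit_transform(X)
print(S_est.shape)                                 # (8000, 2): two estimated sources
```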
Two of the biologists we’re collaborating with at the lab brought up a good point today: we’re submitting papers written from a computer-science perspective to biomedical informatics conferences. No wonder we’re getting so many reviews in the neighborhood of “Why does this work?” about material we assumed was background! (Although I still don’t consider this a valid excuse to reject a paper; a reviewer’s lack of understanding is not a flaw in my idea.) I suppose we really should be submitting to different conferences, or including different sorts of information in the paper.
Using k-means clustering on spatial data requires linearizing the dataset. In a traditional row-major linearization, pixels at the right end of one row and the left end of the next end up adjacent in index order (using a Hilbert curve might be a better idea), which is inaccurate. This stems from the fact that observations are stored as a matrix: observations in the rows, features in the columns. Using tensors, it seems we could store the spatial information as well as any constituent features – the dimensionality of the dataset would then be the rank of the tensor minus one, with the final dimension reserved for features.
That would seem more efficient. Maybe I should see whether it has been done already.
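A minimal sketch of the two layouts, assuming scikit-learn’s KMeans on a tiny random “image” (the sizes and names are illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans

H, W, F = 4, 5, 3                            # height, width, number of features
image = np.random.rand(H, W, F)              # 3rd-order tensor: rows x cols x features

# Standard layout for k-means: flatten to (observations x features). Row-major
# flattening places pixel (r, W-1) immediately before pixel (r+1, 0), so index
# adjacency wraps around the edge of the image.
X = image.reshape(-1, F)
print(np.unravel_index(W - 1, (H, W)), "is followed by", np.unravel_index(W, (H, W)))

# k-means itself never sees the grid; spatial structure is only recovered after
# the fact by reshaping the labels back onto it.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels.reshape(H, W))
```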
My ISDE work may not be original. In fact, Weinberger himself appears to have derived an incremental extension to it. However, for some reason this work was never published.
I’m not completely sure whether this is truly an incremental algorithm, however. The code’s very messy and it’s difficult to untangle what’s going on, but it seems that the algorithm relies on information about the whole dataset, which must be passed to the function.
If that is the case, I may continue with my work. If not, I may have to cease working on it, or at least attempt to show that mine is somehow better (which it might not be).
Once the fixed, in-class coursework is complete, the notion of “credits” becomes meaningless. One credit or twelve, I’m still doing the same research.
I am beginning to notice artificial impediments to my progress toward the Ph.D., since I am meeting none inherent in the work itself. I have crossed the halfway mark on my dissertation, but now I’m running into issues with the department itself. I’m not sure to what extent I’ll tolerate this, but if I’m forced to stay here for more than one additional year (through whatever means, such as putting me on a 4-credit schedule), I’ll likely drop out of the program immediately. It would be very easy; I have about 15 recruiters waiting to hear back from me. My service was given freely in exchange for scientific training. My dissertation is being written in exchange for a degree. If the terms are changed halfway through, it is not I who have reneged.
In the course of researching for the working memory paper I’m about to submit, I had to read quite a bit on Baddeley’s model of working memory. Overall, it appears to be an example of a decent but incomplete model that, rather than admitting its incompleteness, was extended in such a way that it no longer resembles its former self. The components it proposed as working memory subsystems were split apart into definitively disjoint sub-subsystems that may not even share spatial locality in the brain, casting doubt on the experiments with brain-damaged patients that supported the original model in the first place! The addition of an episodic buffer also appears to render some of these sub-subsystems redundant.
Looking back on the review of a rejected paper we’re about to resubmit, it struck me that, judging by what the reviewers wrote, this paper should have been accepted. None of the criticisms actually address the research! They’re all petty matters, most involving the fact that we used manual segmentation and thus didn’t discuss automatic methods – but this is not a paper on segmentation; it’s on diagnosing galactographic features in unenhanced mammograms. One of the reviews even says it needs more citations (it has 8). There is, of course, no good reason to cite things that you did not use, but that’s what this is asking me to do.
With each new paper I write, my faith in science dies a bit. It’s not that it doesn’t work – it clearly does – but that all of the advancements you see around you are such an infinitesimal part of what would be possible if science were truly open-minded.
I think I might just take up a programming job when I graduate. I need to find some way to gain access to the scientific equipment I need if I’m to do that, however. I don’t know if I want to continue playing the publication game, but that doesn’t mean I’m going to stop thinking.
You don’t need Panidealism to realize that scientific censorship is bad. All you need is utility theory. Be as skeptical as you’d like personally when examining a theory, but if it has any plausibility whatsoever, don’t you dare censor it.
Here’s something that was recently published that may very well be a plausible cure for Alzheimer’s disease. It appears theoretically sound to someone in the same general field but lacking expert knowledge of the pathophysiology of Alzheimer’s (in this case, me – I do biomedical research on the human brain, but of a different type):
http://www.jneuroinflammation.com/content/5/1/2
Critics wonder why the work was accepted for publication, as it mainly focuses on results in treatment of a single subject. However, subsequent analysis has indicated that it works on other subjects as well:
http://www.sciencedaily.com/releases/2008/01/080109091102.htm
Let’s say that peer review makes a mistake of some sort. We have two potential outcomes:
False positive: A paper with an unsound theory is accepted. A few scientists spend a few months subjecting it to experimentation. Empirical evidence refutes the theory. Perhaps a new treatment is designed based on the theoretical results of the original paper, or on the experimental data gathered to refute it. The total waste is a bit of effort (and not even all of that is really wasted).
False negative: A cure for Alzheimer’s never sees the light of day.
I think it’s fairly clear which mistake is worse.
From the perspective of utility theory, any time you censor an idea whose implications are not negated by a near-infinitesimal probability of truth, you lose.
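Here’s that argument as a back-of-the-envelope expected-utility calculation; every number below is a placeholder I’m making up purely for illustration:

```python
# Only the structure of the argument matters; all quantities are placeholders.
p_true = 0.01                    # assumed probability that the submitted theory is correct
cost_false_positive = 1          # a few researcher-months spent refuting a wrong theory
value_if_true = 10_000           # value of an effective treatment, in the same arbitrary units

# Accepting risks a small cost when the theory is wrong; rejecting forfeits the
# payoff when it is right.
eu_accept = p_true * value_if_true - (1 - p_true) * cost_false_positive
eu_reject = 0.0

print(f"accept: {eu_accept:+.2f}   reject: {eu_reject:+.2f}")
# With these placeholders, acceptance wins unless p_true is vanishingly small.
```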