Yeah. I told you so?
Content lock-in is becoming ridiculous
So apparently I now need to hack my phone just to load it with ringtones made from songs I performed myself.
Verizon claims that copying ringtones via the SD card is not permitted because it would violate artists’ rights.
What artists? What rights?
I’m thinking of returning the phone.
Update: BitPim to the rescue!
Dissertation – Week 4
I’m playing catch-up from last week, due to the hefty machine learning workload I was given then. I’ve finished my 10 pages for this week, so the remainder is simply make-up work.
I’m done with the background, though I’ve just decided to add CUR and CANDECOMP to the mix. I’m performing the wavelet experiments now; with luck we will be able to apply tensors to them after this week is over. I also found a few problems with the way SVD is described in our grant proposal; I’ve made sure to avoid replicating those mistakes in my paper.
The paper is starting to get very… verbose. But I guess that’s to be expected after writing 30 pages on something that really doesn’t need it.
On the upside, I’m more than 1/5 of the way done, according to the number of pages I’ve written. Yay 🙂
Sometimes I miss being a programmer…
As I sit 30 pages deep into the maze of English, mathematics, and mathematical English (which is a language in its own right) that is my dissertation, I can’t help but reminisce about the days when I just used to code all day. It didn’t matter what I was writing; every project became a labor of love, though it was eked out in a battle for mastery against a mercilessly correct machine and the equally merciless ambiguities of the human mind. Receiving an interview feedback form from Google brought me back for a time, forced me to remember all of my victories – and defeats – as I tried to impart the thoughts that flitted through my mind at the interview.
I’ve spoken of my childhood already: of the early victory that was Metasquarer, of the elation and superlative mastery that breathed life into Final Aegis, and of the zero-sum victory in the PlanetSourceCode contest that firmly embedded a non-competition principle into my code of ethics.
My primary thoughts today did not trace over those paths so much as my more recent evolution as a programmer: the culmination of my long years of study, the final self-acknowledgment of mastery (I’m always the last one to acknowledge it), and the associated conclusion that programming was no longer a challenge worthy of being my primary activity. The evolution of programming from the desktop to the web only reinforced this; “programmers” these days are more likely to use languages such as JavaScript and HTML (which I still consider a markup language rather than a programming language) than C++ or Java. Fun as that is, it’s web development, and its practitioners tend to understand neither the elegance of – nor the need for – a good computer program. “Why compute squares on a board in O(n) when you can do it in O(n⁴) by scanning the whole board for each point?” sums the attitude up. “Computers are getting faster, so who will notice?” (Well, you might, if your program becomes popular and your server goes down in flames as the number of users grows.) I even proposed a new paradigm that built classes bottom-up (by their behavior) instead of top-down (by their structure); it was promptly ignored, since most people can’t see the point and prefer to work top-down. (A study I can no longer find concluded that although top-down programming is encouraged and perceived as more efficient, the best programmers tend to work bottom-up – which is true of the way I generally code as well, though I’ve become more amenable to top-down approaches as I’ve grown.)
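To make the complexity quip concrete, here is a minimal sketch of the O(n) version, under the assumption that “squares on a board” means squares completed by a newly placed piece in a MetaSquares-style game (the function name and setup are mine, not from any real codebase): instead of rescanning the board for each point, you loop once over the n pieces already placed and check candidate corners with O(1) set lookups.

```python
def squares_completed(p, placed):
    """Count squares (any orientation) completed by placing point p,
    where `placed` is a set of (x, y) points already on the board.
    Runs in O(n) for n placed pieces, versus an O(n^4)-style rescan."""
    count = 0
    px, py = p
    for qx, qy in placed:                  # one O(n) pass over existing pieces
        dx, dy = qx - px, qy - py
        # Treat segment p->q as one SIDE of a square; the other two corners
        # are offset by the perpendicular vector, in either direction.
        for sx, sy in ((-dy, dx), (dy, -dx)):
            if (px + sx, py + sy) in placed and (qx + sx, qy + sy) in placed:
                count += 1
    # Every square through p has two sides at p, so each is found twice.
    return count // 2
```

For example, placing at (0, 0) when (1, 0), (0, 1), and (1, 1) are on the board completes exactly one square, and the same check handles tilted squares for free.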
In the end, I decided it was time to move on from programming, so I went to grad school to study algorithms.
Well, fast forward through all of the application drama (the righteous indignation still hasn’t faded; it probably never will, since my entire life plan was essentially derailed and had to be rebuilt) and I am now at Temple studying biomedical data mining, and the last people I want to work with are the ones who study algorithms. I’ve never met such an unhappy yet demanding group of people in my life. Instead of focusing my efforts on programming, I am now focusing them on… well, everything, but especially research at the moment. I still code enough to keep my skills sharp, but only in support of my other activities. Coding for the sake of coding has been lost.
It’s something I miss from time to time, but it almost seems as if the world itself has moved past the need when I wasn’t looking – or perhaps I’m now content to describe the solution without expending the effort of implementation, since I know no one will bother with it anyway. Whatever the reason, I sometimes feel orphaned from the first thing I was really really good at.
I’m thinking about taking a job that primarily involves programming when I graduate. I started the doctorate with the notion that I was doing it more for the training than the degree, and I meant it, but I badly misjudged the research community and thus I now spend most of my time writing about concepts that anyone who cared could find in a textbook, just so I can present my new idea while meeting some sort of expected page limit (they call it “scope”) on my dissertation. I don’t know if I want to deal with this for the rest of my life. I love coming up with new ideas, but… there’s so much meaningless work that accompanies it! So much bureaucracy, so much conformity, even some hypocrisy… just to maintain a job that isn’t even particularly rewarding to begin with. I love research, but I can’t stand the way research is practiced, while I also love programming and can at least tolerate the way programming is practiced.
The idea of taking an easy job and doing my research independently looks more and more intriguing…
Manifold learning in AI
Manifold learning techniques such as semidefinite embedding (SDE) can take data from a high-dimensional space and re-describe it in terms of its underlying degrees of freedom. Low-level concepts such as “collection of pixels” thus become integrated into higher-level concepts such as “teapot rotated at this angle”.
In other words, this is how you teach a system abstraction, so something of this nature may be a necessary component of an artificially intelligent system. The only problem is that current methods may be computationally infeasible for this use; approximation would be a natural remedy.
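A minimal numerical sketch of the “degrees of freedom” idea, with caveats: this is classical MDS, not SDE itself (SDE additionally requires a semidefinite-programming solver to “unfold” curved manifolds), and the data is a synthetic stand-in for “teapot images at angle θ” – each observation is a linear image of (cos θ, sin θ), so the hidden parameter surfaces as a sharp drop in the Gram-matrix spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 50, 100
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # the one free parameter
A = rng.standard_normal((D, 2))                           # arbitrary "camera" map
# n observations in R^D, all generated by a single angle:
X = (A @ np.vstack([np.cos(theta), np.sin(theta)])).T

# Classical MDS: double-center the Gram matrix and read off its spectrum.
G = X @ X.T
J = np.eye(n) - np.ones((n, n)) / n                       # centering matrix
evals = np.linalg.eigvalsh(J @ G @ J)[::-1]               # descending order
# evals[0], evals[1] >> evals[2] ~ 0: despite living in 50 dimensions, the
# data occupies a 2-D plane (the circle traced by theta). A nonlinear method
# such as SDE would go further and recover the single angular parameter.
```

The point of the sketch is only the spectrum: out of 100 eigenvalues, two carry essentially all the energy, which is exactly the “collection of pixels → pose angle” compression described above.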
Notation decision
I am going to use vectors to indicate tensor dimensionality. Introducing my own notation would have been messy.
"Fit", "Broader Impacts"
The world would be a much better place if everyone stopped worrying about whether ideas “fit” the purposes of their specific organization / community and simply accepted them on their perceived merit (again, my philosophy holds that the absolute merit of an idea is inestimable).
And I’d love it if I could stop having to explain how my theoretical computer science research helps every minority under the sun (but not white males; that’s taboo). First of all, it’s very difficult to explain how developing a streaming kernel PCA algorithm helps starving children in Africa. Second, my research ultimately helps everyone (by adding to knowledge, which can then be used in all sorts of ways) or no one (if nothing is ever built on it). Which of those it is depends entirely on how the ideas are used.
If these things are more important than the quality of the research, it’s no wonder the USA is losing its technical edge!
The Researcher's Golden Rule
“It always needs more study” 🙂
Streaming Semidefinite Embedding
I’m posting this just for the purpose of timestamping. Today I proposed an idea to stream kernel learning techniques such as semidefinite embedding. The trick is to feed the kernel matrix in one row and column at a time (since the matrix is symmetric, that’s really just one row) and update using incremental kernel PCA. The result is an algorithm that only needs to store O(N) elements in memory at a time rather than N².
Finer-tuning the Strong Anthropic Principle
The strong anthropic principle states that any viable universe must have the capacity for observation; that is, life must evolve in it. If the many-worlds interpretation is correct, however, we may not all be in the same universe (and if “quantum immortality” is correct, it gets even weirder: people might share a universe at one moment and then diverge forever at a future branching point, as their own survival takes them along different paths).
I wonder whether we can propose a “strongest anthropic principle” of some sort that roughly states that the particular universe each person (or lifeform in general) inhabits evolved specifically for that person/organism rather than for the general existence of life as a whole. The existence of multiple universes would permit it.
We could even take it further, actually, and permit free will under the assumption of determinism (disclaimer: I am not a determinist), though this treads dangerous philosophical and theological ground because it would essentially argue that we are God: if the initial state of the universe is organized for a specific organism, it may be organized for the organism’s free will, or even by the organism’s free will.
Not that I believe this, but the ideas are intriguing. It’s the ultimate philosophy of egocentrism 🙂