On the originality of theses:
http://www.phdcomics.com/comics/archive/phd100307s.gif
I just read an article on Slashdot about the “power of algorithms” (really about involving both humans and machine learning in solving problems, which is a far cry from the theoretical study of algorithms, but the article just demonstrates that most people can’t distinguish between the two). My first thought was that the growing awareness the article summarizes arrived approximately a year and a half too late for me: the root cause of my problems with graduate schools last year (both regarding admissions and independent learning opportunities) seems to have been my choice of algorithms as a (“cold”) field of study.
The tide of increased awareness arrives at the perfect time for me to land a very cushy job in industry after obtaining my doctorate. Unfortunately, this is (a) not going to be a problem for me in any case, and (b) not really my goal. I want to do good science and creatively solve the big problems. Yet despite demonstrating considerable talent in the areas I want to go into (of course; I wouldn’t choose them otherwise), I haven’t been able to receive the slightest modicum of institutional training in those fields. In fact, I’ve met active resistance while attempting to procure such training. Autodidacticism is good, but it only goes so far, especially for a verbal learner like me.
And I still maintain that the research community is slighting a very powerful field that still has much room to expand.
First, happy Autumnal Equinox and Yom Kippur. If you are fasting, may your fast be an easy one.
On a more personal note, I’ve decided to go into industry post-graduation, which means I will most likely disregard my advisor’s suggestion, tempting as it might be, of taking up a postdoc at CMU. I’ve leaned this way for a while, but the decision is now definite, and only very strange circumstances will alter it. Part of it was more passive recruitment, this time from GE, Exxon-Mobil, and AT&T – all great companies, all great jobs. The credentials that set me apart (in particular, the fact that I had the highest GPA in my class) are taken into account in the hiring process, and I probably don’t need to worry about failing the interviews if I demonstrate programming and analytical abilities but can’t recite the CYK algorithm from memory (à la Google, which would have been an excellent place to work, and to continue my own research, had it worked out).
Again, contrast this with the academic response to the same credentials (plus a perfect GRE score, four glowing letters of recommendation, and a model personal statement that the companies didn’t even see!). Rejection from all of the schools that could have given me a real education (for the first time since 4th grade) forced me into Philadelphia, a city I loathe so deeply that I structure my entire schedule to minimize my time there. This conflicts frequently with my advisor, who attempts to maximize my time in the lab – which also minimizes my productive time by jamming a 3+ hour daily commute into the works. It’s something that still burns within me, for the effects of the decision are permanent: I’m not receiving earnest scientific training, I’m not working in theory, my efforts to acquire proficiency in other fields are being severely and quite deliberately suppressed, I still haven’t found my relative proficiencies or limits (or even learned study habits!) because I’m still effortlessly outstripping my class, and on top of it all I’m being pushed into traveling for 3 hours to one of my least favorite places on the planet at every available opportunity. I don’t know if I’ll ever get over it; it has completely altered the course of my life, my career, and my scientific output, and not for the better. This is why I can’t even talk about it without the discussion degenerating into a rant. It was simply unjust for someone with my proven ability and purity of intent to be denied a high-quality education – essentially left to wither – at the crux of his scientific career.
It’s not just a bias born of past slights, however. I’d put those aside for my career’s sake but for one thing, which I notice more with every passing day: the career of a scientist is socially constructed. You are nothing if you are not revered by your peers. One of my own theories (my 19th psychological postulate) states that a caste’s attitude towards you is a reflection of its own social values; an immediate corollary is that entire sectors will hold rather consistent attitudes towards you over time. I wouldn’t have formulated it if I didn’t believe it; my past reception by both sectors is a reflection of the direction my career will take.
And then there’s the research environment. I firmly believe that, for a theoretician, the lab is the last place to go to do research. Einstein had it right – stay in the patent office and devote your mental resources to your theories. Communication is a distraction. Travel is a distraction. An unfamiliar environment is (initially) a distraction. The lack of tools can be a distraction. Relying upon other people can be a BIG distraction if you’re not fortunate enough to end up with people who can be bothered to pull their own weight. (A recent journal paper was completed in January, save for one experimental result that I must rely on the UPenn people on our team to produce, as they have the classifier that can generate it. Guess what? The paper still hasn’t been published!)
Here’s one that most people miss: devoting long stretches of contiguous time to research is itself a distraction! Think about a single approach for too long and you become entrenched in one mode of thought. Again, we have a historical precedent for the effectiveness of simply thinking about other things, this time in Edison.
So yes, I do believe that industry would give me a better environment than academia. It also provides a useful fallback option, in the form of a development position, in case I become so disgusted with the way research is done that I decide to leave it altogether.
I hypothesized that most executives have names near the beginning of the alphabet. I decided to check this hypothesis on Google’s list of executives (sample size 42; I left out the board of directors), ran a simple linear regression analysis, and found a trendline:
y = -0.106x + 3.089
Where x = 1 signifies ‘A’, x = 2 signifies ‘B’, and so on. Given that the largest bin count (one bin per alphabetic character) was 4 and the domain spans 26 letters, this is a reasonably pronounced trend. (This is a cursory analysis; I’m not claiming anything particularly rigorous, so pardon the lack of t-tests and other heavy-duty techniques.)
There were 27 executives with last names in the range A-L and 16 with last names in the range M-Z. The graph was trimodal, with peaks in C-D, L-N, and S-T.
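The fit itself is a few lines of Python. The per-letter counts below are invented placeholders (not the actual roster – just a similarly front-loaded distribution), but the closed-form least-squares calculation is the same one that produced the trendline above:

```python
# Toy ordinary-least-squares fit of executive-surname-initial counts
# against alphabet position. The counts are HYPOTHETICAL stand-ins.

def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# x = 1 signifies 'A', x = 2 signifies 'B', ..., x = 26 signifies 'Z'.
xs = list(range(1, 27))
counts = [4, 3, 4, 3, 2, 2, 3, 1, 0, 1, 2, 3, 2,
          2, 0, 1, 0, 1, 2, 2, 0, 0, 1, 0, 0, 0]
slope, intercept = ols_fit(xs, counts)
print(f"y = {slope:.3f}x + {intercept:.3f}")
```

A front-loaded distribution like this one yields a negative slope, matching the shape of the real result.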
Writing begins now. I don’t want to disclose my topic just yet, as it’s a fairly new area and thus one that I can do some significant things in before the low hanging fruit is all picked, but I will say that it involves tensors 🙂
There’s an idea that simply won’t leave me alone – coupling largely deterministic algorithms with machine learning in order to guide behavior (but not completely determine it, as with genetic algorithms). For example, if we want to search for objects with a color attribute of red, and ML techniques tell us that red clusters with blue, we can binary-search to a blue (or red) object and focus the search there. In this way, we can impose an ordering on data that doesn’t have as natural an ordering as the ordinal numbers do. As long as the ML doesn’t dominate the algorithm’s running time, this could speed up the creation of indices and search operations, among other things.
Just a thought.
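A minimal sketch of the idea, with an invented “learned” 1-D embedding standing in for the output of a real clustering model: sort the data by embedding value so that related categories (here, red near blue) end up adjacent, and then a binary search lands directly on the contiguous run for the target category.

```python
# Sketch: impose a learned 1-D ordering on categorical data so binary
# search can find a category's contiguous block. The embedding values
# are invented; in practice they would come from an ML model.
import bisect

learned_position = {"red": 0.10, "blue": 0.15, "green": 0.70, "yellow": 0.75}

items = ["green", "red", "yellow", "blue", "red", "green"]

# Build the index: sort items by their learned position.
index = sorted(items, key=lambda c: learned_position[c])
keys = [learned_position[c] for c in index]

def find(color):
    """Binary-search the sorted index for all items of the given color."""
    lo = bisect.bisect_left(keys, learned_position[color])
    hi = bisect.bisect_right(keys, learned_position[color])
    return index[lo:hi]

print(find("red"))  # both "red" items, located in O(log n)
```

Because each color maps to a distinct embedding value, all items of one color occupy a contiguous slice of the sorted index, so `bisect` recovers them without a linear scan.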
I think I misunderstood the reason I was being asked to show up frequently in the lab. I think it isn’t so much about face-to-face communication (which there never was much of even when I was there), but rather about being seen there by whoever apportions lab space. Still, it’s a high price to ask when every day that I commute loses at least 3 hours of productive time… and an even higher one to ask of a naturalist amidst one of the most urban environments in the country.
Well, this is great. A quick search reveals that just about everyone coming out of Temple’s music program is writing atonal, or at best weakly tonal, music, which explains a lot of the pressure I’ve encountered (and resisted) to abandon the concepts of tonality. If I can’t write tonal music, I don’t want to write music at all.
Again, I am getting the feeling that I simply don’t belong in this school. I can’t study algorithms, I can’t study mathematics, and I can’t study tonal music composition. Of course, I could have studied all three at any of the schools I didn’t get into. Apparently only the ivies recognize the value of timeless instruction (more likely, they just have enough prestige to secure funding even in fields that are not “hot” at the moment); everyone else just seems to want to chase after fads. The only consolation is that I am interested in biology as well, which is something I can study here for once.
Monmouth opened up possibilities, but all Temple has done is close them thus far.
Why do I need to write 2-page (or longer, usually much longer) papers on ideas I can summarize in three words? For example, say I discovered a new Mersenne prime (not something I would waste my time on!) and wanted to publish a paper on it. What would it look like?
Well, the meat of the paper can be summarized in three words: “2^x-1 is prime”. But instead you’d get a long treatise on what Mersenne primes are, how they were discovered, the connection with even perfect numbers, the current role of GIMPS, applications to cryptography, and future work (finding even larger primes, duh!). Somewhere about halfway through the paper, I’d mention that I found a new prime, using those same three words – but only after explaining how I set up the software, conducted the search, and so on. If I wrote up another paper on the same topic, I’d just spit out the same information again, but with different wording.
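For what it’s worth, even the three-word result has a compact computational core. The standard Lucas–Lehmer test (the one GIMPS runs, shown here in unoptimized form) decides whether 2^p - 1 is prime in a handful of lines:

```python
# Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime iff
# s_(p-2) == 0 (mod M_p), where s_0 = 4 and s_(k+1) = s_k**2 - 2.

def lucas_lehmer(p):
    """Return True iff 2**p - 1 is a Mersenne prime (p an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19]  (11 fails: 2047 = 23 * 89)
```

The real work in GIMPS is the FFT-based squaring of million-digit numbers, but the test itself really is this short.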
This is one of the root causes of problems in science today.
Recent research has attempted to mathematically quantify the probability that we are running in a computer simulation. The arguments made are plausible, but the researcher ultimately just guesses a 20% probability based on his “gut instinct”, which is hardly scientific. Since I am a staunch nondeterminist, it would take more than this to convince me. However, let’s assume the researcher is correct for the sake of argument. For a culture to conceive of a computer powerful enough to simulate not only the structure of the universe, but also the intelligence of the “NPCs” (that is, us), it would need to be quite advanced. Since any culture that advanced must have a significant measure of intelligence, it stands to reason that at least some among them are themselves nondeterminists (from a purely philosophical point of view, having nothing to do with one’s understanding of physics). Thus, paradoxically, we would expect the world to be more random than it is. If the simulation in question were some sort of game, as some have postulated, I would also expect it to be more epic (though this depends very heavily on what whoever is doing the simulating considers entertainment).
Additionally, if a 20% probability exists that we are in a simulated universe, what of the one doing the simulating? Since the researchers assume nothing about the actual structure of the universe, that means that universe should also have a 20% chance of being simulated. As the number of universes approaches infinity, the probability that they are all simulated approaches 0, but the problem would become computationally intractable by even the most advanced computers long before that happened (regardless of the power of a computer that a civilization could build, each successive simulation would require a more powerful computer, as it would need to emulate the capabilities of the computers inside of the simulation). Thus the researcher’s assumption that it is highly probable that a computer powerful enough to simulate the universe exists may be flawed.
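The arithmetic behind that last point is worth making explicit. If each universe independently has the article’s guessed probability p = 0.2 of itself being simulated, a nested chain of n simulated universes occurs with probability p^n, which collapses toward zero almost immediately:

```python
# Probability that an n-deep chain of universes is all simulated,
# assuming each level is independently simulated with p = 0.2
# (the article's guessed figure).
p = 0.2
chain = [round(p ** n, 6) for n in range(1, 6)]
print(chain)  # [0.2, 0.04, 0.008, 0.0016, 0.00032]
```

And that is before accounting for the compounding computational cost: the probability vanishes mathematically long before the engineering does.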
Finally, there is the question of actual theoretical intractability, which is probably my strongest counterargument. Regardless of the laws of physics, the laws of mathematics are largely independent of the universe we live in. Some of the processes within the universe are NP-complete or EXPTIME-complete, and do not lend themselves to solutions for large input sizes even on the fastest of computers. When dealing with the scale of a universe, the inputs are immense, and the processes taking place would be simply impossible to simulate unless every atom were itself a computer. This is highly improbable in a simulation for a number of reasons, chief among them that the overhead of communicating between these systems to create a single undivided universe would be greater than that of solving the problem in the first place, and that from an engineering perspective it would make more sense to centralize the system’s operations so it would be easier to manage and monitor.
That’s another thing – if this were a simulation, we would expect “backdoors” built into the universe. Even if we could not use them ourselves, we should expect evidence of their use by those simulating the universe. And since they’re running things, there is a chance that they check in on us every so often, as intelligent beings (even if we’re not the subject of this simulation, which is quite possible) – in which case we would expect them to have made contact already.