Category Archives: Research

Conductive aerogel anodes?

Upon reading this article:
http://www.gizmag.com/tin-whisker-battery-anode/22905/

One of the first thoughts which struck me was “a conductive aerogel (e.g. carbon aerogel) would likely extend life even further”. Aerogels ordinarily have very high electrical resistance (because they’re mostly air), but certain types can be made to conduct, and they seem like the perfect material for creating compact supercapacitors.

Computer-Assisted Diagnosis is a bit strange

In that we take techniques that were developed to accommodate our human desire to deal with problems by looking at them, and then try to train computers, which have no innate sense of vision and could be using any type of sensor imaginable, to understand visual data designed for human use.

…Just a thought.

If my diagnostic company succeeds, I do plan to pursue research into new non-visual sensing technology which is more appropriate for computerized detection. The future of diagnostics is digital.

Diaphragm FES for ALS

Though restoring control of every muscle in the body to patients with ALS is far beyond what current technology can manage, electrodes on the motor strip and pons, connected to receivers in the lungs and diaphragm, seem like they would be enough to prevent further deaths from the disease, and perhaps even tracheotomies.

Is there some reason I’m missing that this wouldn’t work? Because if I can’t find one, I may very well partner with a neurologist and pursue it. I certainly have the neuroinformatics and signal processing backgrounds; I know that this scope and granularity of FES is currently well within the range of what is technologically possible.

Publication bias

There has been recent talk about a consistent pattern in scientific studies: effects are initially overstated, then become weaker and weaker under additional scrutiny. To me this is an artifact of both the difficulty of designing a truly unbiased experiment and the publication process. Mind you, there’s no foul play at work here; science is just a little less “free to explore” than most people believe it to be. But the insistence that results must always be “good” to be worthwhile ultimately harms its objectivity.

Statistical methods have become very good at reducing variance, and most researchers go to great lengths to achieve adequate sample sizes and build complex statistical models, in an attempt not only to demonstrate an effect but to demonstrate that it can hardly be due to chance (more accurately, that chance alone would produce it less than 5% of the time, anyway :))

Unfortunately, while variance is usually fairly easy to detect and deal with, there is another contributor to statistical error: bias. Each experimental setup introduces its own bias. If we assume that the biases of independent experiments are roughly random (i.e. the biases are unbiased :)), then we would expect a possible over- or understatement of an effect in the beginning, with a gradual regression to its true magnitude as additional models are “averaged into” the literature.

But the biases which have been observed are positive only. Effects are found to be *stronger* initially rather than weaker.

Here is where publication bias enters the scene: it is acceptable to publish *new* work only if its results are “good”, whatever that may mean in a given field, and experimental parameters and models will be adjusted until such results are achieved (resulting in more than a few cases of statistical overfitting, I’m sure). Negative results are not published; the work behind them is either reworked or abandoned.

But while this is true for pioneering work, it is NOT true for subsequent “reviews” of said work, which can be published simply on the basis of picking apart an existing effect.

This virtually guarantees that the initial bias will be an overstatement, and that further study will push estimates in the negative direction, presumably until the true effect is eventually approximated.
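To make the argument concrete, here is a toy simulation. It is only a sketch under assumed numbers: the true effect size, the spread of per-study bias, the threshold for a “good” result, and the rule that only the pioneering study gets reworked until it clears that threshold are all placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2       # assumed true effect size
BIAS_SD = 0.15          # assumed spread of per-study bias
GOOD_RESULT = 0.3       # assumed threshold for a "good" (publishable) pioneering result
N_STUDIES = 50
N_RUNS = 2000

def run_literature(filter_first_study):
    """Simulate one literature: a sequence of published effect estimates."""
    estimates = []
    for i in range(N_STUDIES):
        while True:
            bias = rng.normal(0.0, BIAS_SD)   # each experimental setup carries its own bias
            estimate = TRUE_EFFECT + bias
            # Pioneering work is reworked until the result looks "good";
            # follow-up work picking apart the effect is published regardless.
            if not filter_first_study or i > 0 or estimate > GOOD_RESULT:
                break
        estimates.append(estimate)
    return np.array(estimates)

for label, filt in [("unbiased biases", False), ("positive-only pioneering work", True)]:
    runs = np.array([run_literature(filt) for _ in range(N_RUNS)])
    running_mean = runs.cumsum(axis=1) / np.arange(1, N_STUDIES + 1)
    print(f"{label}: first published estimate ~ {runs[:, 0].mean():.3f}, "
          f"literature average after {N_STUDIES} studies ~ {running_mean[:, -1].mean():.3f} "
          f"(true effect = {TRUE_EFFECT})")
```

With unbiased biases the first estimate already sits near the true effect on average; with the positive-only filter the first published estimate comes out well above it, and the running literature average slowly settles back down, which is exactly the declining-effect pattern described above.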

Data Classification Based on the Immune System

Idea: a data classification metamodel based on the immune system. Train a small bag of classifiers and clone the ones that perform well, but with a small chance of random mutations to the hyperparameters. Weight classifiers created in this manner exponentially, based on the number of iterations since their last correct classification. Keep a “memory threshold” below which the weight will not fall, in case that pattern is encountered again.
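A minimal sketch of how this might look, assuming scikit-learn decision trees as the base classifiers, max_depth and min_samples_leaf as the mutable hyperparameters, and placeholder values for the population size, mutation rate, decay rate, and memory threshold (none of these names or numbers are prescribed by the idea; they are just there to make it concrete):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

POP_SIZE = 10
MUTATION_P = 0.2       # chance of mutating a hyperparameter when cloning (assumed)
DECAY = 0.9            # per-iteration weight decay while a classifier keeps misfiring (assumed)
MEMORY_FLOOR = 0.05    # "memory threshold": weight never drops below this (assumed)

def random_params():
    return {"max_depth": int(rng.integers(2, 10)),
            "min_samples_leaf": int(rng.integers(1, 20))}

def mutate(params):
    """Clone the hyperparameters, with a small chance of one random mutation."""
    new = dict(params)
    if rng.random() < MUTATION_P:
        key = str(rng.choice(list(new)))
        new[key] = random_params()[key]
    return new

class ImmuneEnsemble:
    def __init__(self):
        self.population = [{"params": random_params(), "weight": 1.0, "model": None}
                           for _ in range(POP_SIZE)]

    def fit_generation(self, X, y):
        """Train the population, then clone the better half (with mutations) over the worse half."""
        for cell in self.population:
            cell["model"] = DecisionTreeClassifier(**cell["params"]).fit(X, y)
            cell["score"] = cell["model"].score(X, y)
        self.population.sort(key=lambda c: c["score"], reverse=True)
        survivors = self.population[: POP_SIZE // 2]
        clones = [{"params": mutate(c["params"]), "weight": c["weight"],
                   "model": DecisionTreeClassifier(**mutate(c["params"])).fit(X, y)}
                  for c in survivors]
        self.population = survivors + clones

    def update_weights(self, x, y_true):
        """Decay a classifier's weight while it keeps misclassifying (floored at the
        memory threshold), and reset it once it is correct again."""
        for cell in self.population:
            correct = cell["model"].predict(x.reshape(1, -1))[0] == y_true
            cell["weight"] = 1.0 if correct else max(MEMORY_FLOOR, cell["weight"] * DECAY)

    def predict(self, x):
        votes = {}
        for cell in self.population:
            label = cell["model"].predict(x.reshape(1, -1))[0]
            votes[label] = votes.get(label, 0.0) + cell["weight"]
        return max(votes, key=votes.get)
```

Online use would alternate predict and update_weights on each incoming observation, with an occasional fit_generation call to keep the clonal selection going.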

k-nearest neighbors and the separation of powers

Decisions using the kNN framework are arrived at through a majority vote of an observation’s k nearest neighbors (given some distance metric). When aggregating many kNN decisions and weighing them against one much more important kNN decision, one strategy I’ve found to work well is to copy Congress:

The critical neighbor is “the President” and can’t “pass” the vote, but can “veto” it.
A decision is made to “pass” either on the vote of a majority of the neighbors in the absence of a veto, or given a 2/3 majority in its presence.

One example of this is aggregating decisions over a market index. Each individual asset in the index has an impact on its overall movement, but the index itself (the President) can also be analyzed directly.
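In code, the rule might look something like the sketch below. The function name, the boolean encoding of “pass”, and the way the individual signals and the “President” signal are produced (e.g. one kNN model per asset and one on the index itself) are all placeholders for whatever the application actually uses.

```python
def aggregate_with_veto(member_votes, president_vote, supermajority=2 / 3):
    """Congress-style aggregation: many kNN decisions weighed against one critical one.

    member_votes   -- list of booleans, one per individual kNN decision (e.g. per asset)
    president_vote -- boolean decision from the critical kNN (e.g. on the index itself)
    """
    share_in_favor = sum(member_votes) / len(member_votes)
    if president_vote:
        # No veto: a simple majority of the neighbors passes the decision.
        return share_in_favor > 0.5
    # Veto: a 2/3 supermajority is required to override it.
    return share_in_favor >= supermajority

# Hypothetical usage: per-asset kNN signals against a kNN signal on the index itself.
asset_signals = [True, True, False, True, False, False, True]   # 4 of 7 in favor
index_signal = False                                            # the "President" vetoes
print(aggregate_with_veto(asset_signals, index_signal))         # bare majority, no 2/3: False
```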

Cluster Validity as a Feature in Spam Classification

Many spam mails that land in my inbox tend to be thematically similar, though the messages have slight variations (perhaps they’re being sent by the same spammer). Ordinary messages do not cluster so well. Clusters formed on these spam messages should thus be “tighter” than clusters to which ordinary messages belong. Cluster membership and validity may thus be used as a feature in subsequent spam classification.
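A minimal sketch of the feature, assuming TF-IDF vectors, k-means for the clustering, and the per-message silhouette coefficient as the “tightness” measure; the toy corpus and the number of clusters are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

# Placeholder corpus: in practice this would be the incoming mail stream.
messages = [
    "cheap meds online buy now",
    "buy cheap meds now online pharmacy",
    "cheap online pharmacy meds discount",
    "meeting moved to 3pm see agenda",
    "please review the attached draft",
]

# Cluster the messages in TF-IDF space (k is an assumed placeholder).
X = TfidfVectorizer().fit_transform(messages)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Per-message silhouette coefficient: high values mean the message sits in a tight,
# well-separated cluster, the behavior described above for near-duplicate spam.
tightness = silhouette_samples(X, labels)

# 'tightness' can then be appended as one more feature for the downstream spam classifier.
for msg, score in zip(messages, tightness):
    print(f"{score:+.2f}  {msg}")
```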

How to Become a Successful Researcher Without Thinking (Depressingly Accurate Parody)

(Comic: PhD Comics.)

This is a 12-step guide for all of the researchers permanently stuck in primary integration out there. Here’s how to succeed without the obligation of forming an authentic personality:

1. Look at new papers to figure out what’s about to become hot.
2. Apply the standard techniques in this field to a new or understudied domain.
3. Find an eager young grad. student/fellow with ideas about a dissertation topic/research project.
4. Ignore said student’s ideas and unload your project onto him.
5. Eventually he will hit a roadblock that he can’t seem to get around. Tell him that what he is trying to do is impossible. (Otherwise, you’ll need to learn the subject enough to give him advice, which requires thinking).
6. A few weeks later, he’ll come back with a finished method. Tell him to write a paper on it. Stick your name on the paper. Tell him to keep going.
7. Once the method is complete, the student will start writing. Reviewing the drafts takes thought, so just ignore them.
8. If at any point the student gets close to completion, ask him some stock questions to keep him busy and tell him he needs to stay longer. (Warning signs: drafts exceed 100 pages, complete framework built around the new technique, work begins to be applied in actual practical applications, student gets restless, or job/marriage obligations arise…)
9. Repeat until grad. student suffers a nervous breakdown.
10. Copy and paste half of his draft into a grant application (do save the drafts, even if you don’t read them; one needs material to get funded).
11. Recruit new student.
12. Repeat from step 1 until dead.