One in a Thousand


Yesterday I saw a report on an experiment we were running at Khan Academy.  Students in the experimental group were becoming proficient at more exercises than students in the control group.  The effect was highly significant (p < 0.001)!  Hurray, the experiment worked, time to make a change, right?

Not so fast.  This was an A/A test, meaning the experimental condition was exactly the same as the control condition.  There should not have been any difference at all.  So what went wrong?

Let's think about what p < 0.001 means.  Roughly speaking: if there were no real effect, we would see a difference this large less than one time in a thousand.  Loosely, it is like laying 1000-to-1 odds that the effect is real.  Big confidence, right?  Maybe too good to be true?  How often are we this confident in life?  The answer is seldom.
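To see what one-in-a-thousand confidence should look like in practice, here is a minimal simulation of repeated A/A tests (a sketch in Python; the group sizes, distributions, and run counts are all made up).  Since both groups are drawn from the same distribution, a well-behaved test should cross p < 0.001 only about 0.1% of the time:

    # A/A simulation: both groups come from the SAME distribution, so any
    # p < 0.001 result is a false positive.  A well-behaved test should
    # produce one only about 0.1% of the time.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    runs, n = 20_000, 200  # 20,000 simulated experiments, 200 students per group

    a = rng.normal(loc=10.0, scale=3.0, size=(runs, n))  # "control"
    b = rng.normal(loc=10.0, scale=3.0, size=(runs, n))  # "experiment" (identical)

    _, p = stats.ttest_ind(a, b, axis=1)  # one t-test per simulated experiment
    print(f"p < 0.001 in {np.mean(p < 0.001):.3%} of A/A runs")  # ~0.1%

If your A/A tests trip that threshold much more often than 0.1% of the time, the problem is in your pipeline, not your product.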

How often would you lay 1000-to-1 odds that one team will beat another?  Maybe if the Harlem Globetrotters were playing, except even they lose 1.5% of their games.  Even really low-performing teams usually win more than 1 in 1000 games.  It would have to be a very rare matchup to inspire such confidence.

What if I bet you that when you flip your light switch, the light will turn on?  Easy bet to win, until the light bulb burns out.  And how many flips does a bulb survive?  Well, if you keep your light on for an hour at a time, an incandescent bulb will burn out after roughly 1,200 switches.  And then there's the chance of a brownout, a circuit breaker trip, or a rat gnawing on the wires.  A 1-in-1000 bet might be reasonable here, but just barely.

Now what if we were dealing with a teaching method?  Say I bet you, at 1000-to-1 odds, that Montessori schooling is better for students than traditional public schools.  Is it a fair bet?  How could you possibly have the expertise to judge so strongly?

If you see a p-value that is highly significant, you should get excited.  Not that things are working, but that something is wrong.  Wrong with your statistics, wrong with your experiment, wrong with you.  You should vet the process every which way in search of the source of the overconfidence.

In our case it was bad stats: a few renegade bots were driving up the proficiency counts, and our statistical test was brittle in the presence of these outliers.  A non-parametric test would not have been confused (a quick sketch of why appears below).  But this is not an essay extolling the virtues of non-parametric tests.  It is an essay extolling the virtues of common sense, skepticism, and rational uncertainty.
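For the curious, here is the failure mode in miniature (all numbers invented for illustration): two identical groups of students, plus a few bots in the experimental group racking up huge proficiency counts.  The t-test, which compares means, is dragged around by the outliers; the rank-based Mann-Whitney U test barely notices them.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.poisson(lam=5, size=2000)     # exercises mastered per student
    experiment = rng.poisson(lam=5, size=2000)  # same condition, same distribution
    experiment[:20] = 500                       # a few renegade bots

    _, p_t = stats.ttest_ind(control, experiment)
    _, p_u = stats.mannwhitneyu(control, experiment, alternative="two-sided")
    print(f"t-test:       p = {p_t:.2g}")  # wildly significant: that's the bots talking
    print(f"Mann-Whitney: p = {p_u:.2g}")  # no real difference detected

The rank test only cares that a bot scored higher than other students, not how much higher, so twenty absurd accounts cannot move it far.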
