Bayesian: I wrote pretty much the same Python program when I was first converting to Bayesianism and finding out about likelihood ratios and feeling skeptical about the system maybe being abusable in some way, and then a friend of mine found out about likelihood ratios and he wrote essentially the same program, also in Python. And lo, he found that false evidence of 20:1 for the coin being 55% biased was found at least once, somewhere along the way... 1.4% of the time. If you asked for more extreme likelihood ratios, the chances of finding them dropped off even faster.
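For concreteness, here is a minimal sketch of the kind of simulation being described: flip a genuinely fair coin, update the cumulative likelihood ratio for "the coin is 55% heads-biased" versus "the coin is fair" after each flip, and record whether that ratio ever touches 20:1 along the way. The run length (`n_flips = 300`), trial count, and the helper name `run_trial` are my own assumptions; the quoted passage doesn't specify them.

```python
import random

def run_trial(n_flips=300, threshold=20.0, p_alt=0.55, rng=random):
    """Flip a fair coin n_flips times, tracking the cumulative likelihood
    ratio for 'heads-probability is p_alt' vs. 'the coin is fair'.
    Return True if the ratio ever reaches the threshold (false evidence,
    since the coin really is fair)."""
    lr = 1.0
    for _ in range(n_flips):
        if rng.random() < 0.5:        # heads (the data come from a fair coin)
            lr *= p_alt / 0.5         # heads favors the biased hypothesis
        else:                         # tails
            lr *= (1 - p_alt) / 0.5   # tails favors the fair hypothesis
        if lr >= threshold:
            return True
    return False

n_trials = 100_000  # assumed trial count; larger is slower but more precise
hits = sum(run_trial() for _ in range(n_trials))
print(f"False 20:1 evidence found at least once in {hits / n_trials:.2%} of runs")
```

This also matches the known theory: under the fair-coin hypothesis the likelihood ratio is a nonnegative martingale, so the probability that it ever reaches k:1 in favor of the false hypothesis is at most 1/k (here 1/20 = 5%), no matter how long you keep flipping. The empirical ~1.4% sits under that bound, and the 1/k bound tightens for more extreme ratios, consistent with the "dropped off even faster" observation.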
I'm going to play the role of the "undergrad" here and try to interpret this as follows:
Given a hypothesis that is true (but not yet known to be true), one is far more likely to come across a "statistically significant" result indicating it is wrong than to come across a result indicating that an alternative hypothesis is significantly more likely.
In simpler words: it is far easier to "prove" by accident that a true hypothesis is wrong than to "prove" by accident that an alternative hypothesis is superior (a better model of reality).
Would you consider this interpretation accurate?