Bayesian: I wrote pretty much the same Python program when I was first converting to Bayesianism, learning about likelihood ratios, and feeling skeptical that the system might be abusable in some way; later a friend of mine learned about likelihood ratios and wrote essentially the same program, also in Python. And lo, he found that false evidence of 20:1 for the coin being 55% biased turned up at least once, somewhere along the way... 1.4% of the time. If you asked for more extreme likelihood ratios, the chances of finding them dropped off even faster.
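(A minimal sketch of the kind of program being described, for concreteness. The dialogue doesn't give the actual parameters, so the run length, number of trials, and random seed below are assumptions. It flips a genuinely fair coin and checks whether, at any point along the sequence, the running likelihood ratio favoring "the coin is 55% biased toward heads" over "the coin is fair" ever reaches 20:1.)

```python
import math
import random

RUNS = 100_000     # number of simulated experiments (assumed)
FLIPS = 1_000      # flips per experiment (assumed)
THRESHOLD = 20.0   # likelihood ratio being hunted for

# Per-flip log-likelihood-ratio contributions for H1: p = 0.55 vs H0: p = 0.5
LOG_LR_HEADS = math.log(0.55 / 0.5)
LOG_LR_TAILS = math.log(0.45 / 0.5)
LOG_THRESHOLD = math.log(THRESHOLD)


def finds_false_evidence(rng: random.Random) -> bool:
    """Flip a fair coin FLIPS times; return True if the running likelihood
    ratio for 'biased 55% heads' over 'fair' ever reaches 20:1."""
    log_lr = 0.0
    for _ in range(FLIPS):
        if rng.random() < 0.5:      # heads (the coin really is fair)
            log_lr += LOG_LR_HEADS
        else:                       # tails
            log_lr += LOG_LR_TAILS
        if log_lr >= LOG_THRESHOLD:
            return True
    return False


def main() -> None:
    rng = random.Random(0)  # fixed seed for reproducibility (assumed)
    hits = sum(finds_false_evidence(rng) for _ in range(RUNS))
    print(f"Spurious 20:1 evidence appeared in {hits / RUNS:.2%} of runs")


if __name__ == "__main__":
    main()
```

With these assumed parameters the fraction comes out in the low single-digit percents, consistent with the 1.4% figure quoted; the exact number depends on how many flips each run is allowed.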
Just to be sure: does this mean that the claim "we have observed 20:1 evidence that the coin is 55% biased" is made only 1.4% of the time?
If so, that seems like a lot…