P values, the 'gold standard' of statistical validity. Or not... http://t.co/DnqZ0VDUzC pic.twitter.com/DJbOXXTsr7
— Nature News&Comment (@NatureNews) January 18, 2015
Scientific method: Statistical errors
QUOTE:
P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume.
QUOTE:
P values have always had critics. In their almost nine decades of existence, they have been likened to mosquitoes (annoying and impossible to swat away), the emperor's new clothes (fraught with obvious problems that everyone ignores) and the tool of a “sterile intellectual rake” who ravishes science but leaves it with no progeny3. One researcher suggested rechristening the methodology “statistical hypothesis inference testing”3, presumably for the acronym it would yield.
The irony is that when UK statistician Ronald Fisher introduced the P value in the 1920s, he did not mean it to be a definitive test. He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: worthy of a second look. The idea was to run an experiment, then see if the results were consistent with what random chance might produce. Researchers would first set up a 'null hypothesis' that they wanted to disprove, such as there being no correlation or no difference between two groups. Next, they would play the devil's advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as what was actually observed. This probability was the P value. The smaller it was, suggested Fisher, the greater the likelihood that the straw-man null hypothesis was false.
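The procedure Fisher describes — assume the null hypothesis, then compute the chance of results at least as extreme as those observed — can be made concrete with a small permutation test. This is a minimal sketch, not anything from the Nature article; the function name and sample data are hypothetical, and the null hypothesis here is that the two groups are interchangeable:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Under the null hypothesis the group labels are arbitrary, so the
    p-value is the fraction of random relabelings whose mean difference
    is at least as extreme as the one actually observed.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # relabel the pooled data at random
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_permutations

# Hypothetical data: two clearly separated samples should give a small p-value.
a = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4]
b = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]
print(permutation_p_value(a, b))
```

A small result here says only that such an extreme difference would rarely arise by chance under the null — which, as the article stresses, is not the same as the probability that the null hypothesis is true.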