[visionlist] Statistical Criticism is Easy; I Need to Remember That Real People are Involved

Brad Wyble bwyble at gmail.com
Mon Nov 20 13:35:23 -05 2017


That's a very good point, Ian. There are infinitely many hypotheses that
provide a perfect fit to any pattern of data.
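
To make that concrete, here is a toy sketch (all numbers invented): the
interpolating polynomial fits a finite data set exactly, and so does that
polynomial plus any multiple of a function that vanishes at the observed
x-values, which already gives infinitely many "perfect" hypotheses.

    import numpy as np

    # Hypothetical data: five (x, y) observations.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.2, 0.7, 1.9, 2.4, 1.1])

    # Hypothesis 1: the degree-4 interpolating polynomial fits the data exactly.
    coeffs = np.polyfit(x, y, deg=4)

    # Hypotheses 2, 3, ...: add any term that vanishes at the observed x's,
    # e.g. c * sin(pi * x) for any amplitude c; the fit at the data is unchanged.
    def model(xs, c):
        return np.polyval(coeffs, xs) + c * np.sin(np.pi * xs)

    for c in (0.0, 1.0, 100.0):
        print(c, np.allclose(model(x, c), y))  # True for every c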

On Mon, Nov 20, 2017 at 4:46 AM, JERMYN, IAN H. <i.h.jermyn at durham.ac.uk>
wrote:

> Hi Todd,
>
> I hope it is OK for me to comment on this thread as a bit of an outsider.
>
> > What I meant to say was that a p-value tells you how likely your data
> are given the null hypothesis; it doesn’t really say anything about the
> probability of the null hypothesis. So a SMALL p-value means that my data
> are unlikely given the null hypothesis, and a LARGE p-value means my data
> are likely given the null hypothesis... but they could be even more
> compatible with some other hypothesis!
>
> That is a nice summary. I would narrow it down even further: the p-value
> tells you the probability, under the null hypothesis, of getting the value
> of your chosen test statistic computed from your data, or any greater value
> (values which, of course, do not correspond to your data). (See the
> simulation sketch after the quoted thread below.)
>
> The data are always more compatible with some other hypothesis; the
> question is whether they are more compatible with some other plausible
> hypothesis. But then perhaps we should weight these hypotheses according to
> their plausibility... which starts to sound familiar...
>
> Ian.
>
>
>
> --------------
>
> Ian H. Jermyn
>
> E: i.h.jermyn at durham.ac.uk
>
> --------------
>
> Department of Mathematical Sciences
>
> Durham University
>
> Science Laboratories
>
> South Road
>
> Durham DH1 3LE
>
> United Kingdom
>
>
> ------------------------------
> *From:* visionlist <visionlist-bounces at visionscience.com> on behalf of
> Horowitz, Todd (NIH/NCI) [E] <todd.horowitz at nih.gov>
> *Sent:* 16 November 2017 20:45
> *To:* visionlist at visionscience.com
>
> *Subject:* Re: [visionlist] Statistical Criticism is Easy; I Need to
> Remember That Real People are Involved
>
>
> Oops!
>
>
>
> What I meant to say was that a p-value tells you how likely your data are
> given the null hypothesis; it doesn’t really say anything about the
> probability of the null hypothesis. So a SMALL p-value means that my data
> are unlikely given the null hypothesis, and a LARGE p-value means my data
> are likely given the null hypothesis... but they could be even more
> compatible with some other hypothesis!
>
>
>
> thanks
>
> Todd
>
>
>
> *From:* Horowitz, Todd (NIH/NCI) [E] [mailto:todd.horowitz at nih.gov]
> *Sent:* Thursday, November 16, 2017 2:00 PM
> *To:* Pam Pallett <ppallett at gmail.com>; visionlist at visionscience.com
> *Subject:* Re: [visionlist] Statistical Criticism is Easy; I Need to
> Remember That Real People are Involved
>
>
>
> I thought that a large p-value simply meant that my data were unlikely
> given the null hypothesis, a statement which yields no evidence about
> either the null or the alternative hypothesis.
>
>
>
> *From:* Pam Pallett [mailto:ppallett at gmail.com]
> *Sent:* Thursday, November 16, 2017 10:29 AM
> *To:* visionlist at visionscience.com
> *Subject:* [visionlist] Statistical Criticism is Easy; I Need to Remember
> That Real People are Involved
>
>
>
> Hi All,
>
>
>
> I came across a blog today by Frank Harrell, Professor of Biostatistics
> and Founding Chair at Vanderbilt.  The title of this email is taken from
> his most recent post.  As I read through his blog, I hear a lot that has
> been discussed and experienced by the professors and postdocs subscribed
> to this list.  We are often quite separated from our neighboring
> departments, and I actually found some comfort in the fact that these
> problems seem to be spread across the board (misery loves company), even
> if we have been echoing them for over a decade with little effective
> change.
>
>
>
> In his most recent post he says, "There are several ways to improve the
> system that I believe would foster clinical research and make peer review
> more objective and productive." I'm curious about what the people in the
> vision community think of these suggestions and whether they are realistic
> to implement in our field.  His list is at the bottom of the entry.
> http://www.fharrell.com/2017/11/
>
>
>
> For those experiencing TL;DR, here is the shortlist:
>
> - Have journals conduct reviews of background and methods without
> knowledge of results.
>
> - Abandon journals and use researcher-led online systems that invite open
> post-"publication" peer review and give researchers the opportunity to
> improve their "paper" in an ongoing fashion.
>
> - If not publishing the entire paper online, deposit the background and
> methods sections for open pre-journal-submission review.
>
> - Abandon null hypothesis testing and p-values. Until then, always keep in
> mind that a large p-value means nothing more than "we don't yet have
> evidence against the null hypothesis", and emphasize confidence limits.
>
> - Embrace Bayesian methods that provide safer and more actionable
> evidence, including measures that quantify clinical significance. And if
> one is trying to amass evidence that the effects of two treatments are
> similar, compute the direct probability of similarity using a Bayesian
> model (a minimal sketch of this idea appears after the quoted thread
> below).
>
> - Improve statistical education of researchers, referees, and journal
> editors, and strengthen statistical review for journals.
>
> - Until everyone understands the most important statistical concepts,
> better educate researchers and peer reviewers on statistical problems to
> avoid <http://biostat.mc.vanderbilt.edu/ManuscriptChecklist>.
>
>
>
> Best,
>
> Pam Pallett
>
> _______________________________________________
> visionlist mailing list
> visionlist at visionscience.com
> http://visionscience.com/mailman/listinfo/visionlist_visionscience.com
>
>
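
Since the definition of the p-value comes up a few times above, here is a
minimal simulation sketch of it as a tail probability (just an illustration,
not anything from the thread: the permutation test, the difference-in-means
statistic, and all numbers are invented):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-group data (n = 20 per group).
    a = rng.normal(0.3, 1.0, 20)
    b = rng.normal(0.0, 1.0, 20)
    observed = a.mean() - b.mean()  # chosen test statistic

    # Simulate the null hypothesis of "no group difference" by permuting
    # the group labels, recording the statistic each time.
    pooled = np.concatenate([a, b])
    null_stats = []
    for _ in range(10000):
        perm = rng.permutation(pooled)
        null_stats.append(perm[:20].mean() - perm[20:].mean())
    null_stats = np.array(null_stats)

    # One-sided p-value: probability, under the null, of a statistic at
    # least as large as the one observed (a two-sided version would use
    # absolute values).
    p_value = np.mean(null_stats >= observed)
    print(f"observed difference = {observed:.3f}, p = {p_value:.4f}")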

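And for the "direct probability of similarity" item in Pam's quoted list,
here is a minimal Bayesian sketch of the general idea (again just an
illustration, not Harrell's code: the known-sigma normal model, the flat
prior, the similarity margin delta, and all numbers are invented):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical outcomes under two treatments.
    y1 = rng.normal(10.0, 2.0, 50)
    y2 = rng.normal(10.3, 2.0, 50)

    # Toy Bayesian model: normal likelihood with known sigma and a flat
    # prior on each mean, so each posterior is Normal(ybar, sigma^2 / n).
    sigma = 2.0
    post1 = rng.normal(y1.mean(), sigma / np.sqrt(len(y1)), 100_000)
    post2 = rng.normal(y2.mean(), sigma / np.sqrt(len(y2)), 100_000)

    # Direct probability of similarity: P(|mu1 - mu2| < delta | data),
    # where delta is a clinically chosen similarity margin.
    delta = 0.5
    prob_similar = np.mean(np.abs(post1 - post2) < delta)
    print(f"P(|mu1 - mu2| < {delta}) = {prob_similar:.3f}")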

-- 
Brad Wyble
Associate Professor
Psychology Department
Penn State University

http://wyblelab.com