[visionlist] Statistical Criticism is Easy; I Need to Remember That Real People are Involved
Lester Loschky
loschky at ksu.edu
Sat Nov 18 22:51:45 -05 2017
I think that was what Todd was trying to say, but with a good dose of irony
added.
On Fri, Nov 17, 2017 at 7:57 AM, Roland Fleming <Roland.W.Fleming at psychol.uni-giessen.de> wrote:
>
> Hi Todd,
>
> I’m pretty sure that’s why they are advocating Bayesian approaches that
> (supposedly) do allow you to evaluate the evidence for the null hypothesis,
> as in:
>
> Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G.
> (2009). Bayesian t tests for accepting and rejecting the null hypothesis.
> Psychonomic Bulletin & Review, 16(2), 225-237.
>
> — R
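
The Bayes factor in that paper is straightforward to compute numerically. Below is a minimal sketch in Python (my own, not the paper's supplementary code), assuming a one-sample t test and the paper's default Cauchy prior scale r = 1; BF01 = 1/BF10 then quantifies evidence for the null.

# A sketch of the JZS Bayes factor from Rouder et al. (2009) for a
# one-sample t test, with a Cauchy prior (scale r) on standardized
# effect size. BF10 > 1 favours the alternative hypothesis.
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=1.0):
    """JZS Bayes factor (alternative over null) for a one-sample t test."""
    v = n - 1  # degrees of freedom
    # Marginal likelihood of t under H0 (up to a constant shared with H1).
    null = (1 + t**2 / v) ** (-(v + 1) / 2)

    # Under H1 the Cauchy prior on effect size is expressed as a normal
    # prior whose variance g has an inverse-gamma mixing distribution;
    # g is integrated out numerically (Rouder et al., 2009, Eq. 1).
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * r / np.sqrt(2 * np.pi)
                * g ** -1.5 * np.exp(-r**2 / (2 * g)))

    alt, _ = integrate.quad(integrand, 0, np.inf)
    return alt / null

# Hypothetical numbers: a weak t statistic with n = 40.
print(1 / jzs_bf10(t=0.5, n=40))  # BF01 > 1: evidence *for* the null
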
>
>
> > On 16 Nov 2017, at 21:45, Horowitz, Todd (NIH/NCI) [E] <todd.horowitz at nih.gov> wrote:
> >
> > Oops!
> >
> > What I meant to say was that a p-value tells you how likely your data
> are given the null hypothesis; it doesn’t really say anything about the
> probability of the null hypothesis. So a SMALL p-value means that my data
> are unlikely given the null hypothesis, and a LARGE p-value means my data
> are likely given the null hypothesis... but they could be even more
> compatible with some other hypothesis!
> >
> > thanks
> > Todd
> >
> > From: Horowitz, Todd (NIH/NCI) [E] [mailto:todd.horowitz at nih.gov]
> > Sent: Thursday, November 16, 2017 2:00 PM
> > To: Pam Pallett <ppallett at gmail.com>; visionlist at visionscience.com
> > Subject: Re: [visionlist] Statistical Criticism is Easy; I Need to
> Remember That Real People are Involved
> >
> > I thought that a large p-value simply meant that my data were unlikely
> given the null hypothesis, a statement which yields no evidence about
> either the null or the alternative hypothesis.
> >
> > From: Pam Pallett [mailto:ppallett at gmail.com]
> > Sent: Thursday, November 16, 2017 10:29 AM
> > To: visionlist at visionscience.com
> > Subject: [visionlist] Statistical Criticism is Easy; I Need to Remember
> That Real People are Involved
> >
> > Hi All,
> >
> > I came across a blog today by Frank Harrell, Professor of Biostatistics
> and Founding Chair at Vanderbilt. His most recent post is the title of
> this email. As I read through his blog, I hear a lot that has been
> discussed and experienced by professors and postdocs subscribed to this
> list. We are often very separated from our neighboring departments, and I
> actually found some comfort in the fact that these problems seem to be
> spread across the board (misery loves company), even if we have been
> echoing them for over a decade with little effective change.
> >
> > In his most recent post he says, "There are several ways to improve the
> system that I believe would foster clinical research and make peer review
> more objective and productive." I'm curious about what the people in the
> vision community think of these suggestions and whether they are realistic
> to implement in our field. His list is at the bottom of the entry.
> http://www.fharrell.com/2017/11/
> >
> > For those experiencing TL;DR, here is the shortlist:
> > - Have journals conduct reviews of background and methods without
> > knowledge of results.
> > - Abandon journals and use researcher-led online systems that invite
> > open post-"publication" peer review and give researchers the opportunity
> > to improve their "paper" in an ongoing fashion.
> > - If not publishing the entire paper online, deposit the background and
> > methods sections for open pre-journal-submission review.
> > - Abandon null hypothesis testing and p-values. Until then, always keep
> > in mind that a large p-value means nothing more than "we don't yet have
> > evidence against the null hypothesis", and emphasize confidence limits.
> > - Embrace Bayesian methods that provide safer and more actionable
> > evidence, including measures that quantify clinical significance. And if
> > one is trying to amass evidence that the effects of two treatments are
> > similar, compute the direct probability of similarity using a Bayesian
> > model (see the sketch after this list).
> > - Improve the statistical education of researchers, referees, and
> > journal editors, and strengthen statistical review for journals.
> > - Until everyone understands the most important statistical concepts,
> > better educate researchers and peer reviewers on the statistical
> > problems to avoid.
> >
> > Best,
> > Pam Pallett
>
>
>
--
Lester Loschky
Professor
Associate Director, Cognitive and Neurobiological Approaches to Plasticity
Center
Department of Psychological Sciences
471 Bluemont Hall
1114 Mid-Campus Dr North
Kansas State University
Manhattan, KS 66506-5302
Phone: 785-532-6882
E-mail: loschky at ksu.edu
research page: http://www.k-state.edu/psych/research/loschkylester.html
Lab page: http://www.k-state.edu/psych/vcl/