[visionlist] visionlist Digest, Vol 6, Issue 93
davecarmel at nyu.edu
Thu Jul 22 15:43:47 GMT 2010
I'm afraid I have to agree with Diana that there is a problem with point 2 -
but it goes beyond the false alarm terms in the d' calculations canceling
out. At an arithmetic level, using an identical term (the false alarm rate) in
two calculations, each of which only contains one other term (the hit rate
in each), means that the identical term isn't adding anything - so you could
equally well just compare the hit rates, as Todd suggested.
But when calculating d' there is the additional step of converting the rates
to the probabilities associated with them in the normal distribution - and
due to the shape of the distribution, equal densities within the
distribution translate into different distances on the x axis.
For example, say you have an FA rate of 20%. Consider what happens if you
calculate d' for two different scenarios: one where the hit rates are 65%
and 75%, and the other where they are 85% and 95%. So there's a 10% hit-rate
difference in each, but in the first case the d' scores are 1.23 and 1.52 -
a difference of 0.29; in the second case, the d' scores are 1.88 and 2.49 -
a difference of 0.61.
So identical hit rate differences can lead to massively different d'
differences, if the FA rate is the same in each calculation. If the FA rates
are drawn from appropriate experimental conditions - i.e., each comes from
the same condition as the corresponding hit rate - and they just happen to
be the same, then that's fine and actually reflects something. But if the FA
is the same because it is simply the same number used twice, then the effect
on the d' score is artifactual.
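(For anyone who wants to reproduce the numbers above, here is a minimal sketch in Python; it uses the standard library's inverse-normal transform, and the rates are the ones from the example.)

```python
# Worked example from above: same 10% hit-rate difference, same 20% FA rate,
# but very different d' differences due to the shape of the normal distribution.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF


def dprime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(FA rate), equal-variance Gaussian model."""
    return z(hit_rate) - z(fa_rate)


fa = 0.20
low = dprime(0.75, fa) - dprime(0.65, fa)   # d' difference near the middle
high = dprime(0.95, fa) - dprime(0.85, fa)  # d' difference near the ceiling
print(round(low, 2), round(high, 2))        # 0.29 vs 0.61
```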
The solution, as one of the previous responders here suggested, is to have
the two types of trial - weak and strong TMS - in separate blocks, mixed
with different items of the comparison type (new pictures) in each. This way
you get separate hit rates AND separate false alarm rates. We recently did
this in a paper where the logical structure of the conditions was very
similar to yours (though the actual content was different - we wanted to see
whether the emotional content of words was easier to detect for negative or
for positive words, each compared with neutral words - see Nasrallah, Carmel
& Lavie, Emotion 2009).
> Message: 5
> Date: Thu, 22 Jul 2010 01:06:30 +0100
> From: "Kornbrot, Diana" <d.e.kornbrot at herts.ac.uk>
> Subject: Re: [visionlist] signal detection query
> To: Joseph Brooks <joseph.brooks at ucl.ac.uk>
> Cc: "visionlist at visionscience.com" <visionlist at visionscience.com>
> Message-ID: <C86D4A16.6B8BA%d.e.kornbrot at herts.ac.uk>
> Content-Type: text/plain; charset="iso-8859-1"
> Hi Joseph,
> As Todd & others have clearly described, there is no difficulty with problem 1.
> As for problem 2, unfortunately the outside TMS fa rate CANNOT provide any
> information whatsoever as to whether differences in hit rate under strong
> and weak TMS are due to differences in d' or differences in bias.
> Simple algebra shows why:
> d'(strong TMS) = z(hit, strong TMS) - z(fa, no TMS)
> d'(weak TMS) = z(hit, weak TMS) - z(fa, no TMS)
> d'(strong TMS) - d'(weak TMS) = z(hit, strong TMS) - z(hit, weak TMS),
> so the contribution from the fa rate is eliminated.
> If z(hit, strong TMS) IS different from z(hit, weak TMS) this could be due
> to better discriminability, or to people in the strong TMS state being more
> [or less] biased towards 'yes'. The fa rate outside TMS cannot provide a
> solution. It is necessary to have separate FA rates under the two TMS
> stimulation conditions in order to draw any conclusion about d'
> Don't shoot the messenger
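(Diana's cancellation is easy to verify numerically. A quick sketch - the hit and FA rates below are arbitrary illustrations, not data from the study:)

```python
# With a shared FA rate, the d' difference depends only on the two hit rates:
# the z(fa) term subtracts out identically, whatever value fa takes.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

hit_strong, hit_weak = 0.85, 0.65
diffs = []
for shared_fa in (0.05, 0.20, 0.40):
    d_strong = z(hit_strong) - z(shared_fa)
    d_weak = z(hit_weak) - z(shared_fa)
    diffs.append(round(d_strong - d_weak, 6))

# Same value every time: the shared FA term has cancelled.
print(diffs)
```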
> On 21/07/2010 16:13, "Todd S. Horowitz" <toddh at search.bwh.harvard.edu> wrote:
> Daniel, Joseph
> I think we're all agreed now on point (1) :)
> As to point (2), I don't think Daniel's objection is a problem for Joseph's
> study, since the point is not to compare old stimuli+TMS to new stimuli
> without TMS, but to compare old TMS and old non-TMS stimuli; the new stimuli
> are there simply to measure the false alarm rate.
> Similarly, I think this dispenses with Daniel's other objection. It's true
> that the Gaussian equal-variance assumptions probably do not apply, so that
> d' is not independent of criterion. However, since all of the stimuli are
> being tested in the same block of trials, criterion should be constant, so
> the d's will be comparable.
> However, this makes me wonder why bother to compute SDT measures at all.
> Since the false alarm rate should be constant for both classes of stimuli,
> why not just compare % correct?
> On Jul 21, 2010, at 4:26 AM, Daniel Oberfeld wrote:
> Hi Joseph,
> Re (1) : If you use the correct formula for calculating d', then it will
> automatically correct for unequal numbers of old and new pictures.
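(Daniel's point about unequal trial numbers can be seen in a short sketch; the counts below are hypothetical:)

```python
# d' is computed from hit and FA *rates*, so unequal numbers of old and new
# pictures need no extra correction - each rate is normalised by its own N.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

hits, n_old = 130, 200  # hypothetical: 200 old pictures, 130 "old" responses
fas, n_new = 10, 50     # hypothetical: only 50 new pictures, 10 false alarms
d = z(hits / n_old) - z(fas / n_new)
print(round(d, 2))
```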
> Re (2): I think this is no problem for calculating the SDT statistics, but
> rather for the interpretation of your results - does it make sense to
> compare responses to old stimuli+TMS and responses to new stimuli without TMS?
> There is one very serious issue with calculating d' for your data, however.
> If you collected binary responses ("Is the picture old or new?"), then to
> calculate d' you have to assume that the internal distributions for "signal"
> and "noise" have identical standard deviations (cf. Macmillan & Creelman,
> 2005). It has long been known that this assumption is frequently incorrect
> for experimental data (e.g., Swets, 1986), and when it fails, d' is not a
> valid measure of sensitivity because it is strongly influenced by response
> bias (Verde, Macmillan, & Rotello, 2006).
> The simple solution (at least for future experiments) is to obtain rating
> responses rather than binary responses - with these responses, you can
> calculate, for example, the area under the ROC curve, which is a valid index
> of sensitivity even if the SDs of the internal distributions are unequal
> (Swets, 1986). Again, Macmillan & Creelman (2005) explain in detail how to
> conduct such an experiment.
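(As a sketch of the rating-based approach Daniel describes: sweep a criterion across the rating scale, collect cumulative hit/FA points, and take the trapezoidal area under them. The six-point ratings below are made up for illustration.)

```python
# Area under the ROC from confidence ratings (1 = sure new ... 6 = sure old).
# No equal-variance assumption is needed for this index of sensitivity.
def roc_auc(old_ratings, new_ratings, levels=range(1, 7)):
    n_old, n_new = len(old_ratings), len(new_ratings)
    # One (FA, hit) point per criterion "respond old if rating >= c",
    # from the strictest criterion (0, 0) down to the laxest (1, 1).
    points = [(0.0, 0.0)]
    for c in sorted(levels, reverse=True):
        hits = sum(r >= c for r in old_ratings) / n_old
        fas = sum(r >= c for r in new_ratings) / n_new
        points.append((fas, hits))
    # Trapezoidal area under the resulting ROC curve.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area


old = [6, 5, 5, 4, 6, 3, 5, 4, 6, 2]  # made-up ratings for old pictures
new = [1, 2, 3, 2, 1, 4, 2, 3, 1, 2]  # made-up ratings for new pictures
print(round(roc_auc(old, new), 3))
```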
> Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's
> guide (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
> Swets, J. A. (1986). Indices of discrimination or diagnostic accuracy:
> Their ROCs and implied models. Psychological Bulletin, 99(1), 100-117.
> Verde, M. F., Macmillan, N. A., & Rotello, C. M. (2006). Measures of
> sensitivity based on a single hit rate and false alarm rate: The accuracy,
> precision, and robustness of d', Az, and A'. Perception & Psychophysics,
> 68(4), 643-654.
> Professor Diana Kornbrot
> email: d.e.kornbrot at herts.ac.uk
> web: http://web.mac.com/kornbrot/iweb/KornbrotHome.html
> School of Psychology
> University of Hertfordshire
> College Lane, Hatfield, Hertfordshire AL10 9AB, UK
> voice: +44 (0) 170 728 4626
> mobile: +44 (0) 796 890 2102
> fax +44 (0) 170 728 5073
> 19 Elmhurst Avenue
> London N2 0LT, UK
> landline: +44 (0) 208 883 3657
> mobile: +44 (0) 796 890 2102
> fax: +44 (0) 870 706 4997
Dr David Carmel
Department of Psychology &
Center for Neural Science
New York University
6 Washington Place, 8th floor
New York, NY 10003
tel (212) 998-8233
email davecarmel at nyu.edu