<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style>
div.markdown { white-space: normal; }
div.plaintext { white-space: normal; }
body { font-family: sans-serif; }
h1 { font-size: 1.4em; }
h2 { font-size: 1.2em; }
h3 { font-size: 1.1em; }
blockquote { margin: 0 0 5px; padding-left: 5px; border-left: 2px solid #777777; color: #777777; }
blockquote blockquote { border-left-color: #999999; color: #999999; }
blockquote blockquote blockquote { border-left-color: #BBBBBB; color: #BBBBBB; }
a { color: #3983C4; }
blockquote a { color: #777777; }
blockquote blockquote a { color: #999999; }
blockquote blockquote blockquote a { color: #BBBBBB; }
math[display="inline"] > mrow { padding:5px; }
div.footnotes li p { margin: 0.2em 0; }
</style>
</head>
<body>
<div class="markdown">
<p dir="auto">Dear Dr. Bach,</p>
<p dir="auto">Since some of my colleagues (not in psychology, but rather in the humanities) frequently express the suspicion that younger colleagues use AI to arrive at outlandish views, I took the liberty of submitting your question/statement to ChatGPT, just out of curiosity. Below you can read what came out of it. Apparently, ChatGPT, or AI in general, is not responsible for the change in the understanding of nAFC tasks you describe :) (not that you claimed that...)</p>
<p dir="auto">Best wishes from Giessen<br>
Mathias Hegele</p>
<p dir="auto">PS: It is particularly funny that it sort of offers to co-author a paper on this issue ;)</p>
<p dir="auto">ChatGPT’s response:</p>
<p dir="auto">You’ve raised an important and nuanced concern about the understanding and application of forced-choice paradigms in psychophysics. Let me break down and address the key points in your query:</p>
<ol>
<li value=1>The Historical Understanding of nAFC and "Equivalent" Alternatives</li>
</ol>
<p dir="auto">You are correct that traditionally the forced-choice paradigm, especially n-alternative forced choice (nAFC), was designed to mitigate bias or criterion effects by presenting equivalent alternatives. Equivalence ensures that the observer is not predisposed to favor one alternative because the alternatives differ inherently in salience or likelihood. For instance:</p>
<ul>
<li>In a 2AFC task (e.g., left vs. right, or first vs. second interval), the alternatives are typically symmetric and balanced.</li>
<li>In more complex tasks (e.g., a Landolt C gap in one of multiple orientations), equivalence can be harder to achieve because of perceptual effects such as the oblique effect.</li>
</ul>
<p dir="auto">Historically, equivalence was indeed seen as a critical feature of nAFC paradigms, because it justified the assumption that bias would not skew the results. This allowed researchers to infer sensitivity directly from performance without needing to disentangle sensitivity and bias.</p>
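<p dir="auto">That direct inference has a standard form: under the equal-variance Gaussian model of signal detection theory, unbiased 2AFC proportion correct maps to sensitivity via d′ = √2 · z(Pc). A minimal sketch in Python (the function name is mine; only the standard library is used):</p>

```python
from math import sqrt
from statistics import NormalDist

def dprime_from_2afc(pc):
    """Convert proportion correct in an unbiased 2AFC task to d'.

    Uses the standard relation d' = sqrt(2) * z(Pc), which holds only
    when the two alternatives are equivalent and the observer is
    unbiased -- exactly the conditions discussed above.
    """
    return sqrt(2) * NormalDist().inv_cdf(pc)

# About 76% correct in unbiased 2AFC corresponds to d' of roughly 1
print(f"{dprime_from_2afc(0.76):.2f}")
```

<p dir="auto">Note that the conversion is only meaningful when the equivalence assumption holds; with non-equivalent alternatives, percent correct confounds sensitivity and bias and no such one-line mapping exists.</p>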
<ol>
<li value=2>Shift in Understanding or Application</li>
</ol>
<p dir="auto">Your observation that younger scientists might not understand nAFC paradigms in this way reflects a shift either in how the paradigm is taught or in how its principles are applied. It is possible that the emphasis on equivalence has waned, leading to looser interpretations of what constitutes a "forced choice."</p>
<p dir="auto">Tasks such as "seen/not seen" or "go/no-go" are, as you noted, fundamentally different because the alternatives are not equivalent:</p>
<ul>
<li>"Seen/not seen" inherently invites bias because observers differ in their subjective criteria for detection.</li>
<li>"Go/no-go" relies heavily on the observer's decision threshold and is particularly sensitive to criterion effects.</li>
</ul>
<p dir="auto">These designs require explicit measures such as d′ (the sensitivity index) or other signal detection metrics to separate sensitivity from bias, a step that is not consistently appreciated in modern interpretations.</p>
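<p dir="auto">For a yes/no ("seen/not seen") task, the standard equal-variance Gaussian computation is d′ = z(H) − z(F) and criterion c = −(z(H) + z(F))/2, where H and F are the hit and false-alarm rates. A minimal sketch (the function name is mine; the common log-linear or 1/(2N) correction for rates of exactly 0 or 1 is omitted for brevity):</p>

```python
from statistics import NormalDist

def sdt_yes_no(hits, misses, false_alarms, correct_rejections):
    """d' and criterion c for a yes/no task (equal-variance Gaussian SDT).

    d' = z(H) - z(F) isolates sensitivity; c indicates how conservative
    (c > 0) or liberal (c < 0) the observer's criterion is.
    Assumes neither the hit nor the false-alarm rate is exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# 80 hits / 20 misses on signal trials; 30 false alarms / 70 correct
# rejections on noise trials (illustrative numbers only)
d, c = sdt_yes_no(80, 20, 30, 70)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

<p dir="auto">The point is precisely the one above: in a yes/no design the raw "percent seen" is uninterpretable on its own, and sensitivity has to be computed from the hit and false-alarm rates jointly.</p>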
<ol>
<li value=3>Scientific Folklore and the Myth of Bias Elimination</li>
</ol>
<p dir="auto">The "scientific folklore" that forced-choice paradigms eliminate bias is a misinterpretation that can lead to flawed study designs and incorrect conclusions:</p>
<ul>
<li>When alternatives are equivalent (e.g., standard nAFC): bias effects are minimized because the task structure does not favor one response over another.</li>
<li>When alternatives are not equivalent (e.g., seen/not seen): bias persists and must be accounted for explicitly, regardless of the "forced choice" label.</li>
</ul>
<p dir="auto">The assumption that merely adding a "not seen" button, or any other unequal option, creates a forced-choice design is problematic. Such a task may technically involve a forced choice, but it fails to eliminate bias in the way that equivalent-alternative designs do.</p>
<ol>
<li value=4>The Real Problem: Perceptual Learning and Longitudinal Studies</li>
</ol>
<p dir="auto">Your concern about perceptual learning and bias in longitudinal studies is particularly salient. Perceptual learning often changes both sensitivity and decision criteria, which can confound results in poorly designed forced-choice tasks. If bias effects are not properly addressed, changes in performance may be mistakenly attributed to improved sensitivity when they actually reflect shifts in criterion. To avoid this pitfall:</p>
<ul>
<li>Ensure equivalence of alternatives in nAFC tasks whenever possible.</li>
<li>If equivalence is impractical, incorporate signal detection measures (e.g., d′ and criterion) or other analytic tools to separate sensitivity from bias.</li>
<li>Use complementary tasks or control conditions to validate that observed effects reflect changes in perceptual sensitivity rather than shifts in decision bias.</li>
</ul>
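<p dir="auto">The confound is easy to make concrete by simulation: under the equal-variance Gaussian model, the predicted rates are H = Φ(d′/2 − c) and F = Φ(−d′/2 − c), so a pure criterion shift changes the proportion of "seen" responses while d′ stays fixed. A sketch with illustrative numbers of my choosing:</p>

```python
from statistics import NormalDist

Phi = NormalDist().cdf      # standard normal CDF
z = NormalDist().inv_cdf    # its inverse

def yes_no_rates(d_prime, c):
    """Predicted hit/false-alarm rates (equal-variance Gaussian SDT)."""
    return Phi(d_prime / 2 - c), Phi(-d_prime / 2 - c)

# Sensitivity fixed at d' = 1.0; only the criterion relaxes across
# hypothetical "sessions", as it might through perceptual learning.
for c in (0.5, 0.0, -0.5):
    hit, fa = yes_no_rates(1.0, c)
    recovered = z(hit) - z(fa)  # d' recovered from the two rates
    print(f"c={c:+.1f}  hit={hit:.2f}  FA={fa:.2f}  d'={recovered:.2f}")
```

<p dir="auto">The hit rate climbs from .50 to .84 across "sessions" with no change in sensitivity at all; only an analysis that uses both hit and false-alarm rates recovers the constant d′.</p>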
<ol>
<li value=5>Practical Recommendations</li>
</ol>
<ul>
<li>Teaching and Communication: It’s important to emphasize in training and publications that the true strength of nAFC paradigms lies in their equivalence of alternatives, which minimizes bias.</li>
<li>Standardization of Terminology: Clarify the distinction between forced-choice paradigms with equivalent alternatives (true nAFC) and other designs (e.g., yes/no tasks with a “forced-choice” label) to prevent misuse or misunderstanding.</li>
<li>Experimental Design: For studies where equivalence is challenging, be explicit about how bias will be addressed analytically (e.g., using signal detection theory).</li>
<li>Awareness of Limitations: Encourage researchers to acknowledge the limitations of their forced-choice designs when equivalence is not achieved.</li>
</ul>
<ol>
<li value=6>Your Intuition is Right</li>
</ol>
<p dir="auto">You are absolutely correct that forced-choice paradigms do not inherently eliminate bias unless the alternatives are equivalent. Without this equivalence, the assumption that performance solely reflects sensitivity breaks down, requiring additional analytical steps.</p>
<p dir="auto">It may be valuable to publish or share these insights more widely, as they address a gap in understanding that has practical implications for the field. I’d be happy to assist in drafting a more formal exposition or any other next steps you envision!</p>
<p dir="auto">On 21 Jan 2025, at 15:17, Michael Bach wrote:</p>
</div>
<div class="plaintext"><blockquote><p dir="auto">Dear Colleagues:<br>
<br>
I have the impression that the general understanding of a psychophysical forced-choice paradigm has changed over the decades.<br>
<br>
To my understanding, bias/criterion effects can only be effectively mitigated in an nAFC (n-alternative forced choice) design where the alternatives are _equivalent_. Examples: the pattern is on the right/left (balanced, of course), the stimulus was in the first/second interval, or the gap of a Landolt C is in one of 8 directions. “Equivalence” may be challenging to achieve, particularly in the latter case, where the oblique effect may interfere. Alternatives such as “seen/not seen” or “go/no-go” are not equivalent and necessitate measures (d’, …) to address bias/criterion.<br>
<br>
For me, nAFC implicitly implied equivalent alternatives, but was this ever the case? Whatever, young scientists do not seem to understand it this way.<br>
<br>
While this might be just seen as a nomenclature trifle, the real problem arises due to the prevalent “scientific folklore” that a forced-choice paradigm eliminates the effects of bias / criterion change (which, in particular, occur through perceptual learning in longitudinal studies). However, if forced-choice does not employ _equivalent_ alternatives, this assertion is invalid.<br>
<br>
I have encountered opinions suggesting that the addition of a “not seen” button constitutes forced choice (yes…, but:) and also eliminates bias…<br>
<br>
<br>
Looking forward to your advice, best, Michael<br>
-- <br>
<a href="https://michaelbach.de">https://michaelbach.de</a><br>
<br>
<br>
_______________________________________________<br>
visionlist mailing list<br>
visionlist@visionscience.com<br>
<a href="http://visionscience.com/mailman/listinfo/visionlist_visionscience.com">http://visionscience.com/mailman/listinfo/visionlist_visionscience.com</a></p>
</blockquote></div>
<div class="markdown">
</div>
</body>
</html>