Friends,

Journal of Vision has published a Special Issue on Deep Neural Networks and Biological Vision. The issue remains open for further contributions.

Best,

Andrew B. Watson
Editor-in-Chief
Journal of Vision
http://journalofvision.org/

Deep Neural Networks and Biological Vision Special Issue

Feature Editors
Nikolaus Kriegeskorte, Columbia University
Denis Pelli, New York University
Cristina Savin, New York University
Felix Wichmann, University of Tübingen

Articles will be added as they are published.
Contrast sensitivity functions in autoencoders (Open Access)
Qiang Li, Alex Gomez-Villa, Marcelo Bertalmío, Jesús Malo
Journal of Vision May 2022, Vol. 22, 8. https://doi.org/10.1167/jov.22.6.8

DeepGaze III: Modeling free-viewing human scanpaths with deep learning (Open Access)
Matthias Kümmerer, Matthias Bethge, Thomas S. A. Wallis
Journal of Vision April 2022, Vol. 22, 7. https://doi.org/10.1167/jov.22.5.7

Deep neural models for color classification and color constancy (Open Access)
Alban Flachot, Arash Akbarinia, Heiko H. Schütt, Roland W. Fleming, Felix A. Wichmann, Karl R. Gegenfurtner
Journal of Vision March 2022, Vol. 22, 17. https://doi.org/10.1167/jov.22.4.17

Distinguishing mirror from glass: A “big data” approach to material perception (Open Access)
Hideki Tamura, Konrad Eugen Prokott, Roland W. Fleming
Journal of Vision March 2022, Vol. 22, 4. https://doi.org/10.1167/jov.22.4.4

End-to-end optimization of prosthetic vision (Open Access)
Jaap de Ruyter van Steveninck, Umut Güçlü, Richard van Wezel, Marcel van Gerven
Journal of Vision February 2022, Vol. 22, 20. https://doi.org/10.1167/jov.22.2.20

Inference via sparse coding in a hierarchical vision model (Open Access)
Joshua Bowren, Luis Sanchez-Giraldo, Odelia Schwartz
Journal of Vision February 2022, Vol. 22, 19. https://doi.org/10.1167/jov.22.2.19

From photos to sketches - how humans and deep neural networks process objects across different levels of visual abstraction (Open Access)
Johannes J. D. Singer, Katja Seeliger, Tim C. Kietzmann, Martin N. Hebart
Journal of Vision February 2022, Vol. 22, 4. https://doi.org/10.1167/jov.22.2.4

FP-nets as novel deep networks inspired by vision (Open Access)
Philipp Grüning, Thomas Martinetz, Erhardt Barth
Journal of Vision January 2022, Vol. 22, 8. https://doi.org/10.1167/jov.22.1.8

Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks (Open Access)
Seoyoung Ahn, Gregory J. Zelinsky, Gary Lupyan
Journal of Vision December 2021, Vol. 21, 13. https://doi.org/10.1167/jov.21.13.13

Recurrent processing improves occluded object recognition and gives rise to perceptual hysteresis (Open Access)
Markus R. Ernst, Thomas Burwick, Jochen Triesch
Journal of Vision December 2021, Vol. 21, 6. https://doi.org/10.1167/jov.21.13.6

Gloss perception: Searching for a deep neural network that behaves like humans (Open Access)
Konrad Eugen Prokott, Hideki Tamura, Roland W. Fleming
Journal of Vision November 2021, Vol. 21, 14. https://doi.org/10.1167/jov.21.12.14

Convolutional neural networks trained with a developmental sequence of blurry to clear images reveal core differences between face and object processing (Open Access)
Hojin Jang, Frank Tong
Journal of Vision October 2021, Vol. 21, 6. https://doi.org/10.1167/jov.21.12.6

ImageNet-trained deep neural networks exhibit illusion-like response to the Scintillating grid (Open Access)
Eric D. Sun, Ron Dekel
Journal of Vision October 2021, Vol. 21, 15. https://doi.org/10.1167/jov.21.11.15

Evaluating the progress of deep learning for visual relational concepts (Open Access)
Sebastian Stabinger, David Peer, Justus Piater, Antonio Rodríguez-Sánchez
Journal of Vision October 2021, Vol. 21, 8. https://doi.org/10.1167/jov.21.11.8

A comparative biology approach to DNN modeling of vision: A focus on differences, not similarities (Open Access)
Ben Lonnqvist, Alban Bornet, Adrien Doerig, Michael H. Herzog
Journal of Vision September 2021, Vol. 21, 17. https://doi.org/10.1167/jov.21.10.17

Training for object recognition with increasing spatial frequency: A comparison of deep learning with human vision (Open Access)
Lev Kiar Avberšek, Astrid Zeman, Hans Op de Beeck
Journal of Vision September 2021, Vol. 21, 14. https://doi.org/10.1167/jov.21.10.14

Which deep learning model can best explain object representations of within-category exemplars? (Open Access)
Dongha Lee
Journal of Vision September 2021, Vol. 21, 12. https://doi.org/10.1167/jov.21.10.12

Closing the gap between single-unit and neural population codes: Insights from deep learning in face recognition (Open Access)
Connor J. Parde, Y. Ivette Colón, Matthew Q. Hill, Carlos D. Castillo, Prithviraj Dhar, Alice J. O’Toole
Journal of Vision August 2021, Vol. 21, 15. https://doi.org/10.1167/jov.21.8.15

Biased orientation representations can be explained by experience with nonuniform training set statistics (Open Access)
Margaret Henderson, John T. Serences
Journal of Vision August 2021, Vol. 21, 10. https://doi.org/10.1167/jov.21.8.10

Binocular vision supports the development of scene segmentation capabilities: Evidence from a deep learning model (Open Access)
Ross Goutcher, Christian Barrington, Paul B. Hibbard, Bruce Graham
Journal of Vision July 2021, Vol. 21, 13. https://doi.org/10.1167/jov.21.7.13

A self-supervised deep neural network for image completion resembles early visual cortex fMRI activity patterns for occluded scenes (Open Access)
Michele Svanera, Andrew T. Morgan, Lucy S. Petro, Lars Muckli
Journal of Vision July 2021, Vol. 21, 5. https://doi.org/10.1167/jov.21.7.5

A deep-learning framework for human perception of abstract art composition (Open Access)
Pierre Lelièvre, Peter Neri
Journal of Vision May 2021, Vol. 21, 9. https://doi.org/10.1167/jov.21.5.9

Facial expression is retained in deep networks trained for face identification (Open Access)
Y. Ivette Colón, Carlos D. Castillo, Alice J. O’Toole
Journal of Vision April 2021, Vol. 21, 4. https://doi.org/10.1167/jov.21.4.4

Five points to check when comparing visual perception in humans and machines (Open Access)
Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge
Journal of Vision March 2021, Vol. 21, 16. https://doi.org/10.1167/jov.21.3.16

Exploring and explaining properties of motion processing in biological brains using a neural network (Open Access)
Reuben Rideaux, Andrew E. Welchman
Journal of Vision February 2021, Vol. 21, 11. https://doi.org/10.1167/jov.21.2.11

The human visual system and CNNs can both support robust online translation tolerance following extreme displacements (Open Access)
Ryan Blything, Valerio Biscione, Ivan I. Vankov, Casimir J. H. Ludwig, Jeffrey S. Bowers
Journal of Vision February 2021, Vol. 21, 9. https://doi.org/10.1167/jov.21.2.9

Selectivity and robustness of sparse coding networks (Open Access)
Dylan M. Paiton, Charles G. Frye, Sheng Y. Lundquist, Joel D. Bowen, Ryan Zarcone, Bruno A. Olshausen
Journal of Vision November 2020, Vol. 20, 10. https://doi.org/10.1167/jov.20.12.10

A dual foveal-peripheral visual processing model implements efficient saccade selection (Open Access)
Emmanuel Daucé, Pierre Albiges, Laurent U. Perrinet
Journal of Vision August 2020, Vol. 20, 22. https://doi.org/10.1167/jov.20.8.22

Deep learning—Using machine learning to study biological vision (Open Access)
Najib J. Majaj, Denis G. Pelli
Journal of Vision December 2018, Vol. 18, 2. https://doi.org/10.1167/18.13.2