<div dir="ltr"><div class="gmail_default" style=""><font face="trebuchet ms, sans-serif"><br clear="all"></font></div><div><div class="gmail_default" style=""><font face="trebuchet ms, sans-serif"><span id="gmail-docs-internal-guid-da01800b-7fff-8480-8015-4c51b2d61b92" style=""><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:0pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The British Machine Vision Conference (BMVC) is one of the major international conferences on computer vision and related areas. It is organised by the British Machine Vision Association (BMVA). The 33rd BMVC will now be a </span><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">hybrid event</span><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> from 21st—24th November 2022. Our local in person meeting will be held at The Kia Oval (Home of Surrey County Cricket Club, </span><a href="https://events.kiaoval.com/" style="text-decoration-line:none"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://events.kiaoval.com/</span></a><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">). </span></p><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Authors are invited to submit full-length high-quality papers in image processing, computer vision, machine learning and related areas for BMVC 2022. Submitted papers will be refereed on their originality, presentation, empirical results, and quality of evaluation. Accepted papers will be included in the conference proceedings published and DOI-indexed by BMVA. Past proceedings can be found online:</span><a href="https://britishmachinevisionassociation.github.io/bmvc" style="text-decoration-line:none"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">here</span></a><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">.</span></p><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Please note that BMVC is a single-track meeting with oral and poster presentations. 
The abstract submission deadline is Friday 22nd July 2022 and the paper submission deadline is Friday 29th July 2022 (both 23:59, GMT). Submission instructions are available on the </span><a href="https://bmvc2022.org/" style="text-decoration-line:none"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">BMVC 2022 website</span></a><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">. Submitted papers should not exceed 9 pages (references are excluded, but appendices are included). </span></p><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Topics include, but are not limited to:</span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">2D object recognition</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">3D computer vision </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">3D object recognition</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Action and behavior recognition </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span 
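For reference, the sketch below shows a minimal submission skeleton, assuming the bmvc2k LaTeX class distributed with previous BMVC author kits; the class name, the author/institution commands and the bibliography file are illustrative assumptions, so please follow the exact template in the author kit linked from the BMVC 2022 website.

```latex
% Hypothetical minimal BMVC submission skeleton.
% Assumes the bmvc2k class from past BMVC author kits; the kit on the
% BMVC 2022 website is authoritative and may differ.
\documentclass{bmvc2k}

\title{A Hypothetical BMVC 2022 Submission}

% Author/institution commands as in past bmvc2k templates; submissions
% are anonymised for review, so placeholders are used here.
\addauthor{Anonymous Author}{anonymous@example.org}{1}
\addinstitution{Anonymous Institution}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph summary of the contribution.
\end{abstract}

\section{Introduction}
Main text, figures and tables: at most 9 pages. References do not
count towards the limit, but appendices do.

\bibliography{refs}  % BibTeX entries in refs.bib
\end{document}
```

Compiled in the usual pdflatex + BibTeX manner, this should give a BMVA-style paper; the key constraint to keep in mind is the 9-page limit on everything except the references.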
style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Adversarial learning, adversarial attack and defense methods</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Biometrics, face, gesture, body pose</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Computational photography</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Datasets and evaluation </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Efficient training and inference methods for networks </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Explainable AI, fairness, accountability, privacy, transparency and ethics in vision </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Image and video retrieval </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" 
style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Image and video synthesis</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Image classification</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Low-level and physics-based vision </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Machine learning architectures and formulations </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Medical, biological and cell microscopy </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Motion and tracking </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Optimization and learning methods </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" 
style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Pose estimation</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Representation learning</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Scene analysis and understanding </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Transfer, low-shot, semi- and un- supervised learning </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Video analysis and understanding </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Vision + language, vision + other modalities </span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Vision applications and systems, vision for robotics and autonomous vehicles</span></p></li><li dir="ltr" 
style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:12pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">“Brave new ideas”</span></p></li></ul><br><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:0pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Papers submitted under the “Brave new ideas” subject area are expected to move away from incremental benchmark gains.</span><span style="font-size:10.5pt;color:rgb(60,64,67);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Proposed ideas should be radically different from the current strand of research or propose a novel problem.</span></p><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Reviewing process BMVC 2022 </span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Each paper will be reviewed by at least three reviewers. The primary AC will also provide a meta review, summarising the points that need to be addressed during the rebuttal phase.</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The authors will have a period to produce a rebuttal to address the reviewers concerns. 
Due to the tight schedule, there will be no revision of the papers before the final camera ready submission.</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The rebuttal will be handled by two ACs, a primary and a secondary, who will facilitate paper discussion and jointly make the recommendations. Conflicts will be jointly managed by the ACs and Program Chairs that will make the final decisions.</span></p></li></ul><p dir="ltr" style="line-height:1.38;text-align:justify;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Please Note</span><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">: Due the anticipated volume of papers for BMVC 2022 (based on recent year’s experience) there will be NO extension granted to the submission deadline. In keeping with conferences in the field (e.g.</span><a href="https://medium.com/@NeurIPSConf/getting-started-with-neurips-2020-e350f9b39c28" style="text-decoration-line:none"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">NeurIPS</span></a><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">, </span><a href="http://cvpr2021.thecvf.com/node/33#policies" style="text-decoration-line:none"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">CVPR</span></a><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">) and to cope with the increasing number of submissions, we ask that all authors be prepared to review papers and make use of a compulsory abstract submission deadline a week before the paper submission deadline. 
The CMT submission site will ask authors to acknowledge this commitment and failure to engage with the reviewing process might be grounds for rejection.</span></p><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Any queries to the Programme Chairs should be sent to <a href="mailto:pcs@bmvc2022.org">pcs@bmvc2022.org</a>.</span></p></span>BMVC Organizing team</font></div><div class="gmail_default" style=""><font face="trebuchet ms, sans-serif"><a href="https://bmvc2022.org/people/organisers/">https://bmvc2022.org/people/organisers/</a></font></div></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><font face="garamond, serif" size="2">Bei Xiao, PhD<br>Associate Professor </font><div><font face="garamond, serif" size="2">Computer Science & Center for Behavioral Neuroscience</font></div><div><font face="garamond, serif" size="2">American University, Washington DC</font></div><div><font face="garamond, serif" size="2"><br></font></div><div><font><font face="garamond, serif" size="2">Homepage: <a href="https://sites.google.com/site/beixiao/" style="color:rgb(17,85,204)" target="_blank">https://sites.google.com/site/beixiao/</a></font><br></font></div></div><div><br></div><div><br></div><div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>