## CALL FOR PAPERS (CfP) ##
ICCV International Workshop on
'Mutual Benefits of Cognitive and Computer Vision (MBCC)'
29 October 2017
Venice, Italy
(https://sites.google.com/site/mbcc2017w/home)

_____________
Aim and Scope
As researchers working at the intersection of biological and machine vision, we have noticed a growing interest in both communities in understanding and building on each other's insights. Recent advances in machine learning, especially deep learning, have revolutionized computer vision: deep networks now rival humans at some narrowly defined tasks such as object recognition (e.g., the ImageNet Large Scale Visual Recognition Challenge). In spite of these advances, the existence of adversarial images (some with perturbations imperceptible to humans) and rather poor generalization across datasets expose the flaws in these networks. The human visual system, by contrast, remains robust and highly efficient across a wide range of real-world visual tasks. We believe the time is ripe for extended discussion and interaction between researchers from both fields in order to steer future research in more fruitful directions. This workshop will compare human vision to state-of-the-art machine perception methods, with specific emphasis on deep learning models and architectures.
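For readers less familiar with the adversarial-image phenomenon mentioned above, the sketch below (our illustration, not workshop material) shows the fast gradient sign method of Goodfellow et al. (2015); `model`, `images`, and `labels` are hypothetical placeholders for a trained PyTorch classifier and a normalized input batch:

    # Fast gradient sign method (FGSM) sketch, after Goodfellow et al. (2015).
    # `model`, `images` (values in [0, 1]), and `labels` are hypothetical inputs.
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.01):
        """Return adversarially perturbed copies of `images`."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # A small step in the sign of the gradient raises the loss, yet for
        # small epsilon the change is often imperceptible to human observers.
        return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()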
Our workshop will address several important questions, including: (1) What are the representational differences between human and machine perception? (2) What makes human vision so effective? (3) What can we learn from human vision research? Addressing these questions is not as difficult as previously thought, thanks to technological advances in both computational science and neuroscience. We can now measure human behavior precisely and collect large amounts of neurophysiological data using EEG and fMRI. This places us in a unique position to compare state-of-the-art computer vision models with human behavioral and neural data, which was impossible only a few years ago. This advantage, however, comes with its own set of problems: Which tasks and metrics should be used for comparison? What are the representational similarities? How different are the computations in a biological visual system from those in an artificial vision system? How does human vision achieve invariance?
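One widely used way to make such model-brain comparisons concrete is representational similarity analysis (Kriegeskorte et al., 2008). The sketch below is a minimal illustration under assumed inputs: two hypothetical (stimuli x units) response matrices, e.g., a network layer's activations and fMRI voxel patterns for the same stimuli:

    # Representational similarity analysis (RSA) sketch, after Kriegeskorte
    # et al. (2008). Both inputs are hypothetical (n_stimuli x n_units) arrays.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(responses):
        """Condensed representational dissimilarity matrix (1 - Pearson r)."""
        return pdist(responses, metric="correlation")

    def rsa_score(model_features, neural_responses):
        """Spearman correlation of the two RDMs; higher values mean the model's
        representational geometry is closer to the brain's."""
        rho, _ = spearmanr(rdm(model_features), rdm(neural_responses))
        return rho

    # Example with random data: scores near 0 indicate unrelated geometries.
    rng = np.random.default_rng(0)
    print(rsa_score(rng.standard_normal((20, 512)), rng.standard_normal((20, 100))))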
This workshop is a great opportunity for researchers working on human and/or machine perception to come together and discuss plausible solutions to some of the problems above.

______
Topics for submission include but are not limited to:
- architectures for processing visual information in the human brain and in computer vision (e.g., feedforward vs. feedback, shallow vs. deep networks, residual, recurrent)
- limitations of existing computer vision/deep learning systems compared to human vision
- learning rules employed in computer vision and by the brain (e.g., unsupervised/semi-supervised learning, the Hebb rule, spike-timing-dependent plasticity)
- representations/features in human and computer vision
- tasks/metrics to compare human and computer vision (e.g., eye fixations, reaction time, rapid categorization, visual search; see the sketch after this list)
- new benchmarks (e.g., datasets)
- generalizability of machine representations to other tasks
- new techniques to measure and analyze human psychophysics and neural signals
- the problem of invariant learning
- conducting large-scale behavioral and physiological experiments (e.g., fMRI, cell recording)
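Metrics of this kind can be quite simple in practice. As one assumed example (our illustration, with hypothetical inputs), the normalized scanpath saliency (NSS) scores a model's saliency map against human eye fixations:

    # Normalized scanpath saliency (NSS) sketch. `saliency` is a 2-D array and
    # `fixations` a boolean mask of fixated pixels; both are hypothetical inputs.
    import numpy as np

    def nss(saliency, fixations):
        """Mean z-scored saliency at fixated pixels; 0 is chance, higher is better."""
        z = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
        return float(z[fixations].mean())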
______________
Invited Speakers
We have invited leading researchers from both cognitive science and computer vision to inspire discussions and collaborations.
1. Michael Tarr, Carnegie Mellon University
2. TBD

___________________
Submission Guidelines
We invite both full paper (5-8 pages) and extended abstract (2-4 pages) submissions to the workshop. Submitted papers must follow the ICCV paper format and guidelines (available on the ICCV 2017 webpage). All submissions will be handled via the CMT website: https://cmt3.research.microsoft.com/MBCC2017/

Full papers: Submitted papers should have a maximum length of 8 pages, including figures and tables; additional pages must contain only cited references. The review will be double-blind; please make sure all author names and references to the authors are anonymized.
Full paper submissions must not have been previously published.

Extended abstracts: We invite extended abstracts of ongoing or already published work, as well as demos or prototype systems (ICCV format). Authors are given the opportunity to present their work to the right audience. The review will be single-blind.

______________
Important Dates
Full paper submission: August 1st, 2017
Extended abstract submission: August 5th, 2017
Notification of acceptance: August 15th, 2017
Camera-ready paper due: September 30th, 2017
Workshop: October 29th, 2017 (morning session)

_____________________________
Workshop Organizing Committee
Ali Borji, University of Central Florida
Pramod RT, Indian Institute of Science
Elissa Aminoff, Fordham University
Christopher Kanan, Rochester Institute of Technology
_______
Contact
mbccw.iccv2017@gmail.com