<div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div><div><div>Dear colleagues,</div><div><br></div><div>We are excited to announce the <b><span>CVPR</span> 2023 - 4th International <span>Workshop</span> <span>on</span> <span>Event</span>-<span>based</span> <span>Vision</span></b>. Help us spread the word about the <span>workshop</span>! We have a great panel of speakers and we are also accepting <span>contributions</span>.<b> Accepted papers will be published <span>on</span> IEEE.</b><br></div><div>
                  </div><div><p><b><font size="4"><b><span>Workshop</span> website</b>: <a href="https://tub-rip.github.io/eventvision2023/" target="_blank">https://tub-rip.github.io/eventvision2023/</a> </font><br></b></p><b>Timeline:</b><ul><li><b>Paper submission deadline: <b><span style="color:red">March 20, 2023</span></b>.  <b><a href="https://cmt3.research.microsoft.com/EVENTVISION2023" target="_blank">Submission website (CMT)</a></b></b></li><li>Demo abstract submission: March 20, 2023</li><li>Notification to authors: April 3, 2023</li><li>Camera-ready paper: April 8, 2023 (firm deadline by IEEE)</li><li><b><span>Workshop</span> day: <b>June 19, 2023. 2nd day of <span>CVPR</span></b>. Full day <span>workshop</span>.</b></li></ul></div><div><b>Objectives:</b><br></div></div></div><div><div><div><p>This <span>workshop</span> is dedicated to <span>event</span>-<span>based</span> cameras, smart cameras, and algorithms processing data from these sensors. <span>Event</span>-<span>based</span>
 cameras are bio-inspired sensors with the key advantages of microsecond
 temporal resolution, low latency, very high dynamic range, and low 
power consumption. Because of these advantages, <span>event</span>-<span>based</span> cameras open frontiers that are unthinkable with standard frame-<span>based</span> cameras (which have been the main sensing technology <span>for</span>
 the past 60 years). These revolutionary sensors enable the design of a 
new class of algorithms to track a baseball in the moonlight, build a 
flying robot with the agility of a bee, and perform structure from 
motion in challenging lighting conditions and at remarkable speeds. 
These sensors became commercially available in 2008 and are slowly being
 adopted in computer <span>vision</span> and robotics. In recent years they have received attention from large companies, e.g., the <span>event</span>-sensor company Prophesee collaborated with Intel and Bosch <span>on</span> a high spatial resolution sensor, Samsung announced mass production of a sensor to be used <span>on</span> hand-held devices, and they have been used in various applications <span>on</span> neuromorphic chips such as IBM’s TrueNorth and Intel’s Loihi. The <span>workshop</span> also considers novel <span>vision</span>
 sensors, such as pixel processor arrays (PPAs), which perform massively
 parallel processing near the image plane. Because early <span>vision</span> computations are carried out <span>on</span>-sensor, the resulting systems have high speed and low-power consumption, enabling new embedded <span>vision</span> applications in areas such as robotics, AR/VR, automotive, gaming, surveillance, etc. This <span>workshop</span>
 will cover the sensing hardware, as well as the processing and learning
 methods needed to take advantage of the above-mentioned novel cameras.</p>
<p><b>Call for Papers and Demos:</b><br>
Research papers and demos are solicited in, but not limited to, the following topics:</p>
<ul>
<li>Event-based / neuromorphic vision.</li>
<li>Algorithms: motion estimation, visual odometry, SLAM, 3D reconstruction, image intensity reconstruction, optical flow estimation, recognition, feature/object detection, visual tracking, calibration, sensor fusion (video synthesis, visual-inertial odometry, etc.).</li>
<li>Model-based, embedded, or learning-based approaches.</li>
<li>Event-based signal processing, representation, control, bandwidth control.</li>
<li>Event-based active vision, event-based sensorimotor integration.</li>
<li>Event camera datasets and/or simulators.</li>
<li>Applications in: robotics (navigation, manipulation, drones…), automotive, IoT, AR/VR, space science, inspection, surveillance, crowd counting, physics, biology.</li>
<li>Biologically-inspired vision and smart cameras.</li>
<li>Near-focal-plane processing, such as pixel processor arrays (PPAs).</li>
<li>Novel hardware (cameras, neuromorphic processors, etc.) and/or software platforms, such as fully event-based (end-to-end) systems.</li>
<li>New trends and challenges in event-based and/or biologically-inspired vision (SNNs, etc.).</li>
<li>Event-based vision for computational photography.</li>
<li>A longer list of related topics is available in the table of contents of the <a href="https://github.com/uzh-rpg/event-based_vision_resources" target="_blank">List of Event-based Vision Resources</a>.</li>
</ul>
                  
<p><b>Courtesy presentations:</b><br>
We also invite courtesy presentations of papers relevant to the workshop that have been accepted at the CVPR main conference or at other peer-reviewed conferences or journals. These presentations give visibility to your work and help build a community around the topics of the workshop. Such contributions will be checked for relevance to the workshop, but they will not undergo a full review and will not be published in the workshop proceedings. Please contact the organizers to make arrangements to showcase your work at the workshop.</p>
<p><b>Author guidelines:</b><br>
Research papers and demos are solicited in, but not limited to, the topics listed above. Paper submissions must adhere to the CVPR 2023 paper submission style, format, and length restrictions. See the <a href="https://cvpr2023.thecvf.com/Conferences/2023/AuthorGuidelines" target="_blank">author guidelines</a> and <a href="https://media.icml.cc/Conferences/CVPR2023/cvpr2023-author_kit-v1_1-1.zip" target="_blank">template</a> provided by the CVPR 2023 main conference. See also the policy on <a href="https://iccv2023.thecvf.com/policies-361500-2-20-15.php" target="_blank">Dual/Double Submissions of concurrently-reviewed conferences, such as ICCV</a>; authors may want to limit their submission to four pages (excluding references) if this policy applies to them. For demo abstract submissions, authors are encouraged to submit an abstract of up to two pages.</p>

<div style="text-align:justify">A double blind peer-review process of the submissions received is carried out via CMT.
  Accepted papers will be published open access through the Computer <span>Vision</span> Foundation (CVF) (see <a href="https://openaccess.thecvf.com/CVPR2019_workshops/CVPR2019_EventVision" target="_blank">examples from <span>CVPR</span> <span>Workshop</span> 2019</a> <a href="https://openaccess.thecvf.com/CVPR2021_workshops/EventVision" target="_blank">and 2021</a>).
  <span>For</span> the accepted papers we encourage authors to write a paragraph about ethical considerations and impact of their work.</div><div>
<p><b>Organizers:</b></p>
<ul>
<li><a href="http://www.guillermogallego.es" target="_blank">Guillermo Gallego</a>, TU Berlin, ECDF, SCIoI (Germany)</li>
<li><a href="http://rpg.ifi.uzh.ch/people_scaramuzza.html" target="_blank">Davide Scaramuzza</a>, University of Zurich (Switzerland)</li>
<li><a href="https://www.cis.upenn.edu/~kostas" target="_blank">Kostas Daniilidis</a>, University of Pennsylvania (USA)</li>
<li><a href="http://users.umiacs.umd.edu/~fer" target="_blank">Cornelia Fermüller</a>, University of Maryland (USA)</li>
<li><a href="https://www.linkedin.com/in/davidemigliore" target="_blank">Davide Migliore</a>, <a href="https://www.prophesee.ai/" target="_blank">Prophesee</a> (France)</li>
</ul>
<p>Looking forward to meeting you at the workshop!</p>
<p>Best regards,<br>
The EventVision workshop organizers</p>
<p>--<br>
Prof. Dr. Guillermo Gallego<br>
Robotic Interactive Perception<br>
TU Berlin, Faculty IV Electrical Engineering and Computer Science<br>
Marchstrasse 23, Sekr. MAR 5-5, 10587 Berlin, Germany<br>
e-mail: <a href="mailto:guillermo.gallego@tu-berlin.de" target="_blank">guillermo.gallego@tu-berlin.de</a><br>
<a href="http://www.guillermogallego.es" target="_blank">www.guillermogallego.es</a><br>
Office phone: +49 30 314 70145</p>
</div>