[visionlist] [Meetings] CVPR'23 Workshop on Event-based Vision. Call for contributions: papers and demos

Gallego, Guillermo guillermo.gallego at tu-berlin.de
Wed Jan 18 08:32:46 -04 2023


Dear colleagues,

We are excited to announce the *CVPR 2023 - 4th International Workshop on
Event-based Vision*. Help us spread the word about the workshop! We have a
great panel of speakers and we are also accepting contributions. *Accepted
papers will be published by IEEE.*


*Workshop website:* https://tub-rip.github.io/eventvision2023/

*Timeline:*

   - *Paper submission deadline: March 20, 2023.* Submission website (CMT):
   <https://cmt3.research.microsoft.com/EVENTVISION2023>
   - Demo abstract submission: March 20, 2023
   - Notification to authors: April 3, 2023
   - Camera-ready paper: April 8, 2023 (firm deadline by IEEE)
   - *Workshop day: June 19, 2023. Second day of CVPR; full-day workshop.*

*Objectives:*

This workshop is dedicated to event-based cameras, smart cameras, and
algorithms processing data from these sensors. Event-based cameras are
bio-inspired sensors with the key advantages of microsecond temporal
resolution, low latency, very high dynamic range, and low power
consumption. Because of these advantages, event-based cameras open
frontiers that are unthinkable with standard frame-based cameras (which
have been the main sensing technology for the past 60 years). These
revolutionary sensors enable the design of a new class of algorithms to
track a baseball in the moonlight, build a flying robot with the agility of
a bee, and perform structure from motion in challenging lighting conditions
and at remarkable speeds. These sensors became commercially available in
2008 and are slowly being adopted in computer vision and robotics. In
recent years they have received attention from large companies: for example,
the event-sensor company Prophesee collaborated with Intel and Bosch on a
high-spatial-resolution sensor, Samsung announced mass production of a sensor
for hand-held devices, and event cameras have been used in various
applications together with neuromorphic chips such as IBM’s TrueNorth and
Intel’s Loihi. The workshop also considers novel vision sensors, such as pixel
processor arrays (PPAs), which perform massively parallel processing near
the image plane. Because early vision computations are carried out on-sensor,
the resulting systems achieve high speed and low power consumption, enabling
new embedded vision applications in areas such as robotics, AR/VR,
automotive, gaming, and surveillance. This workshop will cover the sensing
hardware, as well as the processing and learning methods needed to take
advantage of the above-mentioned novel cameras.
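
For readers new to the modality, here is a minimal illustrative sketch (not
part of the call) of what event data look like: each event encodes a per-pixel
brightness change as a tuple (x, y, timestamp, polarity), and a common first
step when interfacing with frame-based tools is to accumulate events over a
short time window into an image. The Python snippet below uses synthetic
events and a hypothetical sensor resolution purely for illustration.

# Illustrative sketch: accumulate a synthetic event stream into a
# signed event-count image. Sensor size and events are made up.
import numpy as np

WIDTH, HEIGHT = 346, 260          # hypothetical sensor resolution
rng = np.random.default_rng(0)

# Synthetic event stream: x, y, timestamp (seconds), polarity (+1/-1).
n = 10_000
xs = rng.integers(0, WIDTH, n)
ys = rng.integers(0, HEIGHT, n)
ts = np.sort(rng.uniform(0.0, 0.05, n))
ps = rng.choice([-1, 1], n)

# Accumulate the events from a 10 ms window into a 2D histogram.
mask = (ts >= 0.0) & (ts < 0.010)
frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
np.add.at(frame, (ys[mask], xs[mask]), ps[mask])

print("events in window:", int(mask.sum()),
      "frame range:", frame.min(), frame.max())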

*Call for Papers and Demos:*
Research papers and demos are solicited in, but not limited to, the
following topics:

   - Event-based / neuromorphic vision.
   - Algorithms: motion estimation, visual odometry, SLAM, 3D
   reconstruction, image intensity reconstruction, optical flow estimation,
   recognition, feature/object detection, visual tracking, calibration, sensor
   fusion (video synthesis, visual-inertial odometry, etc.).
   - Model-based, embedded, or learning-based approaches.
   - Event-based signal processing, representation, control, bandwidth
   control.
   - Event-based active vision, event-based sensorimotor integration.
   - Event camera datasets and/or simulators.
   - Applications in: robotics (navigation, manipulation, drones…),
   automotive, IoT, AR/VR, space science, inspection, surveillance, crowd
   counting, physics, biology.
   - Biologically-inspired vision and smart cameras.
   - Near-focal plane processing, such as pixel processor arrays (PPAs).
   - Novel hardware (cameras, neuromorphic processors, etc.) and/or
   software platforms, such as fully event-based systems (end-to-end).
   - New trends and challenges in event-based and/or biologically-inspired
   vision (SNNs, etc.).
   - Event-based vision for computational photography.
   - A longer list of related topics is available in the table of contents
   of the List of Event-based Vision Resources
   <https://github.com/uzh-rpg/event-based_vision_resources>

*Courtesy presentations:*
We also invite courtesy presentations of papers relevant to the workshop that
are accepted at the CVPR main conference or at other peer-reviewed
conferences or journals. These presentations provide visibility to your
work and help build a community around the topics of the workshop. These
contributions will be checked for relevance to the workshop, but will not
undergo a complete review, and will not be published in the workshop
proceedings. Please contact the organizers to make arrangements to showcase
your work at the workshop.

*Author guidelines:*
Research papers and demos are solicited in, but not limited to, the topics
listed above. Paper submissions must adhere to the CVPR 2023 paper
submission style, format and length restrictions. See the author guidelines
<https://cvpr2023.thecvf.com/Conferences/2023/AuthorGuidelines> and template
<https://media.icml.cc/Conferences/CVPR2023/cvpr2023-author_kit-v1_1-1.zip>
provided by the CVPR 2023 main conference. See also the policy of Dual/Double
Submissions of concurrently-reviewed conferences, such as ICCV
<https://iccv2023.thecvf.com/policies-361500-2-20-15.php>. Authors who wish to
avoid conflicts with such dual-submission policies may want to limit their
submission to four pages (excluding references). For demo abstracts, authors
are encouraged to submit an abstract of up to 2 pages.

Submissions will undergo a double-blind peer-review process via CMT. Accepted
papers will be published open access through the Computer Vision Foundation
(CVF) (see examples from the CVPR Workshops in 2019
<https://openaccess.thecvf.com/CVPR2019_workshops/CVPR2019_EventVision> and
2021 <https://openaccess.thecvf.com/CVPR2021_workshops/EventVision>). For
accepted papers, we encourage authors to include a paragraph on the ethical
considerations and impact of their work.



*Organizers:*

   - Guillermo Gallego <http://www.guillermogallego.es>, TU Berlin, ECDF,
   SCIoI (Germany)
   - Davide Scaramuzza <http://rpg.ifi.uzh.ch/people_scaramuzza.html>,
   University of Zurich (Switzerland)
   - Kostas Daniilidis <https://www.cis.upenn.edu/~kostas>, University of
   Pennsylvania (USA)
   - Cornelia Fermüller <http://users.umiacs.umd.edu/~fer>, University of
   Maryland (USA)
   - Davide Migliore <https://www.linkedin.com/in/davidemigliore>, Prophesee
   <https://www.prophesee.ai/> (France)

Looking forward to meeting you at the workshop!

Best regards,
The EventVision workshop organizers.

--
Prof. Dr. Guillermo Gallego
Robotic Interactive Perception
TU Berlin, Faculty IV Electrical Engineering and Computer Science
Marchstrasse 23, Sekr. MAR 5-5, 10587 Berlin, Germany
e-mail: guillermo.gallego at tu-berlin.de
www.guillermogallego.es
Office phone: +49 30 314 70145