<div dir="auto"><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">We apologise in advance for multiple reposts/copies.</div><div>
<div dir="ltr">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">
<b style="font-family:inherit;font-size:inherit;font-style:inherit;font-variant-ligatures:inherit;font-variant-caps:inherit">CALL FOR PAPERS <span style="color:rgb(0,0,0);display:inline!important;background-color:rgb(255,255,255)">3D-DLAD-v3 2021</span></b><br>
</div>
<div>
<div dir="ltr">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">
<div><br>
</div>
<div><b>3D-DLAD-v3 (Third 3D Deep Learning for Autonomous Driving) </b><span style="color:rgb(0,0,0);background-color:rgb(255,255,255);display:inline!important"><b>workshop</b> </span>is the sixth workshop organized as part of the DLAD workshop series. It is organized
as part of the flagship automotive conference, the IEEE Intelligent Vehicles Symposium: <a href="https://2021.ieee-iv.org/" title="https://2021.ieee-iv.org/" target="_blank" rel="noreferrer">
https://2021.ieee-iv.org/</a>. </div>
<div><br>
</div>
<div>Deep Learning has become a de facto tool in computer vision and 3D processing, boosting performance and accuracy for diverse tasks such as object classification, detection, optical flow estimation, motion segmentation, and mapping. Lidar sensors play an important role in the development of autonomous vehicles, as they overcome several drawbacks of camera-based systems, such as degraded performance under changes in illumination and weather conditions. In addition, Lidar sensors capture a wider field of view and directly provide 3D information, which is essential for ensuring the safety of the different agents and obstacles in the scene. At the same time, processing more than 100k points per scan in real time within modern perception pipelines remains computationally challenging. Motivated by these considerations, and to address the growing interest in deep representation learning for Lidar point clouds in both academic and industrial autonomous-driving research, we invite submissions to the
current workshop to disseminate the latest research.</div>
<div><br>
</div>
<div>We are soliciting contributions on deep learning for 3D data applied to autonomous driving, in (but not limited to) the following topics. Please feel free to contact us if you have any questions.
</div>
<div><br>
</div>
<div><b>TOPICS</b> : </div>
<div>Deep Learning for Lidar-based clustering, road extraction, object detection, and/or tracking</div>
<div>Deep Learning for radar point clouds</div>
<div>Deep Learning for ToF sensor-based driver monitoring</div>
<div>New Lidar-based technologies and sensors</div>
<div>Deep Learning for Lidar localization, VSLAM, meshing, and point-cloud inpainting</div>
<div>Deep Learning for odometry and map/HD-map generation with Lidar cues</div>
<div>Deep fusion of automotive sensors (Lidar, camera, radar)</div>
<div>Design of datasets and active learning methods for point clouds</div>
<div>Synthetic Lidar sensors &amp; simulation-to-real transfer learning</div>
<div>Cross-modal feature extraction for sparse-output sensors like Lidar</div>
<div>Generalization techniques for different Lidar sensors, multi-Lidar setups, and point densities</div>
<div>Lidar-based maps, HD maps, prior maps, and occupancy grids</div>
<div>Real-time implementation on embedded platforms (efficient design &amp; hardware accelerators)</div>
<div>Challenges of deployment in a commercial system (functional safety &amp; high accuracy)</div>
<div>End-to-end learning of driving with Lidar information (single model &amp; modular end-to-end)</div>
<div>Deep Learning for dense Lidar point-cloud generation from sparse Lidars and other modalities</div>
<div><br>
</div>
<div><br>
</div>
<div><b>Workshop link </b>: <a href="https://sites.google.com/view/3d-dlad-v3-iv2021/home" title="https://sites.google.com/view/3d-dlad-v3-iv2021/home" target="_blank" rel="noreferrer">
https://sites.google.com/view/3d-dlad-v3-iv2021/home</a></div>
<div><b>Submission instructions </b>: <a href="https://2021.ieee-iv.org/information-for-authors/" title="https://2021.ieee-iv.org/information-for-authors/" target="_blank" rel="noreferrer">
https://2021.ieee-iv.org/information-for-authors/</a></div>
<div><br>
</div>
<div><b>Location</b> : Nagoya, Japan</div>
<div><b>Submission</b> : 15th March 2021 (firm deadline, no extension)</div>
<div><b>Acceptance Notification</b> : 25th April 2021 </div>
<div><b>Workshop Date</b> : 11th July 2021</div>
<div><b>Contact</b>: <a href="mailto:ravi.kiran@navya.tech" target="_blank" rel="noreferrer">ravi.kiran@navya.tech</a> and <a href="mailto:senthil.yogamani@valeo.com" target="_blank" rel="noreferrer">senthil.yogamani@valeo.com</a></div>
<div><br>
</div>
<div><b>Workshop Organizers</b>: </div>
<div>B Ravi Kiran, Navya, France</div>
<div>Senthil Yogamani, Valeo Vision Systems, Ireland</div>
<div>Victor Vaquero, IVEX.ai, Belgium</div>
<div>Patrick Perez, Valeo.AI, France </div>
<div>Bharanidhar Duraisamy, Daimler, Germany</div>
<div>Dan Levi, GM, Israel</div>
<div>Abhinav Valada, University of Freiburg, Germany</div>
<div>Lars Kunze, Oxford University, UK</div>
<div>Markus Enzweiler, Daimler, Germany</div>
<div>Ahmad El Sallab, Valeo AI Research, Egypt</div>
<div>Sumanth Chennupati, Wyze Labs, USA</div>
<div>Stefan Milz, Spleenlab.ai, Germany</div>
<div>Hazem Rashed, Valeo AI Research, Egypt</div>
<div>Jean-Emmanuel Deschaud, MINES ParisTech, France</div>
Kuo-Chin Lien, Appen, USA<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">
Naveen Shankar Nagaraja, BMW Group, Germany<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255)">
<br>
</div>
</div>
</div>
</div>
</div>
</div></div>