[visionlist] Commands for Autonomous Vehicles Workshop @ ECCV2020
simon.vandenhende at kuleuven.be
Wed Apr 15 09:28:15 -04 2020
We are happy to invite you to join the first 'Commands for Autonomous Vehicles' (C4AV) workshop. The topic of the workshop is multi-modal learning - language and vision - in a practical setting. Participants are invited to present their work as a poster or oral presentation during the workshop (see the call for papers: https://c4av-2020.github.io/).
We will be accepting papers in the following areas:
* Visual Dialog
* Multi-modal feature learning
* Object Referral/Visual Grounding
* Visual Question Answering
* Embodied Question Answering
* Zero-shot/Few-shot in multi-modal learning
* Applications in joint text/image understanding
Additionally, we invite you to compete in the C4AV challenge. Given a natural language command expressing an action the autonomous vehicle should take, participants must develop a model that identifies the referred object in the scene. The challenge is based on the Talk2Car dataset (EMNLP 2019), which extends the nuScenes dataset for autonomous driving with a visual grounding task. Top-performing teams can win prizes and will be invited to present their work at the C4AV workshop at ECCV 2020.
For more information, visit https://c4av-2020.github.io/. We look forward to seeing you on the leaderboard.
The C4AV team