[visionlist] [CfP] 7th Instance-Level Recognition and Generation Workshop @ ICCV2025
Giorgos Kordopatis-Zilos
kordogeo at fel.cvut.cz
Wed May 7 02:01:17 -05 2025
[Apologies if you receive multiple copies]
*Call for Papers – ILR+G at ICCV2025*
*7th Instance-Level Recognition and Generation Workshop*
International Conference on Computer Vision, ICCV 2025
<https://iccv.thecvf.com/>
Honolulu, Hawaii, October 19-20, 2025
https://ilr-workshop.github.io/ICCVW2025/
The *Instance-Level Recognition and Generation (ILR+G)* Workshop aims to
explore computer vision tasks focusing on specific instances rather than
broad categories, covering both recognition (instance-level recognition
- *ILR*) and generation (instance-level generation - *ILG*). Unlike
category-level recognition, which assigns broad class labels based on
semantics (e.g., “a painting”), ILR focuses on *distinguishing specific
instances*, assigning class labels that refer to particular objects or
events (e.g., “Blue Poles” by Jackson Pollock), enabling recognition,
retrieval, and tracking at the finest granularity. This year, the
workshop also covers ILG, also known as personalized generation, which
involves synthesizing new media that *preserve the visual identity* of a
particular instance while varying context or appearance, often guided by
text. We encourage the exploration of synergies between ILR and ILG,
such as using recognition as a foundation for instance-conditioned
generation, or leveraging generative models to boost ILR in low-data or
open-set scenarios.
*Relevant topics* include (but are not limited to):
* instance-level object classification, detection, segmentation, and
pose estimation
* particular object (instance-level) and event retrieval
* personalized (instance-level) image and video generation
* cross-modal/multi-modal recognition at instance-level
* other ILR tasks such as image matching, place recognition, video
tracking, moment retrieval
* other ILR+G applications or challenges
* ILR+G datasets and benchmarking
*Submission details*
We call for novel and unpublished work in the format of long papers (up
to 8 pages) and short papers (up to 4 pages). Papers should follow the
ICCV proceedings style and will be reviewed in a double-blind fashion.
Submissions may be made to either of two tracks: (1) /in-proceedings
papers/ – long papers that will be published in the conference
proceedings, and (2) /out-of-proceedings papers/ – long or short papers
that will not be included in the proceedings. *Note that according to
the ICCV guidelines, papers longer than four pages are considered
published, even if they do not appear in the proceedings*. Selected long
papers from both tracks will be invited for oral presentations; all
accepted papers will be presented as posters.
*Important dates*
/in-proceedings papers/
* submission deadline: *June 7, 2025*
* notification of acceptance: *June 21, 2025*
* camera-ready papers due: *June 27, 2025*
/out-of-proceedings papers/
* submission deadline: *June 30, 2025*
* notification of acceptance: *July 18, 2025*
* camera-ready papers due: *July 25, 2025*
*Organizing committee*
Andre Araujo, Google DeepMind
Bingyi Cao, Google DeepMind
Kaifeng Chen, Google DeepMind
Ondrej Chum, Czech Technical University in Prague
Noa Garcia, Osaka University
Guangxing Han, Google DeepMind
Giorgos Kordopatis-Zilos, Czech Technical University in Prague
Giorgos Tolias, Czech Technical University in Prague
Hao Yang, Amazon
Nikolaos-Antonios Ypsilantis, Czech Technical University in Prague
Xu Zhang, Amazon