<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-text-html" lang="x-unicode">
<p>[Apologies if you receive multiple copies]</p>
<p><font size="4"><b>C</b></font><font size="4"><b>all for Papers
– ILR+G@ICCV2025</b></font><br>
<br>
<font size="4"><b>7th Instance-Level Recognition and Generation
Workshop </b></font><br>
International Conference on Computer Vision, <a
href="https://iccv.thecvf.com/">ICCV 2025</a><br>
Honolulu, Hawaii, October 19-20, 2025<br>
<a class="moz-txt-link-freetext"
href="https://ilr-workshop.github.io/ICCVW2025/">https://ilr-workshop.github.io/ICCVW2025/<br>
</a><br>
The <b>Instance-Level Recognition and Generation (ILR+G)</b>
Workshop aims to explore computer vision tasks focusing on
specific instances rather than broad categories, covering both
recognition (instance-level recognition - <b>ILR</b>) and
generation (instance-level generation - <b>ILG</b>). Unlike
category-level recognition, which assigns broad class labels
based on semantics (e.g., “a painting”), ILR focuses on <b>distinguishing
specific instances</b>, assigning class labels that refer to
particular objects or events (e.g., “Blue Poles” by Jackson
Pollock), enabling recognition, retrieval, and tracking at the
finest granularity. This year, the workshop additionally covers
ILG, also known as personalized generation, which involves
synthesizing new media that <b>preserve the visual identity</b>
of a particular instance while varying context or appearance,
often guided by text. We encourage the exploration of synergies
between ILR and ILG, such as using recognition as a foundation
for instance-conditioned generation, or leveraging generative
models to boost ILR in low-data or open-set scenarios.<br>
</p>
<p><b>Relevant topics</b> include (but are not limited to):<br>
</p>
<ul>
<li>instance-level object classification, detection,
segmentation, and pose estimation</li>
<li>particular object (instance-level) and event retrieval</li>
<li>personalized (instance-level) image and video generation</li>
<li>cross-modal/multi-modal recognition at the instance level</li>
<li>other ILR tasks such as image matching, place recognition,
video tracking, moment retrieval</li>
<li>other ILR+G applications or challenges</li>
<li>ILR+G datasets and benchmarking</li>
</ul>
<p> <br>
<font size="4"><b>Submission details</b></font><br>
We call for novel and unpublished work in the form of long
papers (up to 8 pages) and short papers (up to 4 pages). Papers
should follow the ICCV proceedings style and will be reviewed in
a double-blind fashion. Submissions may be made to either of two
tracks: (1) <i>in-proceedings papers</i> – long papers that
will be published in the conference proceedings, and (2) <i>out-of-proceedings
papers</i> – long or short papers that will not be included in
the proceedings. <b>Note that according to the ICCV guidelines,
papers longer than four pages are considered published, even
if they do not appear in the proceedings</b>. Selected long
papers from both tracks will be invited for oral presentations;
all accepted papers will be presented as posters.<br>
<br>
<font size="4"><b>Important dates</b></font><br>
<i>in-proceedings papers<br>
</i></p>
<ul>
<li>submission deadline: <b>June 7, 2025</b></li>
<li>notification of acceptance: <b>June 21, 2025</b></li>
<li>camera-ready papers due: <b>June 27, 2025</b></li>
</ul>
<p><i>out-of-proceedings papers</i><br>
</p>
<ul>
<li>submission deadline: <b>June 30, 2025</b></li>
<li>notification of acceptance: <b>July 18, 2025</b></li>
<li>camera-ready papers due: <b>July 25, 2025</b></li>
</ul>
<font size="4"><b>Organizing committee</b></font><br>
Andre Araujo, Google DeepMind<br>
Bingyi Cao, Google DeepMind<br>
Kaifeng Chen, Google DeepMind<br>
Ondrej Chum, Czech Technical University in Prague<br>
Noa Garcia, Osaka University<br>
Guangxing Han, Google DeepMind<br>
Giorgos Kordopatis-Zilos, Czech Technical University in Prague<br>
Giorgos Tolias, Czech Technical University in Prague<br>
Hao Yang, Amazon<br>
Nikolaos-Antonios Ypsilantis, Czech Technical University in Prague<br>
Xu Zhang, Amazon<br>
</p>
</div>
</body>
</html>