Apologies for cross-posting

*******************************

* SHREC 2022 (http://www.shrec.net/) Track: Sketch-Based 3D Shape Retrieval in the Wild

* Website:
https://sites.google.com/site/firmamentqj/sbsrw

* Organizers:
- Jie Qin, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Shuaihang Yuan, New York University, New York, USA
- Jiaxin Chen, Beihang University, Beijing, China
- Boulbaba Ben Amor, IMT Nord Europe, France & Inception Institute of Artificial Intelligence, UAE
- Yi Fang, NYU Abu Dhabi, UAE & NYU Tandon, USA

============================ Objective =================================

The objective of this track is to evaluate the performance of different sketch-based 3D shape retrieval algorithms on a 2D free-hand sketch dataset and a 3D shape dataset, in a more realistic and challenging setting.

============================ Introduction ================================

Sketch-based 3D shape retrieval (SBSR) [1-3] has drawn a significant amount of attention, owing to the succinctness of free-hand sketches and the increasing demands from real applications. It is an intuitive yet challenging task due to the large discrepancy between the 2D and 3D modalities.

To foster research on this important problem, several tracks on related tasks have been held in past SHREC challenges, such as [4-7]. However, the datasets they adopted are not very realistic and thus cannot faithfully simulate real application scenarios. To mimic real-world scenarios, a dataset should meet two requirements. First, there should be a large domain gap between the two modalities, i.e., sketches and 3D shapes. Current datasets unintentionally narrow this gap by using projection-based/multi-view representations for 3D shapes (i.e., a 3D shape is manually rendered into a set of 2D images); in this way, the large 2D-3D domain discrepancy is unnecessarily reduced to a 2D-2D one. Second, the data from both modalities should themselves be realistic. More specifically, we need a wide variety of sketches per category, since real users have widely varying drawing skills, and we need 3D models captured in real-world settings rather than created artificially. However, sketches in existing datasets tend to be semi-photorealistic drawings by experts, and the number of sketches per category is quite limited; meanwhile, most 3D datasets used in SBSR consist of CAD models, which lack certain details compared to models scanned from real objects.

To circumvent the above limitations, this track proposes a more realistic and challenging setting for SBSR. On the one hand, we adopt highly abstract 2D sketches drawn by amateurs and bypass projection-based representations by representing 3D shapes directly as point clouds. On the other hand, we adopt a wide variety of free-hand sketches with numerous samples per category, as well as a collection of realistic point clouds captured from indoor objects. We therefore name this track ‘sketch-based 3D shape retrieval in the wild’ (SBSRW). As stated above, the term ‘in the wild’ is reflected in two respects: 1) the domain gap between the two modalities is realistic, as we adopt sketches at high abstraction levels and 3D point cloud data; 2) the data themselves mimic the real-world setting, as we adopt a wide variety of sketches (3,000 per category) and 3D point clouds captured from real objects.

======================= Tasks ===========================

We propose two tasks to evaluate the performance of different SBSR algorithms: sketch-based 3D CAD model (point cloud) retrieval and sketch-based realistic scanned model (point cloud) retrieval.

For the first task, we select around 2,500 3D CAD models from 47 classes of ModelNet40/ShapeNet, together with 3,000 sketches per corresponding category (141,000 sketch samples in total) from QuickDraw. We randomly select 2,500 sketches from each class for training, and the remaining 500 sketches per class are used as the test/query set. All the 3D point clouds together serve as the target/gallery set for evaluating retrieval performance. Participants are asked to submit their results on the test set.

For the second task, we select 2,000 realistic 3D models from 11 classes of ScanObjectNN, together with 3,000 sketches per class (33,000 sketch samples in total) from QuickDraw. As in the first task, we randomly select 2,500 sketches from each class for training, the remaining 500 sketches per class are used as the test/query set, and all the 3D point clouds together serve as the target/gallery set. Participants are asked to submit their results on the test set.
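
Both tasks share the same split protocol (2,500 training and 500 test/query sketches per class). Purely for illustration, the minimal Python sketch below reproduces such a split; the dict layout, function name, and random seed are assumptions, and the official training/test partition shipped with the dataset release is authoritative.

    import random

    def split_sketches(sketches_by_class, n_train=2500, seed=0):
        """Randomly split each class into 2,500 training and 500 test/query
        sketches. `sketches_by_class` maps a class name to its list of
        3,000 sketch file paths (hypothetical layout)."""
        rng = random.Random(seed)
        train, test = {}, {}
        for cls, paths in sketches_by_class.items():
            paths = list(paths)
            rng.shuffle(paths)
            train[cls] = paths[:n_train]  # 2,500 training sketches
            test[cls] = paths[n_train:]   # remaining 500 test/query sketches
        return train, test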

======================= Evaluation Method ===========================

For a comprehensive evaluation of different algorithms, we employ the performance metrics widely adopted in SBSR: nearest neighbor (NN), first tier (FT), second tier (ST), E-measure (E), discounted cumulated gain (DCG), mean average precision (mAP), and the precision-recall (PR) curve. We will provide the source code to compute all the aforementioned metrics.
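
Until the official code is released, the following Python sketch illustrates how NN, FT, ST, and mAP can be computed from a query-by-gallery distance matrix (E-measure, DCG, and the PR curve follow the same ranking-based pattern and are omitted for brevity). Variable names and tie-breaking conventions are assumptions; the official source code remains authoritative.

    import numpy as np

    def evaluate(dist, query_labels, gallery_labels):
        """Compute NN, FT, ST, and mAP from a (num_queries x num_gallery)
        distance matrix, where smaller distances mean better matches."""
        dist = np.asarray(dist)
        query_labels = np.asarray(query_labels)
        gallery_labels = np.asarray(gallery_labels)
        nn, ft, st, ap = [], [], [], []
        for i in range(dist.shape[0]):
            order = np.argsort(dist[i])                     # ranked gallery indices
            rel = gallery_labels[order] == query_labels[i]  # relevance flags
            c = int(rel.sum())                              # relevant shapes for this query
            if c == 0:
                continue                                    # class absent from gallery
            nn.append(float(rel[0]))                        # top-1 retrieval correct?
            ft.append(rel[:c].sum() / c)                    # recall within top C
            st.append(rel[:2 * c].sum() / c)                # recall within top 2C
            hits = np.flatnonzero(rel)                      # 0-based ranks of relevant items
            ap.append((np.arange(1, c + 1) / (hits + 1)).mean())  # average precision
        return {"NN": np.mean(nn), "FT": np.mean(ft),
                "ST": np.mean(st), "mAP": np.mean(ap)}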

======================= Procedure ===========================

The following is a step-by-step description of the activities:

- The participants register for the track by sending an email to qinjiebuaa@gmail.com with 'SHREC 2022 - SBSRW Track Registration' as the subject, indicating which task(s) they are interested in.
- The organizers release the dataset via the track website.
- The participants submit the distance matrices for the test sets, along with one-page descriptions of their methods (see the sketch after this list).
- Evaluation is performed automatically on the submitted matrices, computing all the performance metrics via the official source code.
- The organizers announce the results and the final ranking of all participants.
- The track results are combined into a joint paper, which is subject to a two-stage peer review process. Accepted papers will be published in Computers & Graphics.
- The track and its results will be presented at the Eurographics 2022 Symposium on 3D Object Retrieval (1-2 September 2022).
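
For the submission step above, the deliverable per task is a distance matrix with one row per query sketch and one column per gallery shape, where (by the usual SHREC convention) smaller values indicate better matches. The snippet below is a minimal sketch of producing such a matrix from precomputed embeddings; the file names, the cosine-distance choice, and the text output format are assumptions, pending the official submission instructions.

    import numpy as np

    # Hypothetical precomputed, L2-normalized features from the participant's
    # own sketch and shape encoders.
    sketch_feats = np.load("sketch_test_features.npy")   # (num_queries, d)
    shape_feats = np.load("shape_gallery_features.npy")  # (num_gallery, d)

    # Cosine distance between every query sketch and every gallery shape.
    dist = 1.0 - sketch_feats @ shape_feats.T            # (num_queries, num_gallery)
    np.savetxt("distance_matrix.txt", dist)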

======================= Schedule ===========================

- January 1: Call for participation.
- January 15: Release a few sample sketches and 3D models.
- January 22: Registration deadline.
- January 29: Release the training set for the first task.
- February 5: Release the training set for the second task.
- February 28: Submission deadline for the first task.
- March 4: Submission deadline for the second task.
- March 8: Release the final results for both tasks; jointly write the track report.
- March 15: Submission deadline for the joint paper for C&G review.

*******************************

We look forward to your participation!

Best regards,

Jie Qin
Professor
College of Computer Science and Technology
Nanjing University of Aeronautics and Astronautics (NUAA)
Nanjing, Jiangsu 211106, China