[visionlist] Call For Participation: SHREC 2022 Track - Sketch-Based 3D Shape Retrieval in the Wild

Jie Qin qinjiebuaa at gmail.com
Sun Jan 9 03:15:58 -04 2022


Apologies for cross-posting

*******************************



* SHREC 2022 (http://www.shrec.net/) Track: Sketch-Based 3D Shape Retrieval
in the Wild



* Website:

https://sites.google.com/site/firmamentqj/sbsrw



* Organizers:

- Jie Qin, Nanjing University of Aeronautics and Astronautics, Nanjing,
China

- Shuaihang Yuan, New York University, New York, USA

- Jiaxin Chen, Beihang University, Beijing, China

- Boulbaba Ben Amor, IMT Nord Europe, France & Inception Institute of
Artificial Intelligence, UAE

- Yi Fang, NYU Abu Dhabi, UAE and NYU Tandon, USA



============================ Objective =================================

The objective of this track is to evaluate the performance of different
sketch-based 3D shape retrieval algorithms using a 2D free-hand sketch
dataset and a 3D shape dataset in a more realistic and challenging setting.



============================ Introduction ================================

Sketch-based 3D shape retrieval (SBSR) [1-3] has drawn a significant amount
of attention, owing to the succinctness of free-hand sketches and the
increasing demand from real-world applications. It is an intuitive yet
challenging task due to the large discrepancy between the 2D and 3D
modalities.



To foster research on this important problem, several tracks focusing on
related tasks have been held in past SHREC challenges, such as [4-7].
However, the datasets they adopted are not very realistic, and thus cannot
faithfully simulate real application scenarios. To mimic the real-world
scenario, a dataset should meet the following requirements. First, there
should be a large domain gap between the two modalities, *i.e.*, sketches
and 3D shapes. However, current datasets unintentionally narrow this gap by
using projection-based/multi-view representations for 3D shapes (*i.e.*, a
3D shape is manually rendered into a set of 2D images). In this way, the
large 2D-3D domain discrepancy is unnecessarily reduced to a 2D-2D one.
Second, the data from both modalities should themselves be realistic.
Specifically, we need a full variety of sketches per category, as real
users possess varying drawing skills; as for 3D shapes, we need models
captured in real-world settings rather than created artificially. However,
sketches in existing datasets tend to be semi-photorealistic drawings made
by experts, and the number of sketches per category is quite limited;
meanwhile, most 3D datasets currently used in SBSR are composed of CAD
models, which lack certain details compared to models scanned from real
objects.



To address the above limitations, this track proposes a more realistic and
challenging setting for SBSR. On the one hand, we adopt highly abstract 2D
sketches drawn by amateurs and, at the same time, bypass projection-based
representations by representing 3D shapes directly as point clouds. On the
other hand, we adopt a full variety of free-hand sketches with numerous
samples per category, as well as a collection of realistic point clouds
scanned from indoor objects. We therefore name this track ‘sketch-based 3D
shape retrieval in the wild’ (SBSRW). As stated above, the term ‘in the
wild’ is reflected in two respects: 1) the domain gap between the two
modalities is realistic, as we adopt sketches of high abstraction levels
and 3D point cloud data; 2) the data themselves mimic the real-world
setting, as we adopt a full variety of sketches (3,000 per category) and 3D
point clouds captured from real objects.



======================= Tasks ===========================

We propose two tasks to evaluate the performance of different SBSR
algorithms, *i.e.*, sketch-based 3D CAD model (point cloud data) retrieval
and sketch-based realistic scanned model (point cloud data) retrieval.



For *the first task*, we select around 2,500 3D CAD models from 47 classes
of ModelNet40/ShapeNet and 3,000 sketches per corresponding category
(141,000 sketch samples in total) from QuickDraw. We randomly select 2,500
sketches from each class for training, and the remaining 500 sketches per
class are used for testing/querying. All the 3D point clouds as a whole
serve as the target/gallery set for evaluating retrieval performance.
Participants are asked to submit their results on the test set.



For *the second task*, we select 2,000 realistic 3D models from 11 classes
of ScanObjectNN and 3,000 sketches per class (33,000 sketch samples in
total) from QuickDraw. As in the first task, we randomly select 2,500
sketches from each class for training, and the remaining 500 sketches per
class are used for testing/querying. All the 3D point clouds as a whole
serve as the target/gallery set for evaluating retrieval performance.
Participants are asked to submit their results on the test set.
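
For concreteness, below is a minimal sketch (in Python) of the per-class
split protocol described above; the directory layout, file format, and
random seed are illustrative assumptions, not the official release format.

    # Hypothetical illustration of the 2,500/500 per-class sketch split.
    # The layout "sketches/<class>/*.png" and the seed are assumptions.
    import random
    from pathlib import Path

    def split_class(class_dir: Path, n_train: int = 2500, seed: int = 0):
        """Randomly split one class's sketches into train and test/query lists."""
        files = sorted(class_dir.glob("*.png"))   # all sketches of this class
        random.Random(seed).shuffle(files)        # deterministic shuffle
        return files[:n_train], files[n_train:]   # 2,500 train / 500 test

    train_files, test_files = split_class(Path("sketches/chair"))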



======================= Evaluation Method ===========================

For a comprehensive evaluation of different algorithms, we employ the
following performance metrics widely adopted in SBSR: nearest neighbor
(NN), first tier (FT), second tier (ST), E-measure (E), discounted
cumulative gain (DCG), mean average precision (mAP), and the
precision-recall (PR) curve. We will provide the source code to compute
all of these metrics.
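
As a rough illustration only (the official evaluation code remains
authoritative), the following sketch shows how two of these metrics, NN and
mAP, could be computed from a query-by-gallery distance matrix; it assumes
smaller distances mean higher similarity, and that a gallery item is
relevant when its class label matches the query's.

    import numpy as np

    def nn_and_map(dist, q_labels, g_labels):
        """dist: (n_query, n_gallery) distances; returns (NN accuracy, mAP)."""
        order = np.argsort(dist, axis=1)            # gallery ranked best-first
        rel = g_labels[order] == q_labels[:, None]  # relevance at each rank
        nn = rel[:, 0].mean()                       # nearest-neighbor precision
        prec = np.cumsum(rel, axis=1) / np.arange(1, rel.shape[1] + 1)
        ap = (prec * rel).sum(axis=1) / np.maximum(rel.sum(axis=1), 1)
        return nn, ap.mean()                        # mean average precision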



======================= Procedure ===========================

The following list is a step-by-step description of the activities:

   - The participants register for the track by sending an email to
   qinjiebuaa at gmail.com with 'SHREC 2022 - SBSRW Track Registration' as
   the subject, indicating which task(s) they are interested in.
   - The organizers release the dataset via their website.
   - The participants submit the distance matrices for the test sets,
   together with one-page descriptions of their methods (see the sketch of
   producing such a matrix after this list).
   - Evaluation is performed automatically on the submitted matrices by
   computing all the performance metrics with the official source code.
   - The organizers announce the results and the final ranking of all
   participants.
   - The track results are combined into a joint paper, which undergoes a
   two-stage peer-review process; if accepted, it will be published in
   Computers & Graphics.
   - The track and its results will be presented at the Eurographics 2022
   Symposium on 3D Object Retrieval (1-2 September 2022).
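
For reference, here is a minimal sketch of how a query-by-gallery distance
matrix could be produced from learned embeddings; the feature dimensions,
placeholder features, and plain-text output format are assumptions for
illustration, and the organizers' submission instructions are
authoritative.

    import numpy as np
    from scipy.spatial.distance import cdist

    # Placeholder embeddings; in practice these come from trained encoders.
    query_feats = np.random.randn(500, 256)     # test-sketch embeddings
    gallery_feats = np.random.randn(2500, 256)  # 3D-shape embeddings

    # Rows are queries, columns are gallery shapes; smaller = more similar.
    dist = cdist(query_feats, gallery_feats, metric="euclidean")
    np.savetxt("distances.txt", dist)           # assumed plain-text format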



======================= Schedule ===========================

   - January 1: Call for participation.
   - January 15: Release a few sample sketches and 3D models.
   - January 22: Registration deadline.
   - January 29: Release the training set for the first task.
   - February 5: Release the training set for the second task.
   - February 28: Submission deadline for the first task.
   - March 4: Submission deadline for the second task.
   - March 8: Release the final results for both tasks; jointly write the
   track report.
   - March 15: Submission deadline for the joint paper for C&G review.



*******************************



We look forward to your participation!



Best Regards,



Jie Qin



Professor

College of Computer Science and Technology

Nanjing University of Aeronautics and Astronautics (NUAA)

Nanjing, Jiangsu 211106, China