[visionlist] [Deadline March 7th] CVPRW'19 on fashion and subjective search

Xavier Alameda-Pineda xavier.alameda-pineda at inria.fr
Thu Feb 14 11:54:27 -04 2019


Key information 

    * March 7th — Workshop paper submission deadline 
    * April 3rd — Author notification 
    * April 10th — Camera-ready 

We use the same formatting template as CVPR 2019, and we welcome two kinds of submissions (through https://cmt3.research.microsoft.com/FFSSUSAD2019): 

    * Full papers presenting new contributions (8 pages, NOT including references) 
    * Short papers describing incremental or preliminary work (2 pages, NOT including references) 

Scope 

The workshop we propose for CVPR 2019 has a specific focus on Fashion and Subjective Search (hence the name FFSS-USAD). Indeed, fashion [1,2] is influenced by subjective perceptions as well as societal trends, and thus encompasses many of the subjective attributes (both individual and collective) described on the USAD project page (http://project.inria.fr/usad). Fashion is therefore a highly relevant application for research on the subjective understanding of data, and at the same time has great economic and societal impact. Moreover, one of the hardest associated tasks is retrieval (and thus search) of visual content based on subjective attributes of the data [3-5]. 

The automatic analysis and understanding of fashion in computer vision is attracting growing interest, with direct applications in marketing and advertising, but also as a social phenomenon in relation to social media and trends. Exemplar tasks include the creation of capsule wardrobes [6]. More fundamental studies address the design of unsupervised techniques to learn a visual embedding guided by fashion style [7]. The task of fashion artifact/landmark localization has also been addressed [8], jointly with the creation of a large-scale dataset. Another research line consists in learning visual representations for visual fashion search [9]. The effect of social media tags on the training of deep architectures for image search and retrieval has also been investigated [10]. 
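
To make the retrieval setting concrete, the sketch below (not the method of any cited paper) illustrates embedding-based fashion search under simple assumptions: a generic pretrained torchvision ResNet-50 stands in for a fashion-specific embedding, the image file names are placeholders, and gallery items are ranked by cosine similarity to a query.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Sketch only: a generic pretrained backbone as feature extractor;
    # a fashion-specific embedding (e.g. along the lines of [7], [9]) would replace it.
    model = models.resnet50(pretrained=True)
    model.fc = torch.nn.Identity()  # keep the 2048-d pooled feature
    model.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(paths):
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        return torch.nn.functional.normalize(model(batch), dim=1)  # unit-norm embeddings

    # Placeholder file names for a toy gallery and a query image.
    gallery = embed(["dress_01.jpg", "dress_02.jpg", "jacket_01.jpg"])
    query = embed(["query.jpg"])
    scores = query @ gallery.T                      # cosine similarities
    print(scores.argsort(dim=1, descending=True))   # gallery indices ranked by similarity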

We seek contributions on the following topics: 

    * Collecting large-scale datasets annotated with fashion (subjective) criteria. 
    * Learning visual representations specifically tailored for fashion and exploitable for subjective search. 
    * Reliably evaluating the accuracy of detectors/classifiers of subjective properties. 
    * Translating (social) psychology theories into computational approaches to understand the perception of fashion, and its social dimension. 

References 

    1. Compare and Contrast: Learning Prominent Visual Differences. S. Chen and K. Grauman. In CVPR 2018. 
    2. Semantic Jitter: Dense Supervision for Visual Comparisons via Synthetic Images. A. Yu and K. Grauman. In ICCV 2017. 
    3. Deep image retrieval: Learning global representations for image search. A. Gordo, J. Almazán, J. Revaud, D. Larlus. In ECCV 2016. 
    4. End-to-end learning of deep visual representations for image retrieval. A. Gordo, J. Almazan, J. Revaud, D. Larlus. IJCV 2017. 
    5. Beyond instance-level image retrieval: Leveraging captions to learn a global visual representation for semantic retrieval. A. Gordo, D. Larlus. In CVPR 2017. 
    6. Creating capsule wardrobes from fashion images. W.-L. Hsiao, K. Grauman. In CVPR 2018. 
    7. Learning the latent “look”: unsupervised discovery of a style-coherent embedding from fashion images. W.-L. Hsiao, K. Grauman. In ICCV 2017. 
    8. Runway to Realway: Visual Analysis of Fashion. S. Vittayakorn, K. Yamaguchi, A. C. Berg, T. L. Berg. In WACV 2015. 
    9. Learning Attribute Representations with Localization for Flexible Fashion Search. K. E. Ak, A. A. Kassim, J. H. Lim, J. Y. Tham. In CVPR 2018. 
    10. Weakly Supervised Deep Metric Learning for Community-Contributed Image Retrieval. Z. Li, J. Tang. IEEE TMM 2015. 

Xavi, Miriam, Diane, Kristen, Nicu and Shih-Fu 