<div dir="ltr"><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:12pt;margin-left:0in;text-align:justify"><b><span style="font-size:10pt;font-family:Arial,sans-serif">IEEE VR 2023 Workshop on Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality (MASSXR)</span></b><span style="font-size:10pt;font-family:Arial,sans-serif"></span></p><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:11pt;margin-left:0in;text-align:justify"><b><span style="font-size:10pt;font-family:Arial,sans-serif">Location and date</span></b><span style="font-size:10pt;font-family:Arial,sans-serif"></span></p><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:11pt;margin-left:0in;text-align:justify"><span style="font-size:10pt;font-family:Arial,sans-serif">The workshop IEEE-MASSXR will take place during the 30th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2023) conference, which will be held from March 25-29, 2023, in Shanghai, China.</span><span style="font-size:10pt;font-family:Arial,sans-serif"></span></p><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:11pt;margin-left:0in;text-align:justify"><span style="font-size:10pt;font-family:Arial,sans-serif">IEEE-MASSXR is a half-day workshop, and it will be held online on 25th March. For more information, please visit the workshop’s website:</span><span style="font-size:10pt;font-family:Arial,sans-serif"></span></p><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:11pt;margin-left:0in;text-align:justify"><span style="font-size:10pt;font-family:Arial,sans-serif"><a href="https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsites.google.com%2Fview%2Fmassxrworkshop2023&data=05%7C01%7Coya.celiktutan%40kcl.ac.uk%7C4e3be836bcff4dd7752208dae1a5c428%7C8370cf1416f34c16b83c724071654356%7C0%7C0%7C638070399325214061%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=dqHmgAmzyL2QSUG8PQgtK3kmQ0dO7U8H89oaRICOCok%3D&reserved=0" title="Original URL:
https://sites.google.com/view/massxrworkshop2023

Click to follow link." style="color:rgb(149,79,114)"><span style="color:rgb(17,85,204)">https://sites.google.com/view/massxrworkshop2023</span></a> </span></p><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:11pt;margin-left:0in;text-align:justify"><b><span style="font-size:10pt;font-family:Arial,sans-serif">Description</span></b><span style="font-size:10pt;font-family:Arial,sans-serif"></span></p><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:11pt;margin-left:0in;text-align:justify"><span style="font-size:10pt;font-family:Arial,sans-serif">With the recent advances in immersive technologies such as realistic digital humans, off-the-shelf XR devices with capabilities to capture users’ speech, faces, hands, and bodies, and the development of sophisticated data-driven AI algorithms, there is a great potential for automatic analysis and synthesis of social and affective cues in XR.  Although affective and social signal understanding and synthesis are studied in other fields (e.g., for human-robot interaction, intelligent virtual agents, or computer vision), it has not yet been explored adequately in Virtual and Augmented Reality. This demands extended-reality-specific theoretical and methodological foundations. Particularly, this workshop focuses on the following research questions:</span><span style="font-size:10pt;font-family:Arial,sans-serif"></span></p><ul type="disc" style="margin-bottom:0in;color:rgb(0,0,0);font-family:Calibri;margin-top:0in"><li class="MsoNormal" style="margin:11pt 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">How can we sense the user’s affective and social states using sensors available in XR?</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">How can we collect users’ interaction data in immersive situations?</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">How can we generate affective and social cues for digital humans/avatars in immersive interactions enabled by dialogue, voice, and non-verbal behaviors?</span></li><li class="MsoNormal" style="margin:0in 0in 11pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">How can we develop systematic methodologies and techniques to develop  plausible, trustable, personalized behaviors for social and affective interaction in XR?</span></li></ul><p style="color:rgb(0,0,0);font-family:Calibri;margin-right:0in;margin-bottom:12pt;margin-left:0in;text-align:justify"><span style="font-size:10pt;font-family:Arial,sans-serif">The objective of this workshop on<span class="gmail-Apple-converted-space"> </span></span><b><span style="font-size:10pt;font-family:Arial,sans-serif">Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality</span></b><span style="font-size:10pt;font-family:Arial,sans-serif"><span class="gmail-Apple-converted-space"> </span>is to bring together researchers and practitioners working in the field of social and affective computing with the ones on 3D computer vision and computer graphics/animation and discuss the current state and future directions, 
<p><b>Scope</b></p>

<p>This workshop invites researchers to submit original, high-quality research papers related to multi-modal affective and social behavior analysis and synthesis in XR. Relevant topics include, but are not limited to:</p>

<ul>
<li>Analysis and synthesis of multi-modal social and affective cues in XR</li>
<li>Data-driven expressive character animation (e.g., face, gaze, gestures)</li>
<li>AI algorithms for modeling social interactions with human- and AI-driven virtual humans</li>
<li>Machine learning for dyadic and multi-party interactions</li>
<li>Generating diverse, personalized, and style-based body motions</li>
<li>Music-driven animation (e.g., dance, instrument playing)</li>
<li>Multi-modal data collection and annotation in and for XR (e.g., using VR/AR headsets, microphones, motion capture devices, and 4D scanners)</li>
<li>Efficient and novel machine learning methods (e.g., transfer learning, self-supervised and few-shot learning, generative and graph models)</li>
<li>Subjective and objective analysis of data-driven algorithms for XR</li>
<li>Applications in healthcare, education, and entertainment (e.g., sign language)</li>
</ul>

<p><b>Important Dates</b></p>

<ul>
<li>Submission deadline: January 9, 2023 (Anywhere on Earth)</li>
<li>Notifications: January 20, 2023</li>
<li>Camera-ready deadline: January 27, 2023</li>
<li>Conference dates: March 25-29, 2023</li>
<li>Workshop date: March 25, 2023 (Shanghai time TBD)</li>
</ul>

<p><b>Instructions for Submissions</b></p>

<p>Authors are invited to submit one of the following:</p>

<ul>
<li>Research paper: 4-6 pages + references</li>
<li>Work-in-progress paper: 2-3 pages + references</li>
</ul>

<p><b>Organizers:</b></p>
style="margin-bottom:0in;color:rgb(0,0,0);font-family:Calibri;margin-top:0in"><li class="MsoNormal" style="margin:11pt 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Zerrin Yumak (Utrecht University, The Netherlands)</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Funda Durupinar (University of Massachusetts Boston, USA)</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Oya Celiktutan (King’s College London, UK)</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Pablo Cesar (CWI and TU Delft, The Netherlands)</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Aniket Bera (Purdue University, USA)</span></li><li class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:10pt;font-family:Calibri,sans-serif;text-align:justify;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Mar Gonzalez-Franco (Google Labs, USA)</span></li></ul></div>