[visionlist] [Meetings] CfPs IEEE VR 2023 Workshop MASSXR: Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality (Submission deadline: January 9)
Oya Celiktutan
oya.celiktutan at gmail.com
Mon Dec 19 10:52:35 -04 2022
*IEEE VR 2023 Workshop on Multi-modal Affective and Social Behavior
Analysis and Synthesis in Extended Reality (MASSXR)*
*Location and date*
The IEEE-MASSXR workshop will take place at the 30th IEEE Conference on
Virtual Reality and 3D User Interfaces (IEEE VR 2023), which will be held
March 25-29, 2023, in Shanghai, China.
IEEE-MASSXR is a half-day workshop and will be held online on March 25,
2023. For more information, please visit the workshop’s website:
https://sites.google.com/view/massxrworkshop2023
*Description*
With recent advances in immersive technologies, such as realistic
digital humans and off-the-shelf XR devices that can capture users’
speech, faces, hands, and bodies, together with the development of
sophisticated data-driven AI algorithms, there is great potential for the
automatic analysis and synthesis of social and affective cues in XR.
Although affective and social signal understanding and synthesis have
been studied in other fields (e.g., human-robot interaction, intelligent
virtual agents, and computer vision), they have not yet been explored
adequately in Virtual and Augmented Reality. This demands
extended-reality-specific theoretical and methodological foundations.
In particular, this workshop focuses on the following research questions:
- How can we sense the user’s affective and social states using sensors
available in XR?
- How can we collect users’ interaction data in immersive situations?
- How can we generate affective and social cues for digital
humans/avatars in immersive interactions, conveyed through dialogue,
voice, and non-verbal behaviors?
- How can we develop systematic methodologies and techniques for creating
plausible, trustworthy, personalized behaviors for social and affective
interaction in XR?
The objective of this workshop on *Multi-modal Affective and Social
Behavior Analysis and Synthesis in Extended Reality* is to bring together
researchers and practitioners working in social and affective computing
with those working on 3D computer vision and computer graphics/animation,
and to discuss the current state, future directions, opportunities, and
challenges. The workshop aims to establish a new platform for the
development of immersive embodied intelligence at the intersection of
Artificial Intelligence (AI) and Extended Reality (XR). We expect that the
workshop will give researchers an opportunity to develop new techniques
and will lead to new collaborations among the participants.
*Scope*
This workshop invites researchers to submit original, high-quality research
papers related to multi-modal affective and social behavior analysis and
synthesis in XR. Relevant topics include, but are not limited to:
- Analysis and synthesis of multi-modal social and affective cues in XR
- Data-driven expressive character animation (e.g., face, gaze, gestures)
- AI algorithms for modeling social interactions with human- and
AI-driven virtual humans
- Machine learning for dyadic and multi-party interactions
- Generating diverse, personalized, and style-based body motions
- Music-driven animation (e.g., dance, instrument playing)
- Multi-modal data collection and annotation in and for XR (e.g., using
VR/AR headsets, microphones, motion capture devices, and 4D scanners)
- Efficient and novel machine learning methods (e.g., transfer learning,
self-supervised and few-shot learning, generative and graph models)
- Subjective and objective analysis of data-driven algorithms for XR
- Applications in healthcare, education, and entertainment (e.g., sign
language)
*Important Dates*
- Submission deadline: January 9, 2023 (Anywhere on Earth)
- Notifications: January 20, 2023
- Camera-ready deadline: January 27, 2023
- Conference date: March 25-29, 2023
- Workshop date: March 25, 2023 (Shanghai time TBD)
*Instructions for Submissions*
Authors are invited to submit one of the following:
- Research paper: 4-6 pages + references
- Work-in-progress paper: 2-3 pages + references
*Organizers:*
- Zerrin Yumak (Utrecht University, The Netherlands)
- Funda Durupinar (University of Massachusetts Boston, USA)
- Oya Celiktutan (King’s College London, UK)
- Pablo Cesar (CWI and TU Delft, The Netherlands)
- Aniket Bera (Purdue University, USA)
- Mar Gonzalez-Franco (Google Labs, USA)