[visionlist] Call for papers - 2nd Workshop on Generation of Human Face and Body Behavior (GHB 2025)
Stefano Berretti
stefano.berretti at unifi.it
Mon Jun 9 08:41:33 -05 2025
*2nd Workshop on Generation of Human Face and Body Behavior*, *GHB 2025*
*September 16th, 2025*
*https://sites.google.com/unifi.it/ghb2025/*
*In conjunction with ICIAP 2025, Sept. 15-19, 2025*
*Rome, Italy*
*CALL FOR PAPERS*
Human behavior, both facial and body, has been studied in detail (for
example, for expression/action classification and prediction), but few
works have explored the generation of novel behaviors. Generating novel
sequences of human facial expressions, talking heads, or body motion that
form a natural and plausible action with continuous and smooth temporal
dynamics is a challenging problem. These motions can simulate full-body
movement, such as gait; part-specific movement, such as playing the guitar
or making a phone call; or facial expressions, action units, or even mouth
movements when a person speaks or reads a text. With the advent of
powerful generative models such as Generative Adversarial Networks (GANs)
and diffusion models, novel data generation paradigms have become possible,
and these networks have proven powerful in many image generation tasks.
However, many issues remain to be solved, especially when passing from the
static to the dynamic case, and new research problems emerge. For example,
what are the appropriate architectures and loss functions to generate
dynamic facial expressions and actions, in 2D or in 3D? Which are the best
objective and subjective methods to evaluate generative models? Is it
possible to generate long dynamic sequences with natural transitions
between different expressions or actions?
We expect that generating synthetic and realistic static and dynamic human
data can have a big impact in several different contexts. A
straightforward outcome of developing such techniques is the generation of
an abundance and variety of new data that would otherwise be difficult,
very expensive, and time consuming to obtain from reality. Such data can
be essential in simulation, in virtual and augmented reality, and in
training more robust learning tools, to name a few. For example, we could
expect new applications in the game and movie industries, where fully
synthetic actors could be used in the near future without the need for
explicit modeling. We expect this workshop to help take a step in these
research directions, also focusing on new evaluation methodologies that
could make the assessment and comparison of generated data quantitative
rather than qualitative. With respect to this latter aspect, we expect the
public release of new benchmarks, especially in 3D, to help advance
research in this field. Finally, there is considerable debate, both in
society and in the scientific community, about the ethical implications of
generating synthetic and realistic human data. We also aim to have a
discussion on these social and ethical implications at the workshop.
*TOPICS*
The goal of this workshop is to provide contributions to this emerging
field of study. Topics of interest of this workshop include but are not
limited to:
- 2D and 3D static and dynamic content generation
- 2D and 3D facial expression generation
- Talking heads generation
- Action generation
- Behavior generation
- Human-human interaction generation
- Human-object interaction generation
- Evaluation of generative models
- New benchmarks
- Reducing ethical issues in human data generation
- Applications (data augmentation, simulation, VR/AR, medical, robotics)
*Important dates*
Paper submission deadline: *June 20th, 2025*
https://easychair.org/conferences?conf=ghb2025
Authors notification: *July 7th, 2025*
Camera ready deadline: *July 10th, 2025*
Workshop date and venue: *September 16th, 2025, Rome, Italy*