[visionlist] CFP: Workshop on Active Vision and perception in Human(-Robot) Collaboration

Foulsham, Tom foulsham at essex.ac.uk
Thu Apr 16 08:12:48 -04 2020

1st Call for Papers

AVHRC 2020 - Active Vision and perception in Human(-Robot)
Collaboration Workshop


Key Dates
Submission opening: May 1, 2020
Submission deadline: June 25, 2020
Notification: July 15, 2020
Camera ready: July 30, 2020
Workshop: August 31, 2020

Workshop website: under construction.

Submission website: not available yet.

All accepted papers will be published on the workshop website.
Selected papers will be published in a dedicated special issue of
a high quality open access journal, e.g. Frontiers in Neurorobotics.
A best paper award will be announced, offering a full publication
fee waiver.

Submission Guidelines

Two types of submissions are invited: long papers (6 to 8 pages plus
references) and short papers (2 to 4 pages plus references). In both
cases there is no page limit for the bibliography/references section.
All submissions should be formatted according to the standard IEEE
RAS Formatting Instructions and Templates available at
http://ras.papercept.net/conferences/support/tex.php. Authors are required
to submit their papers electronically in PDF format.
At least one author of each accepted paper must register for the workshop.
For any questions regarding paper submission, please email us:
dimitri.ognibene at gmail.com
Papers will be presented in short talks and/or poster spotlights.
The organisers would like to reassure authors that, independently of any
potential restrictions due to the COVID-19 situation, it will be possible
to present all accepted papers and to attend the keynotes, either in
person or remotely, following the same rules and the same procedure as
the main conference. At what is a difficult time for many people, we look
forward to sharing our work with the community despite any restrictions,
and we invite interested colleagues to join us.

Topics of Interest
* Active perception for intention and action prediction
* Activity and action recognition in the wild
* Active perception for social interaction
* Active perception for (collaborative) navigation
* Human-robot collaboration in unstructured environments
* Human-robot collaboration in presence of sensory limits
* Joint human-robot search and exploration
* Testing setup for social perception in real or virtual environments
* Setup for transferring active perception skills from humans to robots
* Machine learning methods for active social perception
* Benchmarking and quantitative evaluation with human subject experiments
* Gaze-based factors for intuitive human-robot collaboration
* Active perception modelling for social interaction and collaboration
* Head-mounted eye tracking and gaze estimation during social interaction
* Estimation and guidance of partner situation awareness and attentional
  state in human-robot collaboration
* Multimodal social perception
* Adaptive social perception
* Egocentric vision in social interaction
* Explicit and implicit sensorimotor communication
* Social attention
* Natural human-robot (machine) interaction
* Collaborative exploration
* Joint attention
* Multimodal social attention
* Attentive activity recognition
* Belief and mental state attribution in robots

Invited Speakers

* Giulio Sandini, Italian Institute of Technology, Italy
* Fiora Pirri, Università di Roma "La Sapienza", Italy
* Tom Foulsham, University of Essex, UK
* Angelo Cangelosi, University of Manchester, UK
* David Rudrauf, University of Geneva, Switzerland
* Giuseppe Boccignone, Università di Milano, Italy

Humans naturally interact and collaborate in unstructured social
environments, which produce an overwhelming amount of information and may
hide behaviourally relevant variables. Finding the underlying design
principles that allow humans to adaptively find and select relevant
information is important for Robotics, but also for other fields such as
Cognitive Science, Computational Neuroscience, Interaction Design, and
Computer Vision.

Current solutions address specific domains, e.g. autonomous cars, and
usually employ over-redundant, expensive, and computationally demanding
sensory systems that attempt to cover the wide set of environmental
conditions with which the systems may have to deal. Adaptive control of
the sensors and of the perception process is a key solution found by
nature to cope with such problems, as shown by the foveal anatomy of the
eye and its high mobility.

Alongside this interest in "active" vision, collaborative robotics has
recently progressed to human-robot interaction in real manufacturing
processes. Measuring and modelling task-specific gaze behaviours seems to
be essential for smooth human-robot interaction. Indeed, anticipatory
control in human-in-the-loop architectures, which can enable robots to
proactively collaborate with humans, relies heavily on observing the gaze
and action patterns of the human partner.

We would like to solicit manuscripts that present novel computational and
robotic models, theories, and experimental results, as well as reviews
relevant to these topics. Submissions will further our understanding of
how humans actively control their perception during social interaction,
in which conditions they fail, and how these insights may enable natural
interaction between humans and artificial systems in non-trivial
conditions.

Main Organiser
Dimitri Ognibene, University of Essex, UK & University of Milano-Bicocca,
Italy

Communication Organisers
Francesco Rea, Istituto Italiano di Tecnologia, Italy
Francesca Bianco, University of Essex, UK
Vito Trianni, ISTC-CNR, Italy
Ayse Kucukyilmaz, University of Nottingham, UK

Review Organisers
Angela Faragasso, The University of Tokyo, Japan
Manuela Chessa, University of Genova, Italy
Fabio Solari, University of Genova, Italy
David Rudrauf, University of Geneva, Switzerland
Yan Wu, Robotics Department, Institute for Infocomm Research, A*STAR,
Singapore

Publication Organisers
Fiora Pirri, Sapienza - University of Rome, Italy
Letizia Marchegiani, Aalborg University, Denmark
Tom Foulsham, University of Essex, UK
Giovanni Maria Farinella, University of Catania, Italy
