[visionlist] Updated CFP: Workshop on Active Vision and perception in Human(-Robot) Collaboration

Foulsham, Tom foulsham at essex.ac.uk
Tue Jul 14 10:31:11 -04 2020


Please see below for details of the workshop and the linked special issue on active human and robot vision.

Extended Deadline and Final Call for Papers

AVHRC 2020 - Active Vision and perception in Human(-Robot)
Collaboration Workshop

@RO-MAN 2020 - THE 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND
HUMAN INTERACTIVE COMMUNICATION
NAPLES, ITALY, FROM AUGUST 31 TO SEPTEMBER 4, 2020.


** NEWS **

The transformation of the workshop into a virtual event allows greater flexibility. The deadline for contributions to AVHRC 2020 - Active Vision and perception in Human(-Robot) Collaboration Workshop - has therefore been extended to July 17, to allow for more polished contributions.

** PREVIOUS NEWS **

Due to the international COVID-19 crisis, the AVHRC 2020 workshop and the main conference RO-MAN 2020 will be completely virtual events. The exact schedule has not yet been decided, but changes from the original schedule will be limited. More information will be available soon at http://ro-man2020.unina.it/.


Key Dates
=========

Submission opening: May 1, 2020

Submission deadline: July 17, 2020 (extended from June 25, 2020)

Notification: August 10, 2020 (was July 15, 2020)

Camera ready: August 20, 2020 (was July 30, 2020)

Workshop: August 31, 2020


**Workshop website:**
https://www.essex.ac.uk/departments/computer-science-and-electronic-engineering/events/avhrc-2020

**Submission website:**
https://easychair.org/conferences/?conf=avhrc2020

Publication
============

All accepted papers will be published on the workshop website.

Selected papers will be published with a discounted fee in a dedicated topic of Frontiers in Neurorobotics: https://www.frontiersin.org/research-topics/13958/active-vision-and-perception-in-human-robot-collaboration

A best paper award will be given, with a full publication fee waiver.


Submission Guidelines
=====================
Two types of submissions are invited to the workshop: long papers
(6 to 8 pages + n reference pages) and short papers (2 to 4 pages + n
reference pages). In both cases there is no page limit for the
bibliography/references (n pages) section.
All submissions should be formatted according to the standard IEEE
RAS Formatting Instructions and Templates available at
http://ras.papercept.net/conferences/support/tex.php. Authors are required
to submit their papers electronically in PDF format.
At least one author of each accepted paper must register for the
workshop.
For any questions regarding paper submission, please email us:
dimitri.ognibene at gmail.com

Presentation
==============
Papers will be presented in short talks and/or poster spotlights.
The organisers would like to reassure authors that, independently of any
potential restrictions due to the COVID-19 situation, it will be possible
to present all accepted papers and to attend the keynotes, either in person
or remotely, following the same rules and procedures as the main
conference. At what is a difficult time for many people, we look forward to
sharing our work with the community despite any restrictions, and we invite
interested colleagues to join us. More information can be found here:
http://ro-man2020.unina.it/announcements.php

Topics
========
• Active perception for intention and action prediction
• Activity and action recognition in the wild
• Active perception for social interaction
• Active perception for (collaborative) navigation
• Human-robot collaboration in unstructured environments
• Human-robot collaboration in presence of sensory limits
• Joint human-robot search and exploration
• Testing setup for social perception in real or virtual environments
• Setup for transferring active perception skills from humans to robots
• Machine learning methods for active social perception
• Benchmarking and quantitative evaluation with human subject experiments
• Gaze-based factors for intuitive human-robot collaboration
• Active perception modelling for social interaction and collaboration
• Head-mounted eye tracking and gaze estimation during social interaction
• Estimation and guidance of partner situation awareness and attentional state in human-robot collaboration
• Multimodal social perception
• Adaptive social perception
• Egocentric vision in social interaction
• Explicit and implicit sensorimotor communication
• Social attention
• Natural human-robot (machine) interaction
• Collaborative exploration
• Joint attention
• Multimodal social attention
• Attentive activity recognition
• Belief and mental state attribution in robots

Invited Speakers
================

* Giulio Sandini, Italian Institute of Technology, Italy
* Fiora Pirri, Università di Roma "La Sapienza", Italy
* Tom Foulsham, University of Essex, UK
* Angelo Cangelosi, University of Manchester, UK
* David Rudrauf, University of Geneva, Switzerland
* Giuseppe Boccignone, Università di Milano, Italy

Background
=============
Humans naturally interact and collaborate in unstructured social
environments, which produce an overwhelming amount of information and may
yet hide behaviourally relevant variables. Finding the underlying design
principles that allow humans to adaptively find and select relevant
information is important for Robotics, but also for other fields such as
Cognitive Science, Computational Neuroscience, Interaction Design, and
Computer Vision.

Current solutions address specific domains, e.g. autonomous cars,
and usually employ over-redundant, expensive, and computationally demanding
sensory systems that attempt to cover the wide set of environmental
conditions the systems may have to deal with. Adaptive control of the
sensors and of the perception process is a key solution found by nature to
cope with such problems, as shown by the foveal anatomy of the eye and its
high mobility.

Alongside this interest in "active" vision, collaborative robotics
has recently progressed to human-robot interaction in real manufacturing
processes. Measuring and modelling task-specific gaze behaviours seems to
be essential for smooth human-robot interaction. Indeed, anticipatory
control for human-in-the-loop architectures, which can enable robots to
proactively collaborate with humans, relies heavily on observing the gaze
and action patterns of the human partner.

We would like to solicit manuscripts that present novel
computational and robotic models, theories, and experimental results, as
well as reviews relevant to these topics. Submissions will further our
understanding of how humans actively control their perception during
social interaction and in which conditions they fail, and how these
insights may enable natural interaction between humans and artificial
systems in non-trivial conditions.

Organizers
==================
Main organiser
Dimitri Ognibene, University of Essex, UK & University of Milano-Bicocca, Italy

Communication Organisers
Francesco Rea, Istituto Italiano di Tecnologia, Italy
Francesca Bianco, University of Essex, UK
Vito Trianni, ISTC-CNR, Italy
Ayse Kucukyilmaz, University of Nottingham, UK

Review Organisers
Angela Faragasso,  The University of Tokyo, Japan
Manuela Chessa, University of Genova, Italy
Fabio Solari, University of Genova, Italy
David Rudrauf, University of Geneva, Switzerland
Yan Wu, Robotics Department, Institute for Infocomm Research, A*STAR, Singapore

Publication Organisers
Fiora Pirri, Sapienza - University of Rome, Italy
Letizia Marchegiani, Aalborg University, Denmark
Tom Foulsham, University of Essex, UK
Giovanni Maria Farinella, University of Catania, Italy


Sponsor
=============================


Frontiers in Neurorobotics
