[visionlist] Postdoctoral Research Associate Positions in Data-Driven Character Animation

Taku Komura tkomura at ed.ac.uk
Mon Oct 1 12:58:46 -05 2018


The School of Informatics, University of Edinburgh invites applications for
two Research Associate positions on a project with Facebook Reality Labs:
one on animating hand-object interactions and the other on animating
human-human interactions. Each is a fully funded research position for
three years, with the contract renewed annually. The successful candidates
will be supervised by Dr Taku Komura (University of Edinburgh) and Facebook
Reality Labs.

The objective of the project is to investigate how neural networks can
learn representations for human-object grasping and human-human
interactions, for use in virtual reality and computer graphics
applications.

These positions are full-time and available from 1st November 2018. The
project duration is 36 months, with the contract renewed every 12 months.

To apply please include:

   - a brief statement of research interests (describing how past
     experience and future plans fit with the advertised position)
   - a complete CV, including a list of publications
   - the names and email addresses of two references

The closing date is 15th October 2018.
Salary scale: UE07: £32,548 - £38,833 per annum
Informal enquiries can be made to Taku Komura (tkomura at ed.ac.uk).

Application Procedure

All applicants should apply online by accessing the link below and
clicking the apply button at the bottom of the page.

https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=045477

The application process is quick and easy to follow, and you will receive
email confirmation of safe receipt of your application. The online system
allows you to submit a CV and other attachments.

The closing date is 5pm on 29th October.

Project Information

Animating Hand-Object Interactions

Research Purpose:
Automatically generating animations of characters grasping and
manipulating 3D objects, given each object's geometry and label and the
action required of the character.

The research result will enable two types of applications in VR:

   1. The animation model can be used as a strong prior for tracking
      virtual humans interacting with virtual or physical objects. Given
      low-dimensional sensor input or tracking artifacts, the animation
      model can be used to fill in missing details and produce
      high-fidelity output.

   2. The motion synthesis technique can be used to animate virtual
      assistants or companions that interact autonomously with human
      users in VR in social, productivity, or entertainment activities.

Research Description:
We will use a data-driven approach for achieving this task. More
specifically, we will capture the motion of a person grasping and
manipulating everyday objects, including mugs, vases, bags, chairs,
tables, and boxes, using an optical motion capture system; the motion of
the objects will also be tracked. The collected data will be used to
generate synthetic depth images, which correspond to what the character
sees during the action. An end-to-end deep neural network scheme will be
established in which the motion of the character at every frame is
computed from the state of the character and the object, as captured by
motion sensors or image sensors.
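
To make the frame-by-frame formulation concrete, a minimal sketch of such
a network is given below in PyTorch. This is an illustration only, not
the project's actual architecture: the state dimensions, layer sizes, and
names (FramePosePredictor, char_dim, obj_dim) are placeholder assumptions.

import torch
import torch.nn as nn

# Minimal sketch: predict the character's pose at the next frame from the
# current character state and an encoding of the object state (which could
# come from motion sensors or features of a synthetic depth image).
class FramePosePredictor(nn.Module):
    def __init__(self, char_dim=63, obj_dim=32, hidden=256):
        super().__init__()
        self.obj_encoder = nn.Sequential(
            nn.Linear(obj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.pose_net = nn.Sequential(
            nn.Linear(char_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, char_dim),
        )

    def forward(self, char_state, obj_state):
        # Condition the pose update on the encoded object state.
        z = self.obj_encoder(obj_state)
        return self.pose_net(torch.cat([char_state, z], dim=-1))

# At run time the model is rolled out autoregressively: each predicted
# pose becomes the character state fed back in at the next frame.
model = FramePosePredictor()
char_state = torch.zeros(1, 63)  # e.g. 21 joints x 3 rotational DoF
obj_state = torch.randn(1, 32)   # placeholder object feature vector
for _ in range(10):
    char_state = model(char_state, obj_state)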

The main research tasks will include (a) mapping low-dimensional or
low-quality input to high-fidelity output, for both the hands and the
body, during object manipulation, (b) predicting hand manipulation
motions from initial states, covering cluttered environments, novel
objects, and two-handed manipulation, and (c) extending task (b) to the
full body, also exploring strategic manipulations.

Animating Human-Human Interactions

Research Purpose: Developing a statistical human model in which
characters can talk to one another or have physical interactions such as
shaking hands, dancing, and playing sports.

Research Description:
The purpose of this project is to develop responsive character models
that can interact with each other, or with a real human, in real time.
Such a model will be useful for controlling a virtual character that
interacts with users wearing VR headsets, or for animating the
interactions between two virtual characters.

We will target two types of interactions: (a) two persons talking to each
other, and (b) two persons conducting physical interactions, such as
shaking hands, dancing and playing sports.

We will adopt a data-driven approach for achieving this task. The motions
of two persons interacting with each other will be captured using the
optical motion capture system in the School of Informatics, University of
Edinburgh. For the conversation task, we will also record the subjects'
voices. These data will be used to train a statistical model based on
deep neural networks. To increase the amount of data, we will also
perform data augmentation, editing the captured motions while preserving
their constraints and spatial relations.
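
As a concrete, if simplified, example of such an edit (an illustration,
not necessarily the augmentation the project will use), the sketch below
mirrors a captured two-person clip across the x = 0 plane and relabels
left/right joints. Because both characters are reflected together, the
spatial relations between them, such as clasped hands, are preserved
exactly. The joint list and array shapes are placeholder assumptions.

import numpy as np

# Hypothetical joint order; left/right pairs are swapped after mirroring
# so the reflected left hand is relabelled as the right hand, and so on.
JOINTS = ["hips", "head", "l_hand", "r_hand", "l_foot", "r_foot"]
SWAP = [0, 1, 3, 2, 5, 4]  # index permutation exchanging l_/r_ joints

def mirror_clip(clip):
    """Mirror a clip of shape (frames, joints, 3) across the x = 0 plane."""
    mirrored = clip.copy()
    mirrored[..., 0] *= -1.0     # reflect world x-coordinates
    return mirrored[:, SWAP, :]  # relabel left/right joints

# Mirror both interacting characters together: reflection is an isometry,
# so every inter-character joint distance is preserved (up to the
# left/right relabelling).
person_a = np.random.randn(120, len(JOINTS), 3)  # 120-frame dummy clip
person_b = np.random.randn(120, len(JOINTS), 3)
aug_a, aug_b = mirror_clip(person_a), mirror_clip(person_b)

d_before = np.linalg.norm(person_a - person_b, axis=-1)
d_after = np.linalg.norm(aug_a - aug_b, axis=-1)
assert np.allclose(d_after, d_before[:, SWAP])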

