<div dir="ltr"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-page" title="Page 1" style="color:rgb(0,0,0);font-family:-webkit-standard"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-section"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-layoutArea"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-column"><p><span style="font-size:11pt;font-family:ArialMT">The School of Informatics, University of Edinburgh invites applications for a Research Associate position to work on a project with Facebook Reality Labs. </span><span style="font-size:11pt;font-family:Gautami"></span><span style="font-size:11pt;font-family:Arial;font-weight:700">Two positions </span><span style="font-size:11pt;font-family:Gautami"></span><span style="font-size:11pt;font-family:ArialMT">are available: one about </span><span style="font-size:11pt;font-family:Gautami"></span><span style="font-size:11pt;font-family:Arial;font-weight:700">animating hand-object interactions</span><span style="font-size:11pt;font-family:Gautami"> </span><span style="font-size:11pt;font-family:ArialMT">and another about </span><span style="font-size:11pt;font-family:Gautami"></span><span style="font-size:11pt;font-family:Arial;font-weight:700">animating human-human interactions</span><span style="font-size:11pt;font-family:Gautami"></span><span style="font-size:11pt;font-family:ArialMT">. This is 100% funded research position for three years with annual revision. The successful candidate will be supervised by Dr Taku Komura (University of Edinburgh) and Facebook Reality Labs.</span><br></p><p><span style="font-size:11pt;font-family:ArialMT">The objectives of the project are to investigate how to let neural networks learn representations that can be used for human-object grasping and human-human interactions for virtual reality and computer <span class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-il">graphics</span> applications.</span></p><p><span style="font-size:11pt;font-family:ArialMT">This position is full-time, available from 1</span><span style="font-size:7pt;font-family:ArialMT;vertical-align:5pt">s</span><span style="font-size:11pt;font-family:Gautami"> </span><span style="font-size:7pt;font-family:ArialMT;vertical-align:5pt">t</span><span style="font-size:7pt;font-family:Gautami;vertical-align:5pt"> </span><span style="font-size:11pt;font-family:ArialMT">November 2018. The project duration is 36 months. 
The contract will be renewed every 12 months.

To apply, please include:

● a brief statement of research interests (describing how past experience and future plans fit with the advertised position)
● a complete CV, including a list of publications
● the names and email addresses of two referees
<span style="font-family:verdana,tahoma,arial,sans-serif;font-size:10.6667px;background-color:rgb(255,255,250)"> £33,199 - £39,609</span> <br>Informal enquires can be made to Taku Komura </span><span style="font-size:11pt;font-family:Gautami"></span><span style="font-size:11pt;font-family:ArialMT;color:rgb(17,85,204)"><a href="mailto:tkomura@ed.ac.uk" target="_blank">tkomura@ed.ac.uk</a></span><span style="font-size:11pt;font-family:Gautami;color:rgb(17,85,204)"> </span></p></li></ul></div></div></div></div><p style="color:rgb(0,0,0);font-family:-webkit-standard"><span style="font-size:11pt;font-family:Arial;font-weight:700">Application Procedure</span></p><p style="color:rgb(0,0,0);font-family:-webkit-standard"><span style="font-size:11pt;font-family:ArialMT">All applicants should apply online by accessing the link below and clicking the button at the bottom of the website.</span></p><p><font color="#000000" face="ArialMT"><span style="font-size:14.6667px"><a href="https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=045477" target="_blank">https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=045477</a></span></font><br></p><p style="color:rgb(0,0,0);font-family:-webkit-standard"><span style="font-family:ArialMT;font-size:11pt">The application process is quick and easy to follow, and you will receive email confirmation of safe receipt of your application. The online system allows you to submit a CV and other attachments.</span><br></p><p style="color:rgb(0,0,0);font-family:-webkit-standard"><span style="font-size:11pt;font-family:ArialMT">The closing date is 5pm on 29th October.</span></p><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-page" title="Page 1" style="color:rgb(0,0,0);font-family:-webkit-standard"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-section"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-layoutArea"><div class="gmail-m_7876476027336898176m_-5594616285109752361gmail-m_1360597801192459986gmail-m_2046837218244755144m_-8922103553246932298gmail-column"><ol start="0" style="list-style-type:none"><li style="margin-left:15px"><p><span style="font-size:11pt;font-family:Arial;font-weight:700">Project information</span></p><p><span style="font-size:11pt;font-family:Arial;font-weight:700">Animating Hand-Object Interactions</span></p><p><span style="font-size:11pt;font-family:ArialMT">Research Purpose:<br>Automatically generating animation of characters to grasp and manipulate 3D objects, given its geometry, label and action required by the character.</span></p><p><span style="font-size:11pt;font-family:ArialMT">The research result will enable two types of applications in VR:</span></p></li><ol><li style="margin-left:15px;font-size:11pt;font-family:ArialMT"><p><span style="font-size:11pt">The animation model can be used as a strong prior for tracking virtual humans</span></p><p><span style="font-size:11pt">interacting with virtual or physical objects. 
Given low-dimensional sensor input or input with tracking artifacts, the animation model can be used to fill in missing details and produce high-fidelity output.

2. The motion synthesis technique can be used to animate virtual assistants or companions that interact autonomously with human users in VR during social, productivity, or entertainment activities.

Research Description:
We will use a data-driven approach for this task. More specifically, we will capture the motion of a person grasping and manipulating everyday objects, including mugs, vases, bags, chairs, tables, boxes, etc. The motion of the objects will also be tracked. An optical motion capture system will be used for this purpose. The collected data will be used to generate synthetic depth images corresponding to what the character sees during the action. An end-to-end deep neural network scheme will be established in which the motion of the character at every frame is computed from the state of the character and the object, as captured by motion sensors or image sensors.

The main research tasks will include (a) mapping low-dimensional or low-quality input to high-fidelity output, for both hands and the body, during object manipulation, (b) predicting hand manipulation motions from initial states, including cluttered environments, novel objects, and the use of both hands, and (c) extending (b) to the full body and exploring strategic manipulations.
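To make the end-to-end, per-frame scheme described above more concrete, the following is a minimal sketch of one possible model: a network that takes the character's current pose and a synthetic depth image of the object and predicts the pose at the next frame. All names, feature sizes, and layer choices here are illustrative assumptions made for this sketch, not the project's actual design.

```python
# Illustrative sketch only: a per-frame motion model in the spirit of the
# end-to-end scheme described above. All names, feature sizes, and layer
# choices are assumptions made for this example, not the project's design.
import torch
import torch.nn as nn

class FramePosePredictor(nn.Module):
    """Predicts the character's next-frame pose from its current state
    and a synthetic depth image of the object being manipulated."""

    def __init__(self, pose_dim=69, object_feat_dim=128):
        super().__init__()
        # Small CNN encoder for the synthetic depth image (1 x 64 x 64 assumed).
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, object_feat_dim), nn.ReLU(),
        )
        # MLP that fuses the current pose with the object features and
        # outputs the pose at the next frame.
        self.pose_net = nn.Sequential(
            nn.Linear(pose_dim + object_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, current_pose, depth_image):
        obj = self.depth_encoder(depth_image)       # (B, object_feat_dim)
        x = torch.cat([current_pose, obj], dim=-1)  # (B, pose_dim + feat)
        return self.pose_net(x)                     # next-frame pose

# At run time such a model would be rolled out autoregressively, feeding each
# predicted pose back in as the next frame's "current_pose".
```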
Animating Human-Human Interactions

Research Purpose:
Developing statistical human models that can talk to one another or engage in physical interactions such as shaking hands, dancing, and playing sports.

Research Description:
The purpose of this project is to develop responsive character models that can interact with one another, or with a real human, in real time. Such a model will be useful for controlling a virtual character that interacts with users wearing VR headsets, or for animating the interactions between two virtual characters.

We will target two types of interactions: (a) two persons talking to each other, and (b) two persons conducting physical interactions, such as shaking hands, dancing, and playing sports.

We will adopt a data-driven approach for this task. The motions of two persons interacting with each other will be captured using the optical motion capture system installed at the School of Informatics, University of Edinburgh. For the conversation task, we will also record the voices of the subjects. These data will be used to train a statistical model based on deep neural networks. To increase the amount of data, we will also perform data augmentation, in which the captured motions are edited while their constraints and spatial relations are preserved.
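As one deliberately simple instance of such constraint-preserving augmentation, the sketch below applies the same random rigid transform about the vertical axis to both captured characters, so the relative positions and distances between them are left unchanged. The array shapes and function names are assumptions made for this example; the augmentation used in the project could be considerably more sophisticated (for example, warping motions subject to contact constraints).

```python
# Illustrative sketch only: jointly transforming a captured two-person motion.
# Because both characters receive the same rigid motion, their relative
# positions and inter-character distances are preserved exactly.
import numpy as np

def augment_pair(motion_a, motion_b, rng=None):
    """motion_a, motion_b: arrays of shape (frames, joints, 3) in world
    space (y-up assumed). Returns a new, jointly transformed pair."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random yaw rotation
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    offset = np.array([rng.uniform(-1.0, 1.0), 0.0, rng.uniform(-1.0, 1.0)])

    def transform(motion):
        # Apply the same rotation and ground-plane translation to every joint.
        return motion @ rot.T + offset

    return transform(motion_a), transform(motion_b)
```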