<div dir="ltr">

<p>Dear all,</p>
<p>I am Harish, a post-doctoral fellow in Dr SP Arun's experimental vision group at the Centre for Neuroscience, Indian Institute of Science. I'm posting this to get feedback from researchers who have tried automated eye-gaze, head-pose, or body-pose tracking of freely moving non-human primates.</p>
<p>In our lab we are trying to set up eye tracking in monkeys without any head restraints. Our plan is to have a behavioural arena where the animal is not head-fixed and can come up to a touch screen and perform simple tasks in return for juice rewards. Since the animals are not head-fixed, the eye tracking needs to be done in a manner that can handle changes in body and head pose. We have been evaluating a few commercial eye-tracking systems, but find that the trackers have difficulty finding the face and eyes. We would appreciate your input on the following questions:</p>
<p>1. Is there a good eye-tracking system that already has macaque face appearance templates built in?</p>
<p>2. Are there any novel ways of placing the screen and tracker that result in better eye tracking? We have tried placing trackers below the screen and at various distances from the animal.</p>
<p>3. Are there multi-camera eye-tracker systems that we can set up from different viewpoints, so that one or more cameras always has a clear view of the animal?</p>
<p>4. Do these systems have hardware inputs for behavioural event markers and analog/digital outputs of eye-gaze data, so that we can sync them with our neural data acquisition?</p>
<p>Best,<br>Harish</p>
</div>
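<p>P.S. To make question 4 concrete: what we have in mind is that the same event markers (e.g. TTL pulses) would be timestamped on both the eye tracker and the neural acquisition system, so that a simple linear clock model (offset plus drift) can map tracker time onto the neural clock. A minimal sketch of that alignment, with all names and numbers hypothetical:</p>

```python
# Hypothetical sketch: align eye-tracker timestamps to the neural-acquisition
# clock using event markers recorded on both systems. A linear fit
# (slope = clock drift, intercept = clock offset) maps one clock to the other.
import numpy as np

def map_tracker_to_neural(tracker_event_t, neural_event_t, tracker_sample_t):
    """Fit tracker_time -> neural_time from shared event markers,
    then convert eye-sample timestamps onto the neural clock."""
    slope, intercept = np.polyfit(tracker_event_t, neural_event_t, 1)
    return slope * np.asarray(tracker_sample_t) + intercept

# Made-up example: the tracker clock runs 0.1% fast and starts 2 s late
# relative to the neural system, and four markers were seen on both.
tracker_events = np.array([1.0, 11.0, 21.0, 31.0])
neural_events = (tracker_events - 2.0) / 1.001

# Remap two eye-sample timestamps onto the neural clock.
eye_samples_neural = map_tracker_to_neural(tracker_events, neural_events,
                                           np.array([5.0, 15.0]))
```

<p>With enough markers spread across a session, the fit also absorbs slow clock drift, which matters for long recordings.</p>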