<div dir="ltr">Hi Harish, <div><br></div><div>I could offer my thoughts since I am working on head-free portable eye tracking.<b> Note:</b> I have never worked with monkeys so I do not know if they are opposed to bodily attachments such as wristbands and headbands etc.</div><div><br></div><div>A potential (and cheapest IMO) solution would be to retrofit a cricket helmet + ratchet add-on and placing the IR based cameras on the insides on the helmet. This will ensure that the monkey won't scratch the eye tracking cameras and that it remains snug. You could fit an IMU on the inside of the helmet - something like this:</div><div><br></div><div><div><img src="cid:ii_jjlr4mil0" alt="image.png" width="562" height="307"><br></div>Please ignore the Stereo cameras, this is a picture from my presentation (<b>Note:</b> no self promotion intended).</div><div><br></div><div>Now, the Pupil labs has excellent open source software that can provide excellent pupil detection. You could either use their cameras or make your own. <a href="https://pupil-labs.com/store/">https://pupil-labs.com/store/</a> </div><div>The best part about the pupil hardware is that their design is very easy to work with or break open and use the eye + scene cameras separately. <b>(Note:</b> not promoting Pupil labs, this is simply a suggestion towards your project). You can easily detach the scene and eye cameras and place them into the cricket helmet and their software will do the rest.</div><div><br></div><div>By extracting the head pose from the IMU and Gaze from the Eye Tracker, you could get a Gaze-In-World vector, however, it is in the nature of IMUs to drift. Hence, the GIW vector would require occasional correction (every 10 mins or so) - which brings me to my last point.</div><div><br></div><div>I'm assuming you'd want accurate gaze tracking on the screen (in pixel coordinates). You could easily display fiducial markers at the corners of the screens and find the mapping between screen coordinates and Point of Regard values on the Scene camera. This mapping (should be a 3x3 transformation matrix) can also be used to estimate head position in a <b>3D space relative</b> to the screen location. Everytime a monkey arrives near a screen, a program can identify these markers and estimate head position in 3D space and automatically align the IMU - this last part can be a little difficult to implement though! 
If you want body tracking as well, then a second IMU, a little below the nape, would be a good idea.

If you have the ₹ for it, you could also fit the monkeys in a tracking suit and use a motion-capture system such as PhaseSpace (http://phasespace.com/) - we have one in our lab and it works great!

I hope this helps,

Rakshit

P.S. - You could also create your own MoCap system - that would be a fun project for an engineering graduate - using patterns of IR emitters and IR cameras around the room to triangulate the 3D position of a marker! A toy sketch of the triangulation step follows below.
P.P.S. - There are many open-source implementations of skeleton fitting using a 3D camera, but I don't know whether they can be adapted to monkeys.
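To make the triangulation idea concrete, here is a toy sketch for one marker seen by two calibrated cameras; the intrinsics and camera placement are made-up numbers, and a real system would need proper multi-camera calibration plus IR blob detection.

import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # toy shared intrinsics

# Camera 1 at the origin; camera 2 shifted 1 m along +X, both looking down +Z.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Pixel position of the same IR blob in each camera (2x1 arrays, toy values).
pt1 = np.array([[320.0], [240.0]])
pt2 = np.array([[160.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                 # 3D point in camera-1 coordinates
print(X)                                       # roughly [0, 0, 5] metres here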
<font color="#000000"><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">Dear all</span><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"> I am Harish, a post-doctoral fellow </span><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">in Dr SP Arun's experimental vision group at the Centre for Neuroscience, Indian Institute of Science. I'm posting this to get feedback from researchers who have<span> </span></span><span class="m_2640035304614595797gmail-im" style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">tried automated eye-gaze/head-pose/body-pose tracking of freely moving<span> </span>non-human primates.<br><br></span><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">In our lab we are trying to setup eye tracking in monkeys without any<span> </span></span><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">head restraints. Our plan is to have a behavioural arena where the<span> </span></span><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span class="m_2640035304614595797gmail-im" style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">animal is not head-fixed and can come up to a touch screen and perform<span> </span><br></span><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">simple tasks in return for juice rewards. Since the animals are not<span> </span></span><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span class="m_2640035304614595797gmail-im" style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">head-fixed, the eye-tracking needs to be done in a manner that can<span> </span><br>handle change in body and head pose. We have been evaluating a few<span> </span><br>commercial eye-tracking systems but find that the trackers have<span> </span><br>difficulty in finding the face/eyes. It will be nice to have your inputs<span> </span><br>on the following issues,<br><br></span><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">1. Is there a good eye tracking system that already has macaque face<span> </span></span><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">appearance templates bulit in?</span><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><br style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">2. 
> 2. Are there any novel ways of placing the screen and tracker that result in better eye tracking? We have tried various ways of placing trackers below the screen and at various distances from the animal.
>
> 3. Are there multi-camera eye-tracker systems that we can set up from different viewpoints so that one or more always has a clear view of the animal?
>
> 4. Do these systems have hardware input for behavioural event markers and analog/digital outputs of eye-gaze data, so that we can sync them with our neural data acquisition?
>
> best,
> Harish
-- 
Rakshit Kothari
Research & Teaching Assistant
Perception for Action and Motion lab (PerForM)
Center for Imaging Science
Rochester Institute of Technology