<div dir="ltr"><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">[apologies if you received multiple copies]<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">------------------------------<wbr>------------------------------<wbr>-------------------------<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Call for papers<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">First Workshop on Large scale Emotion Recognition and Analysis (<span class="gmail-il">LERA</span>),<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">IEEE Automatic Faces & Gesture Recognition 2018, Xi’an, China.<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt"><u></u> <u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Extended Paper Submission Deadline: 28<sup>th</sup> January, 2018<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt"> <u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt"><a href="https://sites.google.com/view/lera2018" target="_blank">https://sites.google.com/view/<wbr>lera2018</a><u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">------------------------------<wbr>------------------------------<wbr>-------------------------<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">With the advancement in social computing, multimedia, and sensing technology, the amount of emotionally relevant data has grown enormously. It becomes crucial for the affective computing community to develop new methods for understanding emotion conveyed by the media and the emotion felt by the user at a large scale. This workshop invites researchers to submit their original work proposing methods to create data and new methodologies for large-scale analysis. Much development has been observed in the computer vision community after large-scale databases such as the ImageNet and MS COCO have been released. The first <span class="gmail-il">LERA</span> workshop at FG18 aims to transfer current research focus on small-scale, lab based environment to real-world, large-scale corpus. <u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt"> <u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Topics for the workshop include but are not limited to:<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">1. Large scale data collection and annotation<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">2. Large scale emotion recognition in the wild<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">3. Big data approaches for emotion recognition<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">4. Face tracking and affect analysis in videos<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">5. Group-level emotion recognition<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">6. 
Fusion techniques for audio-visual/physiological signals<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">7. Localization & identification of salient affect signals<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">8. Applications in education, entertainment & healthcare<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt"> <u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Timeline:<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Paper submission deadline: 28<sup>th</sup> January (Extended!)<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Paper acceptance notification: 7th February 2018<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Camera ready deadline: 15th February 2018<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt"> <u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Organizers:<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Abhinav Dhall, Indian Institute of Technology, Ropar<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Yelin Kim, State University of New York, Albany<u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="font-size:11pt">Qiang Ji, Rensselaer Polytechnic Institute</span></p><div><br></div><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div style="font-size:small">Abhinav Dhall, PhD<div>Assistant Professor,</div><div><span style="font-size:12.8px">Indian Institute of Technology, Ropar</span></div></div><div style="font-size:small"><span style="color:rgb(68,68,68);font-family:Roboto,Helvetica,Arial,sans-serif;font-size:13px">Webpage: <a href="https://goo.gl/5LrRB7" style="color:rgb(17,85,204)" target="_blank">https://goo.gl/5LrRB7</a> </span><br></div><div style="font-size:small"><span style="color:rgb(68,68,68);font-family:Roboto,Helvetica,Arial,sans-serif;font-size:13px">Google Scholar: </span><span style="color:rgb(68,68,68);font-family:Roboto,Helvetica,Arial,sans-serif;font-size:13px"><a href="https://goo.gl/iDwNTx" style="color:rgb(17,85,204)" target="_blank">https://goo.gl/iDwNTx</a> </span></div></div></div></div></div></div></div>