<div dir="ltr"><div dir="ltr"><table style="color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif;font-size:13px;width:1200px"><tbody><tr><td style="padding:0pt;background-color:rgb(216,240,239);text-align:center;vertical-align:top"><table style="width:1196px"><tbody><tr><td style="padding:0pt;background-color:rgb(255,255,255);vertical-align:top"><table style="width:1192px"><tbody><tr><td style="padding:0pt;background-color:rgb(86,191,181);vertical-align:middle"><p style="text-align:left;margin:0px 0pt 12px;max-width:650pt;line-height:1.4;font-size:11pt;padding-bottom:5pt"><span style="color:rgb(255,255,255)">   IEEE international workshop on in conjunction with <a href="http://www.icme2019.org/">2019 ICME</a></span></p><p style="text-align:left;margin:0px 0pt 12px;max-width:650pt;line-height:1.4;font-size:11pt;padding-bottom:5pt"><span style="font-size:16px"><strong><span style="color:rgb(255,255,255)">   2nd Workshop on <a href="https://web.northeastern.edu/smilelab/facesmm19/index.html">Faces in Multimedia</a> (FacesMM)</span></strong></span></p><p style="text-align:left;margin:0px 0pt 12px;max-width:650pt;line-height:1.4;font-size:11pt;padding-bottom:5pt"><span style="color:rgb(255,255,255)">   -- To Automatically Synthesize, Recognize, Understand Faces in the Wild</span></p></td></tr></tbody></table></td></tr></tbody></table></td></tr></tbody></table><h2 style="text-align:left;margin:7pt 0px 6pt;font-size:12pt;border-bottom:2px solid rgb(0,0,0);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;color:rgb(51,51,51);padding:4pt 0pt 6pt;font-family:verdana,sans-serif;border-radius:3px"><br></h2><h2 style="text-align:left;margin:7pt 0px 6pt;font-size:12pt;border-bottom:2px solid rgb(0,0,0);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;padding:4pt 0pt 6pt;font-family:verdana,sans-serif;border-radius:3px"><font color="#ff0000">Call For Papers</font></h2><p style="margin:0px 0pt 12px;max-width:650pt;line-height:1.4;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif"><span style="font-family:Lato">There has been remarkable advances in facial recognition technologies the past several years due to the rapid development of deep learning and large-scale, labeled face collections. Thus, there are now evermore challenging image and video collections to solve emerging problems in the fields of faces and multimedia. In parallel to face recognition, researchers continue to show an increasing interest in topic of face synthesis. Works have been done using imagery, videos, and various other modalities (e.g., hand sketches, 3D models, view-points): some focus on the individual or individuals (e.g., with/without makeup, age varying, predicting a child appearance from parents, face swapping), while others leverage generative modeling for semi-supervised learning of recognition or detection systems. Besides, generative modeling are methodologies to automatically interrupt and analyze faces for a better understanding of visual context (e.g., relationships of persons in a photo, age estimation, occupation recognition). It is an age where many creative approaches and views are proposed for face synthesizing. 
<p>Also, various advances are being made in other technologies involving automatic face understanding: face tracking (e.g., landmark detection, facial expression analysis, face detection), face characterization (e.g., behavioral understanding, emotion recognition), facial characteristic analysis (e.g., gait, age, gender, and ethnicity recognition), group understanding via social cues (e.g., kinship), and visual sentiment analysis (e.g., temperament, arrangement). The ability to model faces with high certainty has significant value to both the scientific community and the commercial market, with applications spanning HCI, social-media analytics, video indexing, visual surveillance, and online vision.</p>

<p><em>The 2nd Workshop on Faces in Multimedia</em> (FacesMM) serves as a forum for researchers to review recent progress in automatic face understanding and synthesis in multimedia. <strong>Special interest will be given to generative-based modeling.</strong> The workshop will include two keynotes, along with peer-reviewed papers (oral and poster). Novel, high-quality contributions are solicited on the following topics:</p>

<ul>
<li>Face synthesis and morphing; work on generative modeling;</li>
<li>Soft biometrics; profiling faces: age, gender, ethnicity, personality, kinship, occupation, and beauty ranking;</li>
<li>Deep learning practice for social face problems with ambiguity, including kinship verification, family recognition, and retrieval;</li>
<li>Discovery of social groups from faces and context;</li>
<li>Mining social face relations through metadata as well as visual information;</li>
<li>Tracking, extraction, and analysis of face models captured by mobile devices;</li>
<li>Face recognition in low-quality or low-resolution video or images;</li>
<li>Novel mathematical models & algorithms; sensors & modalities for face, body pose, and action representation;</li>
<li>Analysis and recognition for cross-domain social media;</li>
<li>Novel social applications involving detection, tracking & recognition of faces;</li>
<li>Face analysis for sentiment analysis in social media;</li>
<li>Other applications involving face analysis in social media content.</li>
</ul>
<h2>Workshop webpage: <a href="https://web.northeastern.edu/smilelab/facesmm19/index.html">https://web.northeastern.edu/smilelab/facesmm19/index.html</a></h2>

<h2>Previous FacesMM Workshops</h2>
<p>Take a look back at last year's FacesMM workshop: <a href="https://web.northeastern.edu/smilelab/FacesMM2018/">https://web.northeastern.edu/smilelab/FacesMM2018/</a></p>

<h2>Important Dates</h2>
<table>
<tbody>
<tr><td><strong>1 March 2019</strong></td><td>Submission Deadline</td></tr>
<tr><td><strong>20 March 2019</strong></td><td>Notification</td></tr>
<tr><td><strong>15 April 2019</strong></td><td>Camera-Ready Due</td></tr>
</tbody>
</table>

<h2>Author Guidelines</h2>
style="margin:0px 0pt 12px;max-width:650pt;line-height:1.4;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif">Submissions handled via CMT website: <span style="color:rgb(52,152,219)"><a href="https://cmt3.research.microsoft.com/ICME2019W/Submission/Index">https://cmt3.research.microsoft.com/ICME2019W/Submission/Index</a> </span>   <br><br>Following the guideline of ICME2019: <span style="color:rgb(52,152,219)"><a href="http://www.icme2019.org/author_info#General_Information">http://www.icme2019.org/author_info#General_Information</a></span></p><ul style="max-width:650pt;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif"><li>6 pages (including references)</li><li>Anonymous</li><li>Using ICME template</li></ul><h2 style="text-align:left;margin:7pt 0px 6pt;font-size:12pt;border-bottom:2px solid rgb(0,0,0);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;color:rgb(51,51,51);padding:4pt 0pt 6pt;font-family:verdana,sans-serif;border-radius:3px"><br></h2><h2 style="text-align:left;margin:7pt 0px 6pt;font-size:12pt;border-bottom:2px solid rgb(0,0,0);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;padding:4pt 0pt 6pt;font-family:verdana,sans-serif;border-radius:3px"><font color="#ff0000">Organizers</font></h2><p style="margin:0px 0pt 12px;max-width:650pt;line-height:1.4;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif"><strong>Yun Fu</strong>, Northeastern University, <span style="color:rgb(52,152,219)"><a href="http://www1.ece.neu.edu/~yunfu/">http://www1.ece.neu.edu/~yunfu/</a></span></p><p style="margin:0px 0pt 12px;max-width:650pt;line-height:1.4;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif"><strong>Joseph Robinson</strong>, Northeastern University, <span style="color:rgb(52,152,219)"><a href="http://www.jrobsvision.com">http://www.jrobsvision.com</a></span></p><p style="margin:0px 0pt 12px;max-width:650pt;line-height:1.4;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif"><strong>Ming Shao</strong>, University of Massachusetts (Dartmouth), <span style="color:rgb(52,152,219)"><a href="http://www.cis.umassd.edu/~mshao/">http://www.cis.umassd.edu/~mshao/</a></span></p><p style="margin:0px 0pt 12px;max-width:650pt;line-height:1.4;padding-bottom:5pt;color:rgb(0,0,0);font-family:Verdana,Arial,Helvetica,sans-serif"><strong>Siyu Xia</strong>, Southeast University (China), Nanjing, <span style="background-color:rgb(252,252,252);font-family:Lato"><a href="http://www.escience.cn/people/siyuxia/"><span style="color:rgb(52,152,219)">http://www.escience.cn/people/siyuxia/</span><span style="color:rgb(52,152,219)"> </span></a></span></p><h2 style="margin:7pt 0px 6pt;font-size:12pt;border-bottom:2px solid rgb(0,0,0);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;color:rgb(51,51,51);padding:4pt 0pt 6pt;font-family:verdana,sans-serif;border-radius:3px"><br></h2><h2 style="margin:7pt 0px 6pt;font-size:12pt;border-bottom:2px solid rgb(0,0,0);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;padding:4pt 0pt 
<h2>Contact</h2>
<p><strong>Joseph Robinson</strong> (<a href="mailto:robinson.jo@husky.neu.edu">robinson.jo@husky.neu.edu</a>)<br>
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA</p>
<p><strong>Ming Shao</strong> (<a href="mailto:mshao@umassd.edu">mshao@umassd.edu</a>)<br>
Computer and Information Science, University of Massachusetts Dartmouth, Dartmouth, MA, USA</p>
</div>