Dear Colleagues,

Apologies for cross-posting.

British Machine Vision Association (BMVA) One-Day Symposium on Human Activity Recognition and Monitoring

Location: BCS (British Computer Society), London, 8 November 2017
http://www.bmva.org/meetings

Abstract submission deadline: 28 September 2017
https://goo.gl/nXXy3f

Confirmed Keynote Speakers:
Prof David Hogg (University of Leeds)
Dr Alessandro Vinciarelli (University of Glasgow)
Prof Ian Craddock (University of Bristol)
Prof Yiannis Demiris (Imperial College London)

*********

This BMVA one-day meeting will present state-of-the-art developments in human sensing, an area that has attracted attention from several computer science communities because of its connections to many fields of study. Human activity analysis and recognition has therefore become a research area of great interest, owing to its potential applications in intelligent environments (smart homes, smart vehicles, smart care homes, smart factories, etc.), security and surveillance, human-robot collaboration, human-machine interaction, assistive technologies, biomechanical study of athletes, physical activity and sedentary behaviour monitoring, virtual and augmented reality, physical therapy and rehabilitation, and many more.

Recent advances in visual, depth and inertial sensors, together with algorithms for data/signal acquisition and processing, have led to progress in the detection, tracking and analysis of human activities, as well as a more fundamental understanding of long-term modelling and recognition of human behaviour in complex scenarios. Human activity and behaviour span varied modalities and numerous scales, from a single person to small and larger groups.

This one-day meeting is dedicated to bringing together leading researchers, at various stages of their careers, with expertise or a strong interest in technical advances in activity analysis and recognition, and to gathering the latest approaches in this domain. We hope the meeting will stimulate future research, from both theoretical and practical perspectives, and further advances in the field.

We welcome contributions to this meeting in the form of oral presentations, posters and demos.
Suggested topics include, but are not limited to:
• Multi-modal (visual, depth and inertial sensors) human activity modelling and recognition
• Human motion analysis
• Scene analysis and understanding
• Real-time activity recognition and monitoring
• Social signal processing
• Emotion recognition
• Single-user, multi-user and group activity recognition
• Human pose recognition
• People detection and tracking
• Activity modelling and recognition through logic and reasoning
• Long-term modelling and recognition
• Data/pattern-mining-based approaches to activity recognition
• Context-aware activity monitoring
• Activity recognition for personalised services

This list is not exhaustive, and you are welcome to submit an abstract on any research related to human activity recognition and monitoring. The work may be recently published, in progress, or novel. You may also include links or pointers to web-based materials, demonstrations or papers giving more details. We encourage submissions from students, academia and industry, including interdisciplinary work and work from those outside the mainstream computer vision community.

Many thanks and kind regards,

Chairs: Ardhendu Behera (Edge Hill University), Nicola Bellotto (University of Lincoln) & Charith Abhayaratne (University of Sheffield)