<div dir="ltr"><div dir="ltr"><div dir="ltr"><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Apologies for cross-posting</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">*******************************</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">* International Journal of Computer Vision <span class="gmail-m_3363669033080707004gmail-il">Special</span> <span class="gmail-m_3363669033080707004gmail-il">Issue</span> on Efficient Visual Recognition</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">* Website:</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><a href="http://www.ee.oulu.fi/~lili/IJCVSIEVR2018.htm" target="_blank">http://www.ee.oulu.fi/~lili/IJCVSIEVR2018.htm</a></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">* Date:</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Submission of full papers: <font color="#ff0000"><b>April 20th, 2019 [NEW]</b></font></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">* Guest Editors:</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">- Li Liu, National University of Defense Technology, China & University of Oulu, Finland</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">- Matti Pietikäinen, University of Oulu, Finland</div><div 
style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">- Jie Qin, ETH Zürich, Switzerland</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">- Jie Chen, University of Oulu, Finland</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">- Wanli Ouyang, University of Sydney, Australia</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">- Luc Van Gool, ETH Zürich, Switzerland</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">============================ Scope =================================</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Visual recognition plays a central role in computer vision. A large number of vision tasks fundamentally rely on the ability to recognize and localize faces, people, objects, scenes, places, attributes, actions and relations. Visual recognition thus touches many areas of artificial intelligence and information retrieval, such as image search, visual surveillance, video data mining, question answering, autonomous driving and robotic interactions.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Feature representation is the core of visual recognition. Milestone handcrafted feature descriptors such as Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) dominated visual recognition for years, until the turning point in 2012, when Deep Convolutional Neural Networks (DCNNs) achieved record-breaking image classification accuracy.
Since DCNNs entered the scene, visual recognition has been undergoing a revolution, and tremendous progress (such as achieving superhuman accuracy) has been made thanks to the availability of large visual datasets and GPU computing resources. Hand in hand with this went the development of ever deeper and larger DCNNs that automatically learn increasingly powerful feature representations with multiple levels of abstraction from big data.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">In many real-world applications, recognizing efficiently is as critical as recognizing accurately. Significant progress has been made in the past few years to boost the accuracy of visual recognition, but existing solutions often rely on computationally expensive feature representation and learning approaches, which are too slow for numerous applications. In addition to the opportunities they offer, large visual datasets also pose the challenge of scaling up learning approaches and representations, both handcrafted and deeply learned, while retaining their efficiency.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">In addition, given a sufficient amount of annotated visual data, some existing features, especially DCNN features, have been shown to yield high accuracy for visual recognition. However, in many applications only limited amounts of annotated training data are available, or collecting labeled training data is too expensive. Such applications pose great challenges for many existing features.
Finally, with the prevalence of social media networks and mobile/wearable devices, which have limited computational capabilities and storage space, the demand for sophisticated mobile/wearable applications that can handle visual big data is rising. In such applications, real-time performance is of utmost importance, as users are rarely willing to wait. Therefore, there is a growing need for visual features that are fast to compute and memory efficient, yet exhibit good discriminability and robustness for visual recognition.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">============================ Topics ================================</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">We encourage researchers to study and develop novel visual recognition approaches that are computationally and memory efficient, yet achieve good recognition accuracy.
We aim to solicit original contributions that: (1) present state-of-the-art theories related to efficient visual recognition; (2) explore novel algorithms and applications; (3) survey the recent progress in this field; and (4) establish benchmark datasets.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">The list of possible topics includes, but is not limited to:</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Hashing/binary coding and its related applications</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Compact and efficient convolutional neural networks</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Efficient handcrafted feature design</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Fast features tailored to wearable/mobile devices</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Efficient dimensionality reduction and feature selection</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Sparse representation and its related applications</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Evaluations of current handcrafted descriptors and deep learning-based features</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ DCNN compression/quantization/binarization</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Hybrid methods combining the strengths of handcrafted and learning-based approaches</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Efficient feature learning for applications with limited
amounts of annotated training data</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">§ Efficient approaches to increase the invariance of DCNNs</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Priority will be given to research papers with high novelty and originality, and to survey/overview papers with high potential impact.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">======================= Paper Submission and Review ===========================</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Authors are encouraged to submit original work that has not appeared in, nor is under consideration by, other journals.
Papers extending previously published conference papers may be submitted, as long as the journal submission provides a significant contribution beyond the conference paper (the overlap should be clearly described at the beginning of the journal submission).</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Manuscripts will be subject to peer review and must conform to the author guidelines available on the <span class="gmail-m_3363669033080707004gmail-il">IJCV</span> website under Instructions for Authors on the right panel.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Authors need to submit full papers online through the <span class="gmail-m_3363669033080707004gmail-il">IJCV</span> submission site at <a href="http://visi.edmgr.com" target="_blank">http://visi.edmgr.com</a>, selecting the option that indicates this <span class="gmail-m_3363669033080707004gmail-il">special</span> <span class="gmail-m_3363669033080707004gmail-il">issue</span>, Efficient Visual Recognition.</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">*******************************</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">We look forward to your contributions!</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Best Regards,</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div
style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Jie Qin</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Research Scientist,</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Inception Institute of Artificial Intelligence,</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">Abu Dhabi, United Arab Emirates</div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div></div></div></div>