============================================================
PhD position at University of Caen, CNRS GREYC Laboratory
on "Frugal AI for image segmentation"
============================================================

Topic
-----

The advent of deep learning has been a tsunami in the machine learning community, producing results, especially in computer vision, that would have been unexpected only a few years earlier. For many vision tasks, the performance of deep learning algorithms now matches or even surpasses that of humans.

However, these results have come at the cost of an ever-increasing use of resources: larger models, more time and energy to train them, ever-larger databases, and ever-higher annotation requirements. This growth in resource requirements has major drawbacks, related to the environmental impact of machine learning, the difficulty of deploying models on embedded architectures, and the challenges raised when models must be trained on tasks for which little training data is available.

These observations have recently led some authors [1, 2] to introduce the concept of frugal machine learning, to define what a frugal machine learning methodology should be, and to propose ways of evaluating frugality.

In this thesis, we will study frugality in the context of AI for image segmentation [3]. The objective will be to propose frugal models that deliver efficient results while being structured to reduce time and space complexity. More precisely, we will consider several aspects of frugality, taking inspiration from the following recent works: i) the design of lightweight models [4, 5]; ii) the compression of existing models [6]; iii) the pruning of existing segmentation models [4, 7]; iv) frugality in image labels and zero-shot image segmentation [8, 9]. A small code sketch of direction iii) is given below.
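As a brief illustration of direction iii), the following is a minimal sketch of unstructured magnitude pruning applied to a toy convolutional segmentation head, assuming PyTorch and its torch.nn.utils.prune utilities; the architecture, number of classes, and sparsity level are purely illustrative and not part of the project.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy fully convolutional head: image -> per-pixel class scores.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 4, kernel_size=1),  # 4 hypothetical classes
)

# Zero out the 50% of weights with smallest L1 magnitude in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

x = torch.randn(1, 3, 64, 64)  # one 64x64 RGB image
logits = model(x)              # shape: (1, 4, 64, 64)
sparsity = (model[0].weight == 0).float().mean().item()
print(logits.shape, f"layer-0 sparsity: {sparsity:.0%}")

Such post-training pruning trades a small accuracy drop for a large reduction in effective model size, which is one axis along which the frugality of a segmentation model can be measured.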
Qualifications
--------------

Candidates must hold an MSc or engineering degree in a field related to computer science, electrical engineering, or applied mathematics, and have strong programming skills (in particular with deep learning frameworks). Experience with image processing is a plus. Candidates are also expected to be able to write scientific reports and to communicate research results at conferences in English.

Information and application
---------------------------

The position will start as soon as possible, with a gross salary of 32 kEuros, and will be located in Caen, France.

Applications should include the following documents in electronic format: i) a short motivation letter stating why you are interested in this project; ii) a detailed CV describing your research background related to the position; iii) transcripts of your master's degree; iv) contact information for three references (do not include reference letters with your application; letters will be requested only from short-listed candidates).

Please send your application package to frederic.jurie@unicaen.fr and olivier.lezoray@unicaen.fr.

Ideally located in the heart of Normandy, two hours from Paris and just ten minutes from the beaches, Caen, William the Conqueror's hometown, is a lively and dynamic city.


References
----------

[1] Lingjiao Chen, Matei Zaharia, and James Y. Zou, "FrugalML: How to use ML prediction APIs more accurately and cheaply," in Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
[2] Mikhail Evchenko, Joaquin Vanschoren, Holger H. Hoos, Marc Schoenauer, and Michèle Sebag, "Frugal machine learning," arXiv:2111.03731 [cs, eess], Nov. 2021.
[3] Shervin Minaee, Yuri Y. Boykov, Fatih Porikli, Antonio J. Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos, "Image segmentation using deep learning: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[4] Linjie Wang, Quan Zhou, Chenfeng Jiang, Xiaofu Wu, and Longin Jan Latecki, "DRBANet: A lightweight dual-resolution network for semantic segmentation with boundary auxiliary," arXiv:2111.00509 [cs], Oct. 2021.
[5] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, and Ping Luo, "SegFormer: Simple and efficient design for semantic segmentation with transformers," arXiv:2105.15203 [cs], 2021.
[6] Moonjung Eo, Suhyun Kang, and Wonjong Rhee, "A highly effective low-rank compression of deep neural networks with modified beam-search and modified stable rank," arXiv:2111.15179 [cs], Nov. 2021.
[7] Wei He, Meiqing Wu, Mingfu Liang, and Siew-Kei Lam, "CAP: Context-aware pruning for semantic segmentation," in IEEE Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 959–968.
[8] Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Han Hu, and Xiang Bai, "A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model," arXiv:2112.14757 [cs], Dec. 2021.
[9] Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, and Patrick Pérez, "Zero-shot semantic segmentation," in Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019, pp. 466–477.

A PDF version of this announcement is available at
https://lezoray.users.greyc.fr/tmp/PhD_FrugalAI.pdf
------------------------------------------------------------

Olivier LÉZORAY
Full Professor of Computer Science
University of Caen Normandy

West Normandy Institute of Technology
Multimedia and Internet Department
F-50000 SAINT-LÔ
+33 (0)2 33 77 55 14

GREYC UMR CNRS 6072
Image Team - ENSICAEN
6 Bd. Marechal Juin
F-14000 CAEN
+33 (0)2 31 45 29 27

https://lezoray.users.greyc.fr