<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><b class="">Postdoctoral Position: Face Inference from Voice</b></div><div class=""><br class=""></div><div class=""><b class="">Context</b></div><div class="">--------------</div><div class="">The GREYC laboratory (UMR CNRS 6072) of the University of Caen Normandy and ENSICAEN (Caen, France), in collaboration with the company United Biometrics (Caen, France), is launching a call for applications for a postdoctoral research position (duration: 3 years) on face inference from voice. The work will be done within the framework of the BIOPOP (BIOmétrie Pour les Opérations) project funded by the AID (Agence Innovation Défense).</div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><b class="">Missions</b></div><div class="">--------------</div><div class="">The goal is to infer information about a person’s face from a raw recording of his or her voice. Recent preliminary works [1, 2, 3, 4] have shown the feasibility of this inference. The aim is not to generate the exact face corresponding to the voice, but rather a face that highlights its main discriminating characteristics (gender, age, ethnicity, craniofacial attributes). This can be of great interest for security applications: inferring a face from a voice allows an operator to carry out various tasks on the inferred face image, for example verifying the consistency between a voice and a face, or searching for the inferred face in a database.</div><div class="">The post-doctoral fellow will first establish a detailed state of the art of face inference from voice methods. He will then implement a state-of-the-art solution based on generative models. 
Finally, he will develop a new, more efficient generative model that guarantees the generation of realistic faces, in line with the expectations of the BIOPOP project.</div><div class=""><br class=""></div><div class=""><b class="">Skills</b></div><div class="">--------------</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Ph.D. in computer science with a specialization in machine learning.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Solid knowledge of deep learning and computer vision.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Publications in major conferences in the field.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Strong software development/programming skills, especially in Python/PyTorch.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Good written and verbal communication skills are required; the candidate must be fluent in French and proficient in written English.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Interpersonal skills and the ability to work both individually and as a member of a project team are expected.</div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><b class="">General Information</b></div><div class="">--------------</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Research laboratory: The GREYC laboratory (UMR CNRS 6072) is a joint research unit in digital sciences under the supervision of ENSICAEN, CNRS and the University of Caen Normandy (UNICAEN). 
The work will be carried out within the Image team, whose research activities focus on developing new methods for the processing and analysis of signals, images and videos.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Place: Caen (France), in the Normandy region near the sea, about 240 km west of Paris. The city retains many old neighborhoods and has a population of about 120,000 (about 250,000 in the metropolitan area), including more than 30,000 students at the University.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Duration: 36 months.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Salary: about 2900€ gross per month.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>To apply: Interested candidates should submit (by email, in a single PDF file) their curriculum vitae, list of publications, a cover letter, and contact information for three references (do not include letters of reference with your application; we will request them only from short-listed candidates). Applications will be accepted until the position is filled. 
The position will begin in early October.</div><div class="">—<span class="Apple-tab-span" style="white-space:pre"> </span>Contact / supervision:</div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>—<span class="Apple-tab-span" style="white-space:pre"> </span>Olivier Lézoray (<a href="mailto:olivier.lezoray@unicaen.fr" class="">olivier.lezoray@unicaen.fr</a>, Full Professor, UNICAEN, GREYC)</div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>—<span class="Apple-tab-span" style="white-space:pre"> </span>Sébastien Bougleux (<a href="mailto:sebastien.bougleux@unicaen.fr" class="">sebastien.bougleux@unicaen.fr</a>, Associate Professor, UNICAEN, GREYC)</div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>—<span class="Apple-tab-span" style="white-space:pre"> </span>Christophe Charrier (<a href="mailto:christophe.charrier@unicaen.fr" class="">christophe.charrier@unicaen.fr</a>, Associate Professor (HDR), UNICAEN, GREYC)</div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><b class="">References</b></div><div class="">--------------</div><div class="">[1]<span class="Apple-tab-span" style="white-space:pre"> </span>Amanda Cardoso Duarte, Francisco Roldan, Miquel Tubau, Janna Escur, Santiago Pascual, Amaia Salvador, Eva Mohedano, Kevin McGuinness, Jordi Torres, and Xavier Giró-i-Nieto, “Wav2pix: Speech-conditioned face generation using generative adversarial networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), Brighton, United Kingdom, May 12–17, 2019, pp. 8633–8637, IEEE.</div><div class="">[2]<span class="Apple-tab-span" style="white-space:pre"> </span>Zheng Fang, Zhen Liu, Tingting Liu, Chih-Chieh Hung, Jiangjian Xiao, and Guangjin Feng, “Facial expression GAN for voice-driven face generation,” Vis. Comput., vol. 38, no. 3, pp. 
1151–1164, 2022.</div><div class="">[3]<span class="Apple-tab-span" style="white-space:pre"> </span>Tae-Hyun Oh, Tali Dekel, Changil Kim, Inbar Mosseri, William T. Freeman, Michael Rubinstein, and Wojciech Matusik, “Speech2face: Learning the face behind a voice,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), Long Beach, CA, USA, June 16–20, 2019, pp. 7539–7548, Computer Vision Foundation / IEEE.</div><div class="">[4]<span class="Apple-tab-span" style="white-space:pre"> </span>Yandong Wen, Bhiksha Raj, and Rita Singh, “Face reconstruction from voice using generative adversarial networks,” in Advances in Neural Information Processing Systems 32 (NeurIPS 2019), December 8–14, 2019, Vancouver, BC, Canada, Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, Eds., 2019, pp. 5266–5275.</div><div class=""><br class=""></div><div class="">
<hr class="">
<table width="500" cellspacing="0" cellpadding="0" class="">
<tbody class=""><tr class=""> <td style="vertical-align: top; text-align:left;color:#000000;font-size:12px;font-family:helvetica, arial;; text-align:left" class="">
<span class=""><b class=""><span style="color:#000000;font-size:15px;font-family:helvetica, arial" class="">Olivier LÉZORAY</span></b><br class=""> Full Professor of Computer Science</span> <br class=""><br class="">
<span style="color:#000000;font-size:15px;font-family:helvetica, arial" class=""><b class="">University of Caen Normandy</b>
<table class="">
<tbody class=""><tr style="font-size:8pt;font-family:helvetica, arial" class="">
<td class="">West Normandy Institute of Technology<br class="">Multimedia and Internet Department<br class="">F-50000 SAINT-LÔ
<a href="tel:+33 2 33 77 55 14" style="color:#3388cc;text-decoration:none" class="">+33(0)233775514</a>
</td>
<td class="">
GREYC UMR CNRS 6072<br class="">Image Team - ENSICAEN<br class="">6 Bd. Marechal Juin<br class="">F-14000 CAEN
<a href="tel:+33 2 31 45 29 27" style="color:#3388cc;text-decoration:none" class="">+33(0)231452927</a>
</td>
</tr>
</tbody></table>
</span>
<table cellpadding="0" border="0" class=""><tbody class=""><tr class=""><td style="padding-right:4px" class=""><a href="https://linkedin.com/in/olivier-lezoray-0983114/" style="display: inline-block" class=""><img width="30" height="30" src="https://s1g.s3.amazonaws.com/7583fe34c2ad59e0367b6f4773f07bf3.png" alt="LinkedIn" style="border:none" class=""></a></td><td style="padding-right:4px" class=""><a href="skype:olezoray" style="display: inline-block" class=""><img width="30" height="30" src="https://s1g.s3.amazonaws.com/7b0d8c63303d92a487c23d47895fec48.png" alt="Skype" style="border:none" class=""></a></td></tr></tbody></table><a href="https://lezoray.users.greyc.fr" style="text-decoration:none;color:#3388cc" class="">https://lezoray.users.greyc.fr</a><br class=""> </td> <td style="border-right:solid #000000 2px" width="12" class=""></td>
<td width="138" style="vertical-align:top;padding-left:10px" class=""><a style="display:inline-block" href="https://unicaen.fr" class=""><img style="border:none" width="138" src="https://s1g.s3.amazonaws.com/a85db239c732c19b92021c4f24668e70.png" class=""></a></td>
</tr>
</tbody></table>
</div>
<br class="">
<br>
<br></body></html>