<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p><span style="font-size:16.0pt" lang="EN-US">PhD position in </span><i><span
      style="font-size:16.0pt" lang="EN-US">Material classification based on
      visual appearance</span></i></p>
<p style="margin-top:6.0pt;text-align:justify"><span lang="EN-US">The Image
    Science and Computer Vision team of the Hubert Curien Laboratory
    (<a class="moz-txt-link-freetext"
    href="https://laboratoirehubertcurien.univ-st-etienne.fr/en/index.html">https://laboratoirehubertcurien.univ-st-etienne.fr/en/index.html</a>)
    is looking for candidates for a PhD position on <i>Transfer Learning for
    Material classification based on visual appearance
    correspondences</i>.</span></p>
<p style="margin-top:6.0pt;text-align:justify"><span lang="EN-US">Image
    classification has received a lot of interest in the last decade, and huge
    improvements in classification accuracy have been observed on classical
    datasets such as PASCAL VOC or ImageNet. Nevertheless, material
    classification is still an open problem because of the high variability of
    material appearance in images and because of the lack of training data. To
    cope with these problems, recent papers resort to convolutional neural
    networks
    (<a href="https://arxiv.org/ftp/arxiv/papers/1710/1710.06854.pdf">https://arxiv.org/ftp/arxiv/papers/1710/1710.06854.pdf</a>)
    to learn this variability, as well as to transfer learning approaches that
    make it possible to learn from several datasets and thus increase the
    amount of training data
    (<a href="https://arxiv.org/pdf/1609.06188.pdf">https://arxiv.org/pdf/1609.06188.pdf</a>).</span></p>
<p style="margin-top:6.0pt;text-align:justify"><span lang="EN-US">The aim of
    this PhD project is to study the visual appearance of materials from a
    computer vision perspective by combining computer vision techniques with
    machine learning and data mining techniques. More and more, the design of
    new materials with specific visual appearance properties relies on
    computer-based approaches (see refs. 3, 4 and 5).</span></p>
<p style="margin-top:6.0pt;text-align:justify"><span lang="EN-US">The
    objectives will be:</span></p>
<p style="margin-top:6.0pt;margin-bottom:5.0pt;margin-left:35.7pt;text-align:justify;text-indent:-17.85pt"><span
    lang="EN-US">1. To study different strategies to fuse/combine different
    datasets, to enrich existing datasets using data augmentation methods
    (e.g. light variations, scale, shadows, …), to transfer knowledge learnt
    from one dataset to another (e.g. see
    <a href="https://arxiv.org/pdf/1609.06188.pdf">https://arxiv.org/pdf/1609.06188.pdf</a>),
    to mine/infer knowledge from data, etc.</span></p>
<p style="margin-left:36.0pt;text-align:justify;text-indent:-18.0pt"><span
    lang="EN-US">2. To create a new dataset of material images that would
    complement the existing synthesized and real-world ones: the Flickr
    Material Database (Sharan et al., 2010), the ImageNet7 dataset (Hu et al.,
    2011), MINC-2500 (Bell et al., 2015), the University of Bonn synthetic
    dataset (Weinmann et al., 2014), …</span></p>
<p style="margin-left:36.0pt;text-align:justify;text-indent:-18.0pt"><span
    lang="EN-US">3. To classify images of materials according to their visual
    appearance in order to infer/learn new knowledge about material properties
    (for example using auto-encoders, see
    <a href="https://arxiv.org/pdf/1711.03678.pdf">https://arxiv.org/pdf/1711.03678.pdf</a>;
    a rough sketch follows this list). Several machine learning and data
    mining methods (e.g. CNNs, deep learning) will be investigated.</span></p>
<p style="margin-top:5.0pt;margin-bottom:6.0pt;margin-left:35.7pt;text-align:justify;text-indent:-17.85pt"><span
    lang="EN-US">4. To learn how to characterize the visual appearance of some
    materials from a limited set of features and image acquisitions. An
    auto-encoder could be a useful tool to access semantic features and
    observe their impact on the reconstructed material images. This could also
    help with material design.</span></p>
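<p style="margin-top:6.0pt;text-align:justify"><span lang="EN-US">As a rough
    illustration of the auto-encoder idea in objectives 3 and 4, the sketch
    below (again assuming Python with PyTorch, purely as an example) builds a
    small convolutional auto-encoder whose latent code could be inspected or
    perturbed to observe its effect on reconstructed material images.</span></p>
<pre>
# Minimal convolutional auto-encoder sketch (assumption: PyTorch; illustrative only).
# The latent code z plays the role of the "semantic features" mentioned above;
# perturbing z and decoding shows its impact on the reconstructed material image.
import torch
import torch.nn as nn

class MaterialAutoEncoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)            # compact appearance code
        return self.decoder(z), z

# Usage sketch: reconstruct a batch of 64x64 material crops.
model = MaterialAutoEncoder()
images = torch.rand(8, 3, 64, 64)      # placeholder batch
recon, z = model(images)
loss = nn.functional.mse_loss(recon, images)
</pre>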
<p style="margin-top:6.0pt;text-align:justify"><span
style="mso-bidi-font-weight:bold" lang="EN-US">The thesis will
be co-supervised by Alain Trémeau (Full Professor, <a
href="https://perso.univ-st-etienne.fr/tremeaua/">https://perso.univ-st-etienne.fr/tremeaua/</a>)
and Damien Muselet (Assistant Professor,</span><span
lang="EN-US"> <span style="mso-bidi-font-weight:bold"><a
href="https://perso.univ-st-etienne.fr/muda8804/">https://perso.univ-st-etienne.fr/muda8804/</a>). </span></span></p>
<p style="text-align:justify"><b style="mso-bidi-font-weight:normal"><span
lang="EN-US">The deadline for applications is 06/05/2018.</span></b></p>
<p class="MsoNormal"><span style="font-size:6.0pt;line-height:107%;
font-family:"Times New Roman",serif" lang="EN-US"> </span></p>
<p class="MsoNormal"><b style="mso-bidi-font-weight:normal"><span
style="font-size:16.0pt;line-height:107%;font-family:"Times
New Roman",serif; mso-ansi-language:FR">Bibliography</span></b></p>
<p class="MsoListParagraphCxSpFirst"
style="margin-top:6.0pt;margin-right:0cm;
margin-bottom:6.0pt;margin-left:35.7pt;mso-add-space:auto;text-indent:-17.85pt;
mso-list:l0 level1 lfo1"><span class="MsoHyperlink"><span
style="color:#0563C1;mso-ansi-language:EN-US;text-decoration:none;
text-underline:none" lang="EN-US"><span
style="mso-list:Ignore">1.<span style="font:7.0pt
"Times New Roman""> </span></span></span></span><span
dir="LTR"></span><span style="mso-ansi-language:EN-US"
lang="EN-US">Sébastien Lagarde, “Open Problems in Real-Time
Rendering-Physically-Based Materials: Where Are We?” in ACM
SIGGRAPH 2017, </span><a
href="HTTP://openproblems.realtimerendering.com/s2017/02-PhysicallyBasedMaterialWhereAreWe.pdf"><span
style="mso-ansi-language:EN-US" lang="EN-US">http://openproblems.realtimerendering.com/s2017/02-PhysicallyBasedMaterialWhereAreWe.pdf</span></a><span
class="MsoHyperlink"><span
style="color:#0563C1;mso-ansi-language: EN-US" lang="EN-US"></span></span></p>
<p class="MsoListParagraphCxSpMiddle"
style="margin-top:6.0pt;margin-right:0cm;
margin-bottom:6.0pt;margin-left:35.7pt;mso-add-space:auto;text-indent:-17.85pt;
mso-list:l0 level1 lfo1"><span
style="color:#0563C1;mso-ansi-language:EN-US" lang="EN-US"><span
style="mso-list:Ignore">2.<span style="font:7.0pt "Times
New Roman""> </span></span></span><span dir="LTR"></span><span
style="letter-spacing:.1pt;mso-ansi-language: EN-US"
lang="EN-US">(2018) </span><span
style="mso-ansi-language:EN-US" lang="EN-US">G. Kalliatakis, A.
Sticlaru, G. Stamatiadis, S. Ehsan, A. Leonardis, J. Gall and K.
D. McDonald-Maier, Material Classification in the Wild: Do
Synthesized Training Data Generalise Better than Real-world
Training Data?<span style="mso-spacerun:yes"> </span>Proceedings
of VISAPP’2018.<u><span style="color:#0563C1"></span></u></span></p>
<p class="MsoListParagraphCxSpMiddle"
style="margin-top:6.0pt;margin-right:0cm;
margin-bottom:6.0pt;margin-left:35.7pt;mso-add-space:auto;text-indent:-17.85pt;
mso-list:l0 level1 lfo1"><span class="LienInternet"><span
style="mso-ansi-language:EN-US;text-decoration:none;text-underline:
none" lang="EN-US"><span style="mso-list:Ignore">3.<span
style="font:7.0pt "Times New Roman""> </span></span></span></span><span
dir="LTR"></span><span
style="letter-spacing:.1pt;mso-ansi-language:EN-US" lang="EN-US">(2018)
Reviewing the Novel Machine Learning Tools for Materials Design.
</span><a
href="https://link.springer.com/chapter/10.1007/978-3-319-67459-9_7"><span
style="mso-ansi-language:EN-US" lang="EN-US">https://link.springer.com/chapter/10.1007/978-3-319-67459-9_7</span></a><span
class="LienInternet"><span style="mso-ansi-language:EN-US"
lang="EN-US">; </span></span></p>
<p class="MsoListParagraphCxSpMiddle"
style="margin-top:6.0pt;margin-right:0cm;
margin-bottom:6.0pt;margin-left:35.7pt;mso-add-space:auto;text-indent:-17.85pt;
mso-list:l0 level1 lfo1"><span class="LienInternet"><span
style="mso-ansi-language:EN-US;text-decoration:none;text-underline:
none" lang="EN-US"><span style="mso-list:Ignore">4.<span
style="font:7.0pt "Times New Roman""> </span></span></span></span><span
dir="LTR"></span><span
style="letter-spacing:.1pt;mso-ansi-language:EN-US" lang="EN-US">(2017)
</span><span style="mso-ansi-language:EN-US" lang="EN-US">Data
mining-aided materials discovery and optimization,<span
style="mso-spacerun:yes"> </span></span><a
href="http://www.sciencedirect.com/science/article/pii/S2352847817300618"><span
class="LienInternet"><span
style="color:blue;mso-ansi-language:EN-US" lang="EN-US">http://www.sciencedirect.com/science/article/pii/S2352847817300618</span></span></a><span
class="LienInternet"><span style="mso-ansi-language:EN-US"
lang="EN-US">; </span></span></p>
<p class="MsoListParagraphCxSpMiddle"
style="margin-top:6.0pt;margin-right:0cm;
margin-bottom:6.0pt;margin-left:35.7pt;mso-add-space:auto;text-indent:-17.85pt;
mso-list:l0 level1 lfo1"><span class="LienInternet"><span
style="mso-ansi-language:EN-US;text-decoration:none;text-underline:
none" lang="EN-US"><span style="mso-list:Ignore">5.<span
style="font:7.0pt "Times New Roman""> </span></span></span></span><span
dir="LTR"></span><span class="text"><span
style="mso-ansi-language:EN-US" lang="EN-US">(2017) </span></span><span
class="author-ref"><sup><span style="mso-ansi-language:EN-US"
lang="EN-US"><span style="mso-spacerun:yes"> </span></span></sup></span><span
style="mso-ansi-language:EN-US" lang="EN-US">Materials discovery
and design using machine learning, </span><a
href="http://www.sciencedirect.com/science/article/pii/S2352847817300515"><span
style="mso-ansi-language:EN-US" lang="EN-US">http://www.sciencedirect.com/science/article/pii/S2352847817300515</span></a><span
class="LienInternet"><span style="mso-ansi-language:EN-US"
lang="EN-US">; </span></span></p>
<p class="MsoListParagraphCxSpLast"
style="margin-top:6.0pt;margin-right:0cm;
margin-bottom:6.0pt;margin-left:35.7pt;mso-add-space:auto;text-indent:-17.85pt;
mso-list:l0 level1 lfo1"><span
style="color:#0563C1;mso-ansi-language:EN-US" lang="EN-US"><span
style="mso-list:Ignore">6.<span style="font:7.0pt "Times
New Roman""> </span></span></span><span dir="LTR"></span><span
style="mso-ansi-language:EN-US;mso-fareast-language: ZH-TW"
lang="EN-US">(2016) An intuitive control space for material
appearanc<span style="mso-bidi-font-weight:bold">e<b>, </b></span></span><a
href="https://dl.acm.org/citation.cfm?id=2980242"><span
style="mso-ansi-language:EN-US" lang="EN-US">https://dl.acm.org/citation.cfm?id=2980242</span></a><u><span
style="color:#0563C1;mso-ansi-language:EN-US" lang="EN-US"></span></u></p>
<h2 style="margin-bottom:6.0pt"><span style="font-size:16.0pt"
    lang="EN-US">Required skills</span></h2>
<p style="margin-top:6.0pt;text-align:justify"><span
style="mso-bidi-font-weight:bold" lang="EN-US">The desired
profile is Master (MSc or equivalent) or Engineer degree in
Machine Learning and Data Mining / Image Processing and Computer
Vision / Computer Science and Applied Mathematics, with
excellent academic record and research experience, in-depth
knowledge of machine learning (Computational Neural Networks,
Deep Learning), data mining (Transfer Knowledge), optimization
methods, with a specialization in one of the following areas:
machine learning, data mining or computer vision.</span></p>
<p style="margin-top:6.0pt;text-align:justify"><span lang="EN-US">We
are looking for a curious student with excellent programming
skills (e.g., in Matlab, Python, or C/C++).<span
style="mso-bidi-font-weight:bold"></span></span></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;margin-bottom:6.0pt;
line-height:normal;mso-outline-level:2"><b><span style="font-size:
16.0pt;font-family:"Times New
Roman",serif;mso-fareast-font-family:"Times New
Roman"" lang="EN-US">Application</span></b></p>
<p style="margin-top:6.0pt"><span lang="EN-US">Interested candidates
should send a resume, a cover letter, and transcripts of BSc and
MSc (M1 and M2 years). Recommendation letters will be
appreciated.</span></p>
<p style="margin-top:6.0pt"><span lang="EN-US">All applications must
be sent electronically to Alain Trémeau (<a
href="mailto:alain.tremeau@univ-st-etienne.fr">alain.tremeau@univ-st-etienne.fr</a>)
and Damien Muselet (<a
href="mailto:damien.muselet@univ-st-etienne.fr">damien.muselet@univ-st-etienne.fr</a>)
</span></p>
<h2 style="margin-bottom:6.0pt"><span style="font-size:16.0pt"
lang="EN-US">Contract</span></h2>
<p style="margin-top:6.0pt"><span lang="EN-US">3-years contract on
the basis of a monthly gross income of 1 760 euros
approximatively. Part-time teaching can be considered. Start in
autumn 2018.</span></p>
<div class="moz-signature">-- <br>
<meta charset="UTF-8">
<br>
<br>
<table style="background-color: #FFFFFF; border-collapse:
collapse;">
<tbody>
<tr style="height: 120px">
<td style="vertical-align: top; padding:0px; padding-right:
10px; border-right: 1px solid #E9540D"><a
href="http://laboratoirehubertcurien.fr"><img
src="cid:part15.90DC1639.F218C07F@univ-st-etienne.fr"></a></td>
<td style="vertical-align: top; padding:0px; padding-top:
3px; padding-left: 10px; padding-right: 10px;
border-right: 1px solid #E9540D;">
<table style="font: 10pt 'Tahoma', sans-serif;">
<tbody>
<tr>
<td style="font-weight: 700; color:#000000;
font-size: 10pt;">Alain <span
style="text-transform:uppercase;">Tremeau</span></td>
</tr>
<tr>
<td style="color:#E9540D">Professor</td>
</tr>
<tr>
<td style="color:#E9540D">Academic Coordinator of
Masters COSI/CIMET and 3DMT,
<a class="moz-txt-link-freetext" href="http://www.master-colorscience.eu/">http://www.master-colorscience.eu/</a>,
<a class="moz-txt-link-freetext" href="http://master-3dmt.eu/">http://master-3dmt.eu/</a></td>
</tr>
<tr>
<td style="color:#E9540D">Visit my homepage:
<a class="moz-txt-link-freetext" href="http://perso.univ-st-etienne.fr/tremeaua/">http://perso.univ-st-etienne.fr/tremeaua/</a></td>
</tr>
<tr>
<td style="color:#E9540D"><a style="text-decoration:
inherit; color: inherit;"
href="mailto:alain.tremeau@univ-st-etienne.fr">alain.tremeau@univ-st-etienne.fr</a></td>
</tr>
<tr>
<td style="color:#E9540D">Tél. : 04 77 91 57 52</td>
</tr>
</tbody>
</table>
</td>
<td style="vertical-align: top; padding:0px; padding-top:
3px; padding-left: 10px; padding-right: 10px;
border-right: 1px solid #E9540D;">
<table style="font: 10pt 'Tahoma', sans-serif;">
<tbody>
<tr>
<td style="font-weight: 700; color:#000000;
font-size: 10pt;">Laboratoire Hubert Curien</td>
</tr>
<tr>
<td style="color:#E9540D">Image Science &
Computer Vision Group</td>
</tr>
<tr>
<td style="color:#E9540D">Campus Manufacture</td>
</tr>
<tr>
<td style="color:#E9540D">23 RUE Dr Paul Michelon</td>
</tr>
<tr>
<td style="color:#E9540D">42023 SAINT-ETIENNE CEDEX
2</td>
</tr>
<tr>
<td style="color:#E9540D">04 77 91 57 80</td>
</tr>
<tr>
<td style="color:#E9540D"><a style="text-decoration:
inherit; color: inherit;"
href="http://laboratoirehubertcurien.fr">http://laboratoirehubertcurien.fr</a></td>
</tr>
</tbody>
</table>
</td>
<td style="vertical-align: top; padding:0px; padding-left:
10px; padding-right: 10px; border-right: 1px solid
#E9540D;">
<table>
<tbody>
<tr>
<td colspan="5"><a
href="http://www.univ-st-etienne.fr"><img
src="cid:part19.0E5941FA.DBD2AB12@univ-st-etienne.fr"></a></td>
</tr>
<tr>
</tr>
<tr>
<td style="margin-right: 4px;"><a
style="border-style: none;"
href="http://www.facebook.com/Universite.Jean.Monnet.Saint.Etienne"><img
src="cid:part21.7E76A31C.3B4D4F14@univ-st-etienne.fr"></a></td>
<td style="margin-right: 4px;"><a
style="border-style: none;"
href="https://twitter.com/Univ_St_Etienne"><img
src="cid:part23.B45C7A88.0A99C315@univ-st-etienne.fr"></a></td>
<td style="margin-right: 4px;"><a
style="border-style: none;"
href="https://www.youtube.com/user/UniJeanMonnetUJM"><img
src="cid:part25.431C42E5.D632E5EB@univ-st-etienne.fr"></a></td>
<td style="margin-right: 4px;"><a
style="border-style: none;"
href="https://www.linkedin.com/edu/universit%C3%A9-jean-monnet-saint-etienne-12533"><img
src="cid:part27.B8D3F196.26BE4E33@univ-st-etienne.fr"></a></td>
<td style="margin-right: 4px;"><a
style="border-style: none; text-decoration:
none;"
href="http://www.viadeo.com/fr/company/universite-jean-monnet"><img
src="cid:part29.630A82E8.5953930E@univ-st-etienne.fr"></a></td>
</tr>
</tbody>
</table>
</td>
<td style="vertical-align: top; padding:0px; padding-top:
3px; padding-left: 10px; padding-right: 10px;"> <a
href="http://www.universite-lyon.fr"><img
src="cid:part31.BB6A7900.F86C1153@univ-st-etienne.fr"></a>
</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>