<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none"><!--P{margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr" style="font-size:12pt;color:#000000;background-color:#FFFFFF;font-family:Calibri,Arial,Helvetica,sans-serif;">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;text-align:center" id="m_-5409722656072389588gmail-docs-internal-guid-56abba12-7fff-216a-b7a5-48d7e04fbdf6">
<span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:700;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Call for Papers/Participation: SKELNETON
 Challenge</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;text-align:center">
<span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Deep Learning for Geometric Shape Understanding
 Workshop</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;text-align:center">
<span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">in conjunction with CVPR 2019</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;text-align:center">
<span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">June 17, 2019</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;text-align:center">
<span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Long Beach, CA</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;text-align:center">
<a href="http://ubee.enseeiht.fr/skelneton/" style="text-decoration:none" target="_blank" data-saferedirecturl="https://www.google.com/url?q=http://ubee.enseeiht.fr/skelneton/&source=gmail&ust=1550861774889000&usg=AFQjCNFhncyLjw2hyZlwvE-okE-j2f_DuQ"><span style="font-size:11pt;font-family:Arial;color:rgb(17,85,204);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:underline;vertical-align:baseline;white-space:pre-wrap">http://ubee.enseeiht.fr/<wbr>skelneton/</span></a></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:700;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Computer
 vision approaches have made tremendous efforts toward understanding shape from various data formats, especially since entering the deep learning era. Although accurate results have been obtained in detection, recognition, and segmentation, there is less attention
 and research on extracting topological and geometric information from shapes. These geometric representations provide compact and intuitive abstractions for modeling, synthesis, compression, matching, and analysis. Extracting such representations is significantly
 different from segmentation and recognition tasks, as they contain both local and global information about the shape.</span></p>
<br>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">This
 workshop aims to bring together researchers from computer vision, computer graphics, and mathematics to advance the state of the art in topological and geometric shape analysis using deep learning.</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:700;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">***
 Competition: </span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The SkelNetOn Challenge
 is structured around shape understanding in three domains. We provide shape datasets and some complementary resources (e.g, pre/post-processing, sampling, and data augmentation scripts) and the testing platform. The winner of each track will receive a Titan
 RTX GPU, sponsored by NVIDIA.</span></p>
<br>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Submissions
 to the challenge will perform one of the following tasks:</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">ˇ
      </span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:italic;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Shape pixels to skeleton pixels</span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">:
 Extract skeleton pixels from a binary shape image. This is a binary classification problem where image pixels are labeled as on or off the skeleton.</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">ˇ
      </span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:italic;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Shape points to skeleton points:
</span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Extract skeleton points from a shape
 point cloud. This may be treated as a binary classification problem where points are labeled as on or off the skeleton, though other formulations (e.g., transformer networks) are also acceptable.</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">ˇ
      </span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:italic;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Shape pixels to parametric curves:
</span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Extract a parametric representation
 of a network of curves in the skeleton and their radii, modeled as a degree-5 Bézier curve in three dimensions (two spatial coordinates and the radius). This may be thought of as a regression problem.</span></p>
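<p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">As a purely illustrative baseline for the first task (our sketch, not official challenge code), the snippet below builds a toy binary shape and labels its skeleton pixels with scikit-image's classical skeletonize; a challenge entry would replace this morphological baseline with a learned model:</span></p>
<pre style="font-family:monospace;font-size:10pt">import numpy as np
from skimage.morphology import skeletonize

# Toy binary shape: a filled rectangle (True = shape, False = background).
shape_image = np.zeros((64, 64), dtype=bool)
shape_image[16:48, 8:56] = True

# Per-pixel binary labels: True where the pixel lies on the skeleton.
skeleton = skeletonize(shape_image)
print(int(skeleton.sum()), "skeleton pixels out of", int(shape_image.sum()))
</pre>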
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:700;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">***
 Call for papers</span><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">: We will have an
 open submission format where i) participants in the competition will be required to submit a paper, or ii) researchers can share their novel unpublished research in deep learning for geometric shape understanding. The top submissions in each category will
 be invited to give presentations during the workshop and will be published in the workshop proceedings.</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Although
 we encourage all submissions to benchmark their results on the evaluation platform, there are other relevant research areas that our datasets do not address. For those areas, the scope of the submissions may include but is not limited to the following general
 topics:  </span></p>
<ul style="margin-top:0pt;margin-bottom:0pt">
<li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Boundary
 extraction from 2D/3D shapes</span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Geometric
 deep learning on 3D and higher dimensions </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Generative
 methods for parametric representations </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Novel
 shape descriptors and embeddings for geometric deep learning </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Deep
 learning on non-Euclidean geometries </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Transformation
 invariant shape abstractions </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Shape
 abstraction in different domains </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Synthetic
 data generation for data augmentation in geometric deep learning </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Comparison
 of shape representations for efficient deep learning </span></p>
</li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(98,98,98);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Applications
 of geometric deep learning in different domains</span></p>
</li></ul>
<br>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The
 CMT site for paper submissions is </span><a href="https://cmt3.research.microsoft.com/SKELNETON2019/" style="text-decoration:none" target="_blank" data-saferedirecturl="https://www.google.com/url?q=https://cmt3.research.microsoft.com/SKELNETON2019/&source=gmail&ust=1550861774889000&usg=AFQjCNFeGLPRhGaTVYbVuTYnnpSmFbbOyw"><span style="font-size:11pt;font-family:Arial;color:rgb(17,85,204);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:underline;vertical-align:baseline;white-space:pre-wrap">https://cmt3.research.<wbr>microsoft.com/SKELNETON2019/</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">.
 Each submitted paper must be no longer than 4 pages excluding references. Please refer to the CVPR author submission guidelines for instructions at
</span><a href="http://cvpr2019.thecvf.com/submission/main_conference/author_guidelines" style="text-decoration:none" target="_blank" data-saferedirecturl="https://www.google.com/url?q=http://cvpr2019.thecvf.com/submission/main_conference/author_guidelines&source=gmail&ust=1550861774889000&usg=AFQjCNGIxoAICrOUoXsuKQnziHz6D-zm6g"><span style="font-size:11pt;font-family:Arial;color:rgb(17,85,204);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:underline;vertical-align:baseline;white-space:pre-wrap">http://cvpr2019.thecvf.com/<wbr>submission/main_conference/<wbr>author_guidelines</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">.
 The review process will be double blind but the papers will be linked to any associated challenge submissions.
</span><span style="font-size:11pt;font-family:Arial;color:rgb(34,34,34);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Selected papers will be published
 in IEEE CVPRW proceedings, visible in IEEE Xplore and on the CVF Website.</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">***
 Important Dates:</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Feb
 15: Call for Challenge/Call for Papers</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Mar
 25: Submissions close</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Apr
 5: Notification to authors</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Apr
 10: Camera-ready paper</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Jun
 17: Workshop</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"> </span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(42,42,42);background-color:transparent;font-weight:700;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">***
 Organizing Committee and Contact:</span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Ilke
 Demir, DeepScale, <a href="mailto:idemir@purdue.edu" target="_blank">idemir@purdue.edu</a></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Kathryn
 Leonard, Occidental College, <a href="mailto:kleonardci@gmail.com" target="_blank">
kleonardci@gmail.com</a></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Géraldine
 Morin, Univ. of Toulouse, <a href="mailto:morin@n7.fr" target="_blank">morin@n7.fr</a></span></p>
<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Camila
 Hahn, Bergische Universitat Wuppertal, <a href="mailto:chahn@uni-wuppertal.de" target="_blank">
chahn@uni-wuppertal.de</a></span></p>
</body>
</html>