[visionlist] [Jobs] Fully funded PhD position on multimodal perception for UAV localization at ICB UMR CNRS laboratory (France) and JRL UMI CNRS/AIST (Japan) -- DL 24th of May 2024
Carlos Mateo Agullo
carlos-manuel.mateo-agullo at u-bourgogne.fr
Fri Apr 26 10:31:21 -04 2024
PhD position: Dynamic robot localization through event-based vision and
3D point cloud blending
#Location: 18 months in Tsukuba, Japan, and 18 months in Dijon, France
#Host institutions: ICB UMR CNRS laboratory (Dijon, France) and JRL UMI
CNRS/AIST (Tsukuba, Japan)
#Starting Date: October 1st, 2024
#Funding Duration: 36 months (3 years)
#Supervisors: Dr. HDR Guillaume Caron, Dr. Carlos Mateo, and Prof.
Cédric Demonceaux
#Application Deadline: May 24th, 2024
#Context:
The VAI team of ICB laboratory at the UB in Dijon, France, is a partner
of the ANR PRCI EVELOC project. EVELOC focuses on providing new 3D
event-based visual localization methods in the field of robotics.
Event-based cameras are a novel bio-inspired type of sensor with low
latency and a wide dynamic range: each pixel fires asynchronously when
it detects a change in light intensity, much like a biological retina.
One of VAI's most recent works in this area is
dedicated to object detection for autonomous vehicles [1].
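To make the sensor model concrete: a pixel emits an event when its log
intensity has changed by more than a contrast threshold since that pixel
last fired. Below is a minimal Python sketch of this standard
event-generation model, quantized to frame times for simplicity; the
threshold value and the frame-based input are illustrative assumptions,
not details of the sensors used in EVELOC.

    import numpy as np

    def events_from_frame(frame, last_log, c=0.2):
        # Log intensity of the incoming frame (epsilon avoids log(0)).
        log_i = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_i - last_log
        on = np.argwhere(diff >= c)    # brightness rose by >= c: ON (+1) events
        off = np.argwhere(diff <= -c)  # brightness fell by >= c: OFF (-1) events
        fired = np.vstack([on, off])
        # Pixels that fired reset their reference log intensity.
        last_log[fired[:, 0], fired[:, 1]] = log_i[fired[:, 0], fired[:, 1]]
        events = ([(int(x), int(y), +1) for y, x in on]
                  + [(int(x), int(y), -1) for y, x in off])
        return events, last_log

Initializing last_log to the log intensity of the first frame and
calling this per frame yields a stream of (x, y, polarity) events; a
real sensor produces them asynchronously with microsecond timestamps.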
The Joint Robotics Laboratory (JRL) at AIST in Tsukuba, Japan, is the
other partner of EVELOC. JRL is focused on 3D point cloud-based object
detection for robots. In particular, JRL works on object pose tracking
in dynamic conditions by directly aligning dense 3D point clouds
reconstructed from images of intensity variations calculated from events
[2]. Developed methods are aimed at manipulation tasks using humanoid
robots [3].
This thesis will leverage large-scale 3D environment point clouds [4] to
propose new efficient and robust algorithms for directly aligning events
captured by a moving camera with these 3D point clouds. It will take place
over 18 months in Japan, funded by AIST, then 18 months in France,
funded by ANR.
The student will implement and evaluate their research work first on
event-based cameras (Prophesee Gen 3 and 4) mounted on quadruped robots
(Unitree AlienGo) and bipedal robots (HRP-5P) provided by the
supervising team. The results obtained will be submitted for publication
in top-tier venues (T-RO, IJCV, ICRA, IROS, ICCV, etc.).
#Objective:
The goal is to localize a drone equipped with an event camera in an
environment described by a 3D point cloud. This topic is part of the ANR
PRCI EVELOC project, which focuses on the visual localization of robots
equipped with event cameras. It also contributes to strengthening the
collaboration between the ICB UMR CNRS laboratory (France) and the JRL
UMI CNRS/AIST (Japan).
#Research:
We are looking for one highly motivated PhD student to study multimodal
robot localization in highly dynamic scenarios.
The PhD student will focus on three open problems:
1. event-based camera calibration;
2. alignment of 3D point clouds with event-based images (a toy sketch
follows this list);
3. drone localization in highly dynamic scenarios.
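To make open problem 2 concrete, here is a minimal Python sketch of one
way to relate the two modalities: project the 3D point cloud into the
camera under a candidate pose (R, t) and score the overlap with a binary
image of accumulated events. The pinhole intrinsics K and the crude
overlap score are illustrative assumptions, not the project's actual
method.

    import numpy as np

    def project_points(points_w, R, t, K, shape):
        # World -> camera: p_c = R @ p_w + t, then pinhole projection with K.
        p_c = points_w @ R.T + t
        p_c = p_c[p_c[:, 2] > 0.1]               # keep points in front of the camera
        uv = p_c @ K.T
        uv = uv[:, :2] / uv[:, 2:3]              # perspective division
        h, w = shape
        ok = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < h))  # keep points inside the image
        return uv[ok].astype(int)

    def alignment_score(points_w, R, t, K, event_img):
        # Fraction of projected map points landing on pixels that saw events.
        uv = project_points(points_w, R, t, K, event_img.shape)
        if len(uv) == 0:
            return 0.0
        return float(event_img[uv[:, 1], uv[:, 0]].mean())

Maximizing such a score over the pose (R, t), in practice with robust
gradient-based optimization rather than this toy overlap, is the essence
of localizing the camera within the point cloud.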
The recruited person will have access to all the necessary equipment to
implement their scientific contributions (computer, computing center,
cameras, etc.). They will start the thesis in Japan and, after 18
months, continue it in France. The thesis is funded by the ANR EVELOC project
(50%) and Japanese funding (50%).
# References
[1] Z. Zhou, Z. Wu, R. Boutteau, F. Yang, C. Demonceaux and D. Ginhac.
RGB-Event Fusion for Moving Object Detection in Autonomous Driving. IEEE
International Conference on Robotics and Automation, 2023, London,
United Kingdom, pp. 7808-7815.
[2] Y. Kang, G. Caron, et al. Direct 3D model-based object tracking
with event camera by motion interpolation. IEEE Int. Conference on
Robotics and Automation, May 2024, Yokohama, Japan.
[3] K. Chappellet, M. Murooka, G. Caron, F. Kanehiro, A. Kheddar.
Humanoid Loco-Manipulations using Combined Fast Dense 3D Tracking and
SLAM with Wide-Angle Depth-Images. IEEE Transactions on Automation
Science and Engineering, 2023.
[4] I. Ben Salah, S. Kramm, C. Demonceaux, P. Vasseur. Summarizing large
scale 3D mesh for urban navigation. Robotics and Autonomous Systems, vol.
152, art. num. 104037, 2022.
#The ideal candidate must have:
1/ a master's degree or equivalent in computer science or another
relevant field,
2/ an excellent academic record,
3/ strong experience in robotics and/or computer vision,
4/ excellent skills in mathematics and coding (C/C++, Matlab, ROS, Python),
5/ excellent written and oral communication skills in English,
6/ enthusiasm for research, teamwork spirit, and the ability to solve
problems independently.
#Application:
Applicants must submit:
1/ a one-page cover letter,
2/ a curriculum vitae with a publication list and the contact
information of two references,
3/ a copy of academic transcripts (bachelor/master grades),
4/ availability (the earliest feasible starting date).
Applicants must be prepared to provide two reference letters upon request.
If your application fits the position well, you will be contacted
within two weeks of receipt.
Applications should be sent, *in a single PDF document*, with the email
subject [PhD application] to:
cedric.demonceaux at u-bourgogne.fr; guillaume.caron at u-picardie.fr;
carlos-manuel.mateo-agullo at u-bourgogne.fr