[visionlist] Postdoctoral position in Audio analysis for Visual Attention modeling
philippe.guillotel at technicolor.com
Fri May 21 16:21:27 GMT 2010
Postdoctoral position in Audio analysis for Visual Attention modeling -
Technicolor Research & Innovation in Rennes, France, offers a Post-Doc
position in the area of Human Perception and Audio Processing. More
specifically, the main topic of this position is the investigation of
audio saliency and, more generally, of any audio cues that could
complement the Technicolor visual attention model [1,2]. The work includes:
- A complete state of the art and a relevant analysis of existing
techniques for audio-visual modeling of visual attention.
- A concrete implementation of a solution within the existing model (C
programming): developing the detection of audio cues as well as their
fusion with the other, independent visual cues.
- The definition and set-up of a complete environment for running visual
attention experiments (i.e. using an eye-tracking apparatus) in the
presence of a distracting sound environment.
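As a rough illustration of the fusion task above, one common approach is a normalized weighted combination of saliency maps. The function below is only a minimal sketch under that assumption; the names, the single weight parameter, and the fusion rule are illustrative and do not describe the actual Technicolor model.

```c
#include <stddef.h>

/* Hypothetical sketch: fuse an audio saliency cue with a visual saliency
 * map by a weighted sum, then renormalize so the fused map stays in
 * [0, 1]. `w_audio` in [0, 1] sets the relative weight of the audio cue. */
static void fuse_saliency(const float *visual, const float *audio,
                          float w_audio, float *out, size_t n)
{
    float max = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        /* Convex combination of the two (already normalized) cues. */
        out[i] = (1.0f - w_audio) * visual[i] + w_audio * audio[i];
        if (out[i] > max)
            max = out[i];
    }
    /* Renormalize to [0, 1] so downstream thresholds remain comparable. */
    if (max > 0.0f)
        for (size_t i = 0; i < n; ++i)
            out[i] /= max;
}
```

In practice the per-cue weights would themselves be learned or validated against eye-tracking data, which is exactly the kind of experiment the position involves.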
The successful candidate must hold a PhD (or be close to completing one)
and have specific knowledge of Computer Science and Audio Processing.
Ideally, the candidate will also have knowledge of Human Perception and a
background in experimental setup and protocol design. Since software
development is part of this job, strong programming experience (C, C++ on
Windows/Linux) is required.
The position is located in Rennes, France (http://www.rennes.fr/).
Applicants should submit a curriculum vitae, a recent list of
publications, a statement of research interests, and samples of research
work. Resumes may be submitted electronically in Word (.doc), Rich Text
(.rtf), or Portable Document Format (.pdf) and sent to
Philippe.guillotel at technicolor.com
[1] O. Le Meur, P. Le Callet, D. Barba, and D. Thoreau, "A coherent
computational approach to model bottom-up visual attention," IEEE Trans.
on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp.
802-817, 2006.
[2] O. Le Meur, P. Le Callet, and D. Barba, "Predicting visual fixations
on video based on low-level visual features," Vision Research, vol. 47,
no. 19, pp. 2483-2498, 2007.