[visionlist] Research Assistant Position in Computational Vision at American University in DC

bei.xiao at gmail.com
Wed May 24 20:51:49 -04 2023


The Xiao Computational Perception Lab
<https://sites.google.com/site/beixiao/> in the Department of Computer
Science at American University in DC is seeking a full-time Research
Assistant (RA) / Lab Programmer for an NIH-funded project on the
computational modeling of human material perception.



Job Description

The RA will pursue research projects of their own and provide
support for research carried out in the Xiao Lab. Possible duties include:

   - Building VR/AR experimental interfaces with Unity3D
   - Python coding for behavioral data analysis
   - Collecting data for multi-modal psychophysical experiments
   - Training machine learning models


This is an ideal position for someone interested in gaining research
experience in perception science and computational modeling before applying
to graduate school. The position comes with a salary and full benefits.
The position is initially for one year and extendable up to three years.
The starting date is September 1st, 2023, or soon after.

Position Requirements:

   - The ideal candidate has a bachelor's degree in computer science,
     engineering, neuroscience, cognitive science, or a related field.
   - Strong programming skills in Python and familiarity with NumPy,
     pandas, and scikit-learn are required; experience with PyTorch is
     a plus.
   - Experience with statistical methods (linear models, multivariate
     analysis, etc.).
   - Prior experience with visual psychophysics would be useful.


The Lab and Facility

The Xiao Lab studies both human and computer vision with an emphasis on
material perception and recognition. The lab currently has a few ongoing
research projects:



   - Learning latent representations of human perception of material
     properties using deep learning and psychophysics
   - Material perception in infants and children (collaborating with
     Dr. Laurie Bayet
     <https://www.american.edu/profiles/faculty/bayet.cfm>)
   - Uncertainty estimation in few-shot learning for text classification

The Xiao Lab is located in a state-of-the-art technology building, which is
home to computer science, physics, applied math, and a design-and-build
lab. The lab has high-performance GPU workstations, Phantom haptic devices,
VR headsets, and 3D printers.


Washington, DC has a vibrant computational cognition and computer vision
research community (e.g., NIH, NIST, Johns Hopkins University, Georgetown
University, and the University of Maryland).


How to apply

Please submit your application, including a CV, a cover letter describing
your background, computational skills, experience, and motivation
(preferably in PDF format), and the names of two references who have
agreed to be contacted. Applications are due no later than July 20th,
2023, and should be sent to Prof. Bei Xiao at bxiao at american.edu.

Representative Recent Publications:

1. Liao, C., Sawayama, M., & Xiao, B. (2023). Unsupervised learning reveals
interpretable latent representations for translucency perception. PLOS
Computational Biology, February 8, 2023. PDF:
<https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010878>


2. Liao, C., Sawayama, M., & Xiao, B. (2022). Crystal or Jelly? Effect of
Color on the Perception of Translucent Materials with Photographs of
Real-world Objects. Journal of Vision. PDF:
<https://jov.arvojournals.org/Article.aspx?articleid=2778489>

3. He, J., Zhang, X., Lei, S., Wang, S., Huang, Q., Lu, C.-T., & Xiao, B.
(2022). Semantic Editing On Segmentation Map Via Multi-Expansion Loss.
Neurocomputing, 501, 306-317. PDF: <https://arxiv.org/abs/2010.08128>


-- 
Bei Xiao, PhD
Associate Professor
Computer Science & Center for Behavioral Neuroscience
American University, Washington DC

Homepage: https://sites.google.com/site/beixiao/