<div dir="ltr"><p class="MsoNormal" style="text-align:left;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><b><span style="font-size:14pt;font-family:Calibri,sans-serif">Postdoctoral Fellowship Opportunities in Vision Science at UC
Berkeley</span></b></p>
<p class="MsoNormal" style="text-align:center;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif;color:black"></span><span style="font-family:Calibri,sans-serif"></span></p>
<p class="MsoNormal" style="text-align:left;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><b><i style="font-size:12pt;text-align:center"><span style="font-family:Calibri,sans-serif">Interested in
high-resolution imaging, eye tracking and devising/applying innovative new
approaches for objective and subjective measures of retinal structure and
function?</span></i></b><i style="font-size:12pt;text-align:center"><span style="font-family:Calibri,sans-serif"><b> </b></span></i></p>
<p class="MsoNormal" style="text-align:left;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><i><span style="font-family:Calibri,sans-serif"><b>Curious about human color and
spatial vision?</b></span></i><i style="font-size:12pt;text-align:center"><span style="font-family:Calibri,sans-serif"> </span></i></p>
<p class="MsoNormal" style="text-align:left;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif">If yes, then consider joining our team at </span><span style="color:black"><a href="https://www.berkeley.edu/" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">UC Berkeley</span></a></span><span style="font-family:Calibri,sans-serif">. We have immediate
openings for one or more postdocs. Examples of two specific research areas are
listed below.</span></p><p class="MsoNormal" style="text-align:left;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif"><br></span></p>
<p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><b><i><span style="font-family:Calibri,sans-serif">Optoretinography (ORG):</span></i></b><span style="font-family:Calibri,sans-serif"> This is an
emerging field in which all-optical methods are used to make non-invasive,
objective measures of retinal function. In Austin Roorda’s lab we have a </span><span style="color:black"><a href="https://opg.optica.org/boe/fulltext.cfm?uri=boe-13-11-5909&id=510143" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">novel approach</span></a></span><span style="font-family:Calibri,sans-serif"> for ORG whereby we
image and track the retinal with an Adaptive Optics Scanning Light Ophthalmoscope
(AOSLO), then use the eye tracking in real time to actively guide and steer an
OCT-based ORG probe on a targeted retinal region. The system is technically complex
but has the advantage that incessant eye motion is corrected from the start,
relieving the need for extensive post-processing of the data, and enabling
faster and more sensitive ORG measures from smaller retinal regions. Currently
we are using the system to classify the three types of cone, but our ultimate
goal is to record from inner retina, with the potential to directly measure
photoreceptor-ganglion cell connections in a living human retina. Postdocs on
this project will need basic knowledge of OCT and experience designing and
building advanced ophthalmic instrumentation.</span></p><p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif;font-size:12pt;text-align:center"> </span></p>
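
To give a flavor of the real-time tracking that drives the probe steering, here is a minimal, hypothetical Python sketch (not our actual codebase; the function names and pixel-to-micron scale are invented for illustration). It estimates frame-to-frame retinal motion by FFT cross-correlation of AOSLO frames and converts that shift into a steering offset for the probe:

```python
import numpy as np

def estimate_shift(reference: np.ndarray, current: np.ndarray) -> tuple[float, float]:
    """Return the (dy, dx) shift of `current` relative to `reference`, in pixels."""
    f_ref = np.fft.fft2(reference)
    f_cur = np.fft.fft2(current)
    # Circular cross-correlation via the Fourier domain; the peak gives the translation.
    xcorr = np.fft.ifft2(np.conj(f_ref) * f_cur).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map wrapped peak indices into signed shifts.
    dy, dx = (p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
    return float(dy), float(dx)

def probe_offset_um(dy_px: float, dx_px: float, um_per_px: float) -> tuple[float, float]:
    """Convert a pixel shift into a steering offset in microns (assumed linear scale)."""
    return dy_px * um_per_px, dx_px * um_per_px

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    cur = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulate a small eye movement
    dy, dx = estimate_shift(ref, cur)
    print(dy, dx)                                   # ~ 3.0, -5.0
    print(probe_offset_um(dy, dx, um_per_px=0.5))   # offset to re-center the ORG probe
```

In the real instrument this kind of motion estimate is made continuously, so the probe stays locked to the targeted retinal region while the eye moves.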
<p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><b><i><span style="font-family:Calibri,sans-serif">Adaptive Optics Visual Psychophysics:</span></i></b><span style="font-family:Calibri,sans-serif"> This is a
collaborative, DoD-funded project between </span><span style="color:black"><a href="https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">Ren Ng</span></a></span><span style="font-family:Calibri,sans-serif"> (EECS), </span><span style="color:black"><a href="http://roorda.vision.berkeley.edu/" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">Austin
Roorda</span></a></span><span style="font-family:Calibri,sans-serif"> (Vision Science), </span><span style="color:black"><a href="https://optometry.berkeley.edu/people/william-tuten-phd/" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">Will Tuten</span></a></span><span style="font-family:Calibri,sans-serif"> (Vision Science) and
others to develop and use adaptive optics scanning light displays (AOSLD) to
study properties of human spatial and color vision on a cellular scale. Our
AOSLD effectively bypasses the first stages of vision (image formation, eye
movements) and is capable of tracking and delivering microdoses of light to </span><span style="color:black"><a href="https://www.science.org/doi/10.1126/sciadv.1600797" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">individual cones</span></a></span><span style="font-family:Calibri,sans-serif"> and, more recently, up to
thousands of cones in a living human eye at video rates. This platform offers
unique and unprecedented ways to study human spatial and color vision with the
possibility to extend color experience beyond the traditional human gamut. With
a multidisciplinary collaboration between multiple labs at Berkeley and our
collaborating institutions we hope to expand the capabilities of these
displays. This challenge requires major efforts in optical design and
engineering (multifocal, multiwavelength adaptive optics) and computation
(real-time tracking of lateral and torsional eye movements, cone-by-cone video
rendering). Postdocs on this project will need experience in optical design and
system construction, ideally with experience in adaptive optics.</span></p>
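
To illustrate the kind of computation behind cone-by-cone rendering, here is another hypothetical sketch (not the AOSLD software; the geometry, names, and numbers are assumptions for illustration). It remaps a set of targeted cone coordinates through a measured lateral shift plus a torsional rotation so that each delivered microdose of light stays registered to its cone in the current frame:

```python
import numpy as np

def remap_cones(cones_xy: np.ndarray, dx: float, dy: float,
                theta_deg: float, center_xy: tuple[float, float]) -> np.ndarray:
    """Rigidly transform N x 2 cone positions: rotate by theta about center_xy, then translate by (dx, dy)."""
    t = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    centered = np.asarray(cones_xy, dtype=float) - np.asarray(center_xy, dtype=float)
    return centered @ rot.T + np.asarray(center_xy, dtype=float) + np.array([dx, dy])

if __name__ == "__main__":
    cones = np.array([[100.0, 100.0], [102.0, 98.0], [97.5, 103.0]])  # example cone centers (pixels)
    # Suppose the tracker reports 1.5 px of horizontal drift, 0.5 px of vertical drift,
    # and 0.2 degrees of torsion about the center of a 512 x 512 raster.
    print(remap_cones(cones, dx=1.5, dy=0.5, theta_deg=0.2, center_xy=(256.0, 256.0)))
```

Doing this for thousands of cones at video rates, together with the optics that make the cones resolvable in the first place, is where the engineering and computational challenges lie.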
<p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif"> </span><span style="font-family:Calibri,sans-serif;font-size:12pt"> </span></p>
<p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif">If you are interested in joining our team, please send your CV
and whatever other materials you feel appropriate to </span><span style="color:black"><a href="mailto:aroorda@berkeley.edu" target="_blank" style="color:rgb(5,99,193)"><span style="font-family:Calibri,sans-serif">aroorda@berkeley.edu</span></a></span><span style="font-family:Calibri,sans-serif">.</span></p>
<p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif"> </span></p>
<p class="MsoNormal" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0cm;font-size:12pt;font-family:Arial,sans-serif"><span style="font-family:Calibri,sans-serif">P.S. Don't hesitate to ask us about our track record of alumni
from our labs landing excellent jobs in academia and industry.</span><span style="font-family:Calibri,sans-serif"></span></p><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><font color="#888888">_______________________________</font><br><font color="#000000">Austin Roorda, PhD, <br>
Professor of Optometry and Vision Science</font></div><div dir="ltr"><font color="#000000">Herbert Wertheim School of Optometry and Vision Science<br>UC Berkeley School of Optometry<br>Berkeley, CA 94720-2020<br>labpage: <a href="https://vision.berkeley.edu/roordalab" target="_blank">https://vision.berkeley.edu/roordalab</a><br>
VS graduate program: <a href="https://vision.berkeley.edu" target="_blank">https://vision.berkeley.edu</a></font></div><div><font color="#000000">Center for Innovation in Vision and Optics <a href="http://goog_634377911" target="_blank">https://</a><a href="https://civo.berkeley.edu" target="_blank">civo.berkeley.edu </a></font></div><div><span style="color:rgb(136,136,136)">_________________________</span></div></div></div></div></div>