[visionlist] PhD scholarship at Lancaster University, UK

jungong han jungonghan77 at gmail.com
Thu May 17 11:23:58 -05 2018

Multi-Modal Object Recognition and Scene Understanding Employing Machine
Learning Techniques

Lancaster University <https://www.findaphd.com/search/phd.aspx?IID=180>

School of Computing and Communications


United Kingdom <https://www.findaphd.com/search/phd.aspx?CID=GB>

*Project Description*

Recent advances in imaging, networking, data processing and storage
technology have resulted in an explosion in the use of multi-modal sensory
data in a variety of fields, including video surveillance, urban
monitoring, cultural heritage area protection and many others. The
integration of data such as audio, text, images and video from multiple
channels can provide complementary information and therefore increase the
accuracy of the overall decision-making process. Currently, seeking an
efficient way to analyse, mine and understand such large-scale, multimodal
and noisy data is a challenging and interesting research topic, where the
core problem is learning representations from the data.

The problem of learning representations from the data has received
considerable attention in machine learning. Deep learning approaches in
particular have achieved close to human accuracy at recognition tasks in
limited domains (e.g. ImageNet image recognition). However, these
approaches usually require vast, high-quality datasets in the training
phase in order to achieve good performance, which makes their use
expensive and limits them to domains where gathering large numbers of
training examples and correct labels is feasible. For real-life computer
vision applications such as video surveillance and ambient assisted
living, such approaches seem impractical. In contrast to deep neural
networks, humans can learn concepts from very few examples and generalize
them effortlessly across domains. Even two-year-olds can learn new words
and generalize them to new situations after seeing only a few examples.
Human abilities such as integrating information acquired through different
modalities, reasoning, planning, and problem solving remain highly
challenging for current artificial intelligence models.

In this PhD work, we will formalize the ideas of representational
geometry/conceptual spaces in a tractable, deep probabilistic framework,
and use it to develop a new type of bio-inspired model capable of several
novel aspects of human-like learning, including 1) learning from few
examples, 2) learning from multiple modalities (visual, spatiotemporal,
and text data), and 3) grounding concept representations in perceptions
and action possibilities. The targeted applications include video
surveillance, robot vision and smart environments for assisted living of
the elderly.

The successful candidate will work in an international research group and
will be jointly supervised by Dr. Jungong Han
(https://sites.google.com/site/jungonghan77/) and Dr. Zhijin Qin.

*Eligibility requirement:*

* Academic excellence of the prospective student, i.e. normally an Honours
degree (1st or 2:1, or equivalent) or a Master's degree with merit (or
equivalent study at postgraduate level).

* We expect experience in computer vision, video analysis, and machine
learning as well as good mathematical and programming skills (either C/C++
* Appropriate IELTS score (overall score of 6.5 with no component below
6.0), if required (evidence required by 1 August).

*Funding Notes*

* A full international fee waiver for 3 years

* An annual tax-free stipend of £15,000

*Deadline for applications:* 30 June 2018
Interview date (if known): to be confirmed
Start Date: 1 Oct 2018

For further details of how to apply, entry requirements and the application
form, see


*Informal Enquiries:*
Applicants are encouraged to contact the supervisors, Dr. Jungong Han (
jungong.han at lancaster.ac.uk) or Dr. Zhijin Qin (Zhijin.qin at lancaster.ac.uk),
before submitting an application.