[visionlist] CfP: ICCV 2019 Workshop on Multi-modal Video Analysis and Moments in Time Challenge

Hilde Kuehne kuehne at ibm.com
Fri Jul 12 11:04:21 -04 2019



[Apologies for cross-postings]

*******************************************************
1st CALL FOR PAPERS

Workshop on Multi-modal Video Analysis and Moments in Time Challenge
Nov. 2, 2019 | Seoul, Korea, in conjunction with ICCV 2019

https://sites.google.com/view/multimodalvideo/home

July 31, 2019: Paper submission deadline
August 14, 2019: Notification to authors
August 20, 2019: Camera-ready paper deadline

Video understanding is a very active research area in the computer vision
community. This workshop focuses on modeling, understanding, and leveraging
the multi-modal nature of video. Recent research has amply demonstrated that
in many scenarios multimodal video analysis is much richer than analysis
based on any single modality. At the same time, multimodal analysis poses
challenges not encountered when modeling single modalities (e.g., building
complex models that fuse spatial, temporal, and auditory information). The
workshop will focus on video analysis and understanding related, but not
limited, to the following topics:

- deep network architectures for multimodal learning.

- multimodal unsupervised or weakly supervised learning from video.

- multimodal emotion/affect modeling in video.

- multimodal action/scene recognition in video.

- multimodal video analysis applications, including but not limited to
sports video understanding, entertainment video understanding, and
healthcare.

- multimodal embodied perception for vision (e.g. modeling touch and
video).

- multimodal video understanding datasets and benchmarks.


Papers should be limited to four pages, including figures and tables, in
the ICCV style, and will not be archived in the conference proceedings. We
welcome short versions of full papers accepted at ICCV, as well as
unpublished ideas and concept papers. Please note that, since workshop
papers will not appear in any proceedings and do not count as publications,
they can still be submitted to next year's CVPR.

Organizers:
Dhiraj Joshi, IBM Research AI
Mathew Monfort, MIT CSAIL
Kandan Ramakrishnan, MIT CSAIL
Rogerio Schmidt Feris, IBM Research AI
David Harwath, MIT CSAIL
Dan Gutfreund, IBM Research AI
Carl Vondrick, Columbia University
Bolei Zhou, CUHK
Hang Zhou, MIT CSAIL
Zhicheng Yan, Facebook
Aude Oliva, MIT CSAIL

On behalf of the organizers,
Hilde Kuehne


--
Dr. Hilde Kuehne
MIT-IBM Watson Lab

Website:
http://researcher.watson.ibm.com/researcher/view.php?person=ibm-kuehne
Code & papers: https://hildekuehne.github.io
