[visionlist] Call for papers: First Large Scale Holistic Video Understanding Workshop @ ICCV, 2019
Vivek Sharma
vvsharma at mit.edu
Wed Jun 26 08:49:08 -04 2019
First Large Scale Holistic Video Understanding Workshop @ICCV’19
https://holistic-video-understanding.github.io/workshops/iccv2019.html
Date: October 27, 2019
PAPER SUBMISSION IS NOW OPEN!
PAPER and ABSTRACT SUBMISSION DEADLINE: August 1, 2019
ACCEPTANCE NOTIFICATION: August 15, 2019
Please submit papers via CMT: https://cmt3.research.microsoft.com/HVU2019/
WORKSHOP REGISTRATION: In conjunction with ICCV’19
*Best Paper and Best Poster Awards will be granted.
OVERVIEW:
In recent years, computer systems have made tremendous progress in classifying video clips from the Internet and in analyzing human actions in videos. Much of the work in video recognition focuses on specific understanding tasks, such as action recognition or scene understanding. Despite great achievements on these individual tasks, holistic video understanding has not received enough attention as a problem in its own right. Current systems are experts in specific sub-fields of the general video understanding problem; real-world applications, such as analyzing multiple concepts of a video for search engines and media monitoring systems, or describing the surrounding environment of a humanoid robot, require a combination of current state-of-the-art methods.

Therefore, this workshop introduces holistic video understanding as a new challenge for the video understanding community. The challenge focuses on the recognition of scenes, objects, actions, attributes, and events in real-world, user-generated videos. To support these tasks, we also introduce a new dataset, the Holistic Video Understanding (HVU) dataset, which is organized hierarchically in a semantic taxonomy for holistic video understanding. Almost all existing real-world video datasets target human action or sports recognition, so the new dataset can help the vision community and attract more attention, and more interesting solutions, to holistic video understanding. The workshop is tailored to bringing together ideas around multi-label and multi-task recognition of different semantic concepts in real-world videos, and research efforts can be evaluated on the new dataset.

HVU Dataset: https://github.com/holistic-video-understanding/Mini-HVU
Topics:
* Large scale video understanding
* Multi-Modal learning from videos
* Multi-concept recognition from videos
* Multi-task deep neural networks for videos
* Learning holistic representation from videos
* Weakly supervised learning from web videos
* Object, scene and event recognition from videos
* Unsupervised video visual representation learning
* Unsupervised and self-supervised learning with videos
INVITED SPEAKERS:
* Rahul Sukthankar (Google AI, CMU)
* Kristen Grauman (U. Texas at Austin, Facebook AI) - TBC
* Carl Vondrick (Columbia University)
* Juan Carlos Niebles (Stanford University) - TBC
* Manohar Paluri (Facebook AI)
SPONSORS:
* Facebook AI Research
* Sensifai
For questions about the HVU workshop, please contact sharma.vivek at live.in. Also, follow HVU on Twitter for the latest news: @LSHVU
Organizers:
Vivek Sharma, KIT, MIT
Mohsen Fayyaz, University of Bonn
Ali Diba, KU Leuven
Manohar Paluri, Facebook AI
Juergen Gall, University of Bonn
Rainer Stiefelhagen, KIT
Luc Van Gool, ETH Zurich & KU Leuven
best, Vivek
--
M.Sc. Vivek Sharma,
Karlsruhe Institute of Technology (KIT), Germany
Massachusetts Institute of Technology (MIT), USA
Web: http://media.mit.edu/~vvsharma