[visionlist] [meetings] Call for Participants - ACRV Robotic Vision Scene Understanding Challenge

David Hall d20.hall at qut.edu.au
Thu Sep 3 02:27:15 -04 2020


Call for Participants - ACRV Robotic Vision Scene Understanding Challenge
==================================================

Dear Researchers,

This is a call for participants for the latest ACRV robotic vision challenge on scene understanding.

EvalAI Challenge Link: https://evalai.cloudcv.org/web/challenges/challenge-page/625/overview

Challenge Overview Webpage: https://nikosuenderhauf.github.io/roboticvisionchallenges/scene-understanding

Deadline: The deadline for the challenge has been extended to October 2nd.

Prizes: one Titan RTX and up to five Jetson Nano GPUs for the two winning teams, provided by NVIDIA, plus USD $5,000 provided by the ACRV, to be divided amongst high-performing competitors.


Challenge Overview
-----------------------

The Robotic Vision Scene Understanding Challenge evaluates how well a robotic vision system can understand the semantic and geometric aspects of its environment. The challenge consists of two distinct tasks: Object-based Semantic SLAM and Scene Change Detection.

Key features of this challenge include:

  *   BenchBot, a complete software stack for running semantic scene understanding algorithms.
  *   Running algorithms in realistic 3D simulation, and on real robots, with only a few lines of Python code.
  *   Tiered difficulty levels that provide an easy entry point to robotic vision with embodied agents and enable ablation studies.
  *   The BenchBot API, which provides simple interfacing with robots and supports both an OpenAI Gym-style approach and a simple object-oriented Agent approach (see the sketch after this list).
  *   Easy-to-use scripts for running simulated environments, executing code on a simulated robot, evaluating semantic scene understanding results, and automating code execution across multiple environments.
  *   Opportunities for the best teams to execute their code on a real robot in our lab, which uses the same API as the simulated robot.
  *   Use of the NVIDIA Isaac SDK for interfacing with, and simulating, high-fidelity 3D environments.
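
For a flavour of the Gym-style workflow, below is a minimal sketch of a control loop. The class and method names (BenchBot, reset, step, ActionResult) are indicative only, and my_policy is a placeholder for your own action-selection code; consult the BenchBot API documentation for the authoritative interface.

    # Indicative sketch of a Gym-style BenchBot control loop; names are
    # illustrative -- see the BenchBot API docs for the actual interface.
    from benchbot_api import ActionResult, BenchBot

    benchbot = BenchBot()
    observations, action_result = benchbot.reset()

    while action_result == ActionResult.SUCCESS:
        # Pick the next action from the currently available action set,
        # using the latest observations (RGBD images, odometry, etc.).
        # my_policy is a placeholder for your own action-selection code.
        action, action_args = my_policy(observations, benchbot.actions)
        observations, action_result = benchbot.step(action, **action_args)

    # Finally, save or submit the object-based semantic map as described
    # in the challenge documentation.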

Object-based Semantic SLAM: Participants use a robot to traverse the environment, building up an object-based semantic map from the robot’s RGBD sensor observations and odometry measurements.
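
As an indicative example only (the authoritative results format is specified in the challenge documentation), an object-based semantic map can be thought of as a list of objects, each with an estimated 3D bounding volume and semantic class:

    # Illustrative only -- see the challenge documentation for the
    # authoritative results format.
    object_map = [
        {
            "centroid": [1.2, -0.4, 0.7],  # object centre in metres (x, y, z)
            "extent": [0.5, 0.3, 0.4],     # bounding-box size in metres
            "class": "chair",              # estimated semantic class
        },
        # ... one entry per object detected in the environment
    ]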

Scene Change Detection: Participants use a robot to traverse a scene, building up a semantic understanding of it. The robot is then moved to a new start position in the same environment, but under different conditions. Along with a possible change from day to night, the new scene has a number of objects added and/or removed. Participants must produce an object-based semantic map describing the changes between the two scenes.
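
One simple way to think about the change-detection output: match objects between the two maps and flag the unmatched ones. The sketch below is purely illustrative and not the challenge's required method or format; matched is a hypothetical helper, first_map and second_map are the maps from the two traversals, and the distance threshold is made up.

    # Purely illustrative sketch: flag objects as added or removed by
    # nearest-centroid matching between the two traversals. A real
    # system would also check class agreement and tune the threshold.
    import math

    def matched(obj, object_map, threshold=0.5):
        """True if any object in object_map lies within threshold metres."""
        return any(
            math.dist(obj["centroid"], other["centroid"]) <= threshold
            for other in object_map
        )

    removed = [o for o in first_map if not matched(o, second_map)]
    added = [o for o in second_map if not matched(o, first_map)]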

Difficulty Levels: We provide three difficulty levels of increasing complexity and similarity to true active robotic vision systems. At the simplest level, the robot moves to pre-defined poses to collect data and is given ground-truth poses, removing the need for active exploration and localization. The second level requires active exploration and robot control but still provides ground-truth poses, removing the localization requirement. The final level is the same as the second but provides only noisy odometry, requiring the system to perform its own localization.
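
To make the hardest level concrete: with only noisy odometry, the robot's pose must be estimated by composing successive motion increments (dead reckoning), which drifts over time and in practice needs a localization or SLAM back-end to correct. Below is a minimal planar sketch, assuming odometry increments expressed in the robot's current frame.

    # Illustrative dead reckoning for the hardest difficulty level:
    # compose successive noisy odometry increments into a pose estimate.
    import math

    def integrate_odometry(pose, delta):
        """Compose a planar pose (x, y, theta) with an odometry increment
        (dx, dy, dtheta) expressed in the robot's current frame."""
        x, y, theta = pose
        dx, dy, dtheta = delta
        return (
            x + dx * math.cos(theta) - dy * math.sin(theta),
            y + dx * math.sin(theta) + dy * math.cos(theta),
            theta + dtheta,
        )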

Information Videos
-----------------------
https://youtu.be/jQPkV29KFvI
https://youtu.be/LNEvhpWerJQ

Contact Details
------------------
E-mail: contact at roboticvisionchallenge.org
Webpage: https://roboticvisionchallenge.org
Slack: https://tinyurl.com/rvcslack
Twitter: @robVisChallenge