
Context Model for 3D Scene Understanding

Date: 2017-01-20

  Title: Context Model for 3D Scene Understanding

  Speaker: Yinda Zhang, Department of Computer Science, Princeton University

  Time: 10:00 am, Sunday, January 22, 2017

  Venue: The First Conference Room, Level 4, Building 5, Institute of Software, Chinese Academy of Sciences.

  

  Abstract:

While deep neural networks have led to human-level performance on computer vision tasks, they have yet to demonstrate similar gains for holistic scene understanding. In particular, 3D context has been shown to be an extremely important cue for scene understanding, yet very little research has been done on integrating context information with deep models. This work presents an approach to embed 3D context into the topology of a neural network trained to perform holistic scene understanding. Given a depth image depicting a 3D scene, our network aligns the observed scene with a predefined 3D scene template, and then reasons about the existence and location of each object within the scene template. In doing so, our model recognizes multiple objects in a single forward pass of a 3D convolutional neural network, capturing both global scene and local object information simultaneously. To create training data for this 3D network, we generate partly hallucinated depth images, rendered by replacing real objects with CAD models of the same category drawn from a repository. Extensive experiments demonstrate the effectiveness of our algorithm compared to state-of-the-art methods.
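The announcement does not include code, but the first step the abstract describes, turning a depth image into a 3D representation that a 3D convolutional network can consume, can be sketched in a few lines. The following is a minimal NumPy illustration of back-projecting a depth map into a binary voxel occupancy grid; the function name, camera intrinsics, and grid parameters are illustrative assumptions, not the speaker's actual pipeline.

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, grid_size=32, extent=4.0):
    """Back-project a depth map (metres) into a binary voxel grid.

    Hypothetical sketch: `extent` is the side length (m) of a cubic
    volume placed in front of the camera; fx, fy, cx, cy are pinhole
    camera intrinsics.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection of every pixel into camera coordinates.
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]  # drop invalid (zero-depth) readings

    # Map points into a cubic grid whose near face sits at the camera.
    origin = np.array([-extent / 2, -extent / 2, 0.0])
    voxel_size = extent / grid_size
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < grid_size), axis=1)

    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    gi = idx[inside]
    grid[gi[:, 0], gi[:, 1], gi[:, 2]] = 1.0  # mark occupied voxels
    return grid
```

In the approach described above, a volume like this (in practice a richer encoding such as a TSDF) would be fed through 3D convolutions, with per-slot heads of the aligned scene template predicting each object's existence and location in one forward pass.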

  Short Bio: 

Yinda Zhang is a third-year PhD student at Princeton University, advised by Professor Thomas Funkhouser and Professor Jianxiong Xiao. Before that, he received his Bachelor's degree from the Department of Automation, Tsinghua University, and his Master's degree from the Department of Electrical and Computer Engineering, National University of Singapore, under the supervision of Prof. Ping Tan and Prof. Shuicheng Yan. He is currently working on 3D context models, 3D deep learning, and scene understanding.

  

  All are welcome!