Joey (Chengcheng) Yu

Ph.D. Candidate
Vision, Cognition, Learning and Autonomy
Department of Statistics
University of California, Los Angeles

About Me

I am currently a fourth-year Ph.D. student in the Department of Statistics at UCLA. I work in the Center for Vision, Cognition, Learning and Autonomy (VCLA), advised by Prof. Song-Chun Zhu.

 

Office: 9401 Boelter Hall, UCLA

Email: chengchengyu at ucla.edu

Research Interests

My research interests include pattern recognition and computer vision, especially camera calibration and 3D scene reconstruction. My current research focuses on 3D scene reconstruction from a single RGB image.

Current Research Projects

 

Commonsense Reasoning for Predicting What and Where in a Single RGB Image

Input: a single RGB image.

Output:

- Camera parameters.

- The optimal 2D parsing result.

- 3D measurements.

- The commonsense knowledge used to understand the given image.

Specifically, the project pursues the following objectives:

- To solve for the optimal 2D parsing result.

- To recover the 3D scene model and measurements for semantic objects/parts.

- And finally, to predict the commonsense knowledge used to understand the given image.
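As a small illustration of the camera-parameter step, the focal length can be estimated from two vanishing points of orthogonal scene directions, a standard single-view calibration result. The sketch below assumes zero skew, square pixels, and a known principal point; it is not the project's actual estimator.

```python
import math

def focal_from_vanishing_points(vp1, vp2, principal_point=(0.0, 0.0)):
    """Estimate the focal length (in pixels) from the vanishing points of
    two orthogonal scene directions.  With zero skew, square pixels, and
    principal point c, orthogonality gives (vp1 - c).(vp2 - c) + f^2 = 0,
    so f = sqrt(-(vp1 - c).(vp2 - c))."""
    x1, y1 = vp1[0] - principal_point[0], vp1[1] - principal_point[1]
    x2, y2 = vp2[0] - principal_point[0], vp2[1] - principal_point[1]
    dot = x1 * x2 + y1 * y2
    if dot >= 0:
        raise ValueError("vanishing points are inconsistent with orthogonal directions")
    return math.sqrt(-dot)
```

For example, a camera with focal length 500 px viewing the directions (1, 0, 1) and (-1, 0, 1) produces vanishing points (500, 0) and (-500, 0), from which the function recovers f = 500.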

 

Single-View 3D Scene Reconstruction

A: Input image

B: Novel view of the 3D lines

C: Novel view of the input image

D: The recovered depth map
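Given the recovered depth map and the camera intrinsics, novel views like those in B and C can be rendered by lifting each pixel into 3D. A minimal back-projection sketch under a pinhole model (the intrinsics f, cx, cy are assumed, not taken from this project):

```python
import numpy as np

def backproject_depth(depth, f, cx, cy):
    """Lift a depth map (H x W, metric depth along the optical axis) to a
    3D point cloud under a pinhole camera with focal length f and
    principal point (cx, cy).  Returns an (H*W, 3) array of points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / f   # pixel column -> camera X
    y = (v - cy) * depth / f   # pixel row    -> camera Y
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The resulting point cloud can then be reprojected from any virtual camera pose to synthesize a novel view.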

 

Publications

Detecting Potential Falling Objects by Inferring Human Action and Natural Disturbance

 

Detecting potential dangers in the environment is a fundamental ability of living beings. To endow a robot with this ability, this paper presents an algorithm for detecting potential falling objects, i.e., physically unsafe objects, given 3D point clouds captured by range sensors. We formulate the falling risk as a probability, or a potential, that an object may fall given human action or certain natural disturbances, such as earthquakes and wind. Our approach differs from the traditional object detection paradigm: it first infers hidden and situated causes (disturbances) of the scene, and then applies intuitive physical mechanics to predict possible effects (falls) as consequences of those causes. In particular, we infer a disturbance field by making use of motion capture data as a rich source of common human pose movements. We show that, by applying various disturbance fields, our model achieves a human-level recognition rate for potential falling objects on a dataset of challenging and realistic indoor scenes.
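The physical intuition behind the falling potential can be illustrated with a toy model (an illustrative simplification, not the paper's actual formulation): for a uniform box, the energy barrier to toppling about a base edge is the work needed to raise its centre of mass over that edge, and a Boltzmann-style term maps the barrier and a disturbance energy to a risk score.

```python
import math

def topple_energy_barrier(mass, width, height, g=9.8):
    """Energy (J) needed to tip a uniform box of the given footprint width
    and height over one base edge: the centre of mass must rise from
    height/2 to the half-diagonal of the box's cross-section."""
    half_diag = math.hypot(width / 2.0, height / 2.0)
    return mass * g * (half_diag - height / 2.0)

def falling_risk(mass, width, height, disturbance_energy):
    """Map the barrier to a (0, 1] risk score with a Boltzmann-style soft
    threshold; the disturbance energy plays the role of temperature."""
    return math.exp(-topple_energy_barrier(mass, width, height) / disturbance_energy)
```

Under this toy model a tall, narrow box has a much smaller barrier than a short, wide one of the same mass, which matches the everyday intuition that slender objects are the first to fall.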


Bo Zheng*, Yibiao Zhao*, Joey C. Yu, Katsushi Ikeuchi and Song-Chun Zhu

[Paper]

 

 

 


Scene Understanding by Reasoning Geometry and Physics

 

In this paper, we present an approach for scene understanding by reasoning about the physical stability of objects from point clouds. We utilize a simple observation that, by human design, objects in static scenes should be stable with respect to gravity. This assumption is applicable to all scene categories and poses useful constraints on the plausible interpretations (parses) in scene understanding. Our method consists of two major steps: 1) geometric reasoning: recovering solid 3D volumetric primitives from a defective point cloud; and 2) physical reasoning: grouping the unstable primitives into physically stable objects by optimizing the stability and the scene prior. We propose a novel disconnectivity graph (DG) to represent the energy landscape and use a Swendsen-Wang Cut (MCMC) method for optimization. In experiments, we demonstrate that the algorithm achieves substantially better performance for i) object segmentation, ii) 3D volumetric recovery of the scene, and iii) scene parsing, in comparison to state-of-the-art methods on both a public dataset and our own new dataset.
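The gravity-stability assumption at the heart of the method can be sketched with a simple test (again an illustrative simplification, not the paper's energy-landscape formulation): a rigid object resting on a plane is statically stable iff the gravity projection of its centre of mass falls inside its support polygon.

```python
def is_statically_stable(com_xy, support_polygon):
    """Check whether the gravity projection of an object's centre of mass
    lies inside its convex support polygon, given as a list of (x, y)
    vertices in counter-clockwise order.  The point is inside iff it lies
    on the inner (left) side of every edge."""
    x, y = com_xy
    n = len(support_polygon)
    for i in range(n):
        x1, y1 = support_polygon[i]
        x2, y2 = support_polygon[(i + 1) % n]
        # 2D cross product; negative means the point is outside this edge
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True
```

For example, with a unit-square base, a centre of mass projecting to (0.5, 0.5) is stable, while one projecting to (1.5, 0.5) is not; grouping primitives so that such tests pass is the spirit of the physical-reasoning step.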

 

Bo Zheng*, Yibiao Zhao*, Joey C. Yu, Katsushi Ikeuchi and Song-Chun Zhu

[Paper]

Patent

Patent application in process.
