Personal Information

  • Name: Jingyu Shao
  • Address:
    Room 9434, Boelter Hall
    University of California, Los Angeles
    Los Angeles, CA 90095
  • Email: shaojy15@ucla.edu

About me

I am a Master's student in the Department of Statistics at UCLA. I am currently doing computer vision research at the Center for Vision, Cognition, Learning, and Autonomy, advised by Prof. Song-Chun Zhu.

Before coming to UCLA, I obtained my B.Eng. degree from Tsinghua University. I also did a summer internship at TTIC, working with Prof. Greg Shakhnarovich and Prof. Ayan Chakrabarti.

Education

Jan 2016 - Present

University of California, Los Angeles

Master of Science, Department of Statistics

I am a member of the Center for Vision, Cognition, Learning, and Autonomy, advised by Prof. Song-Chun Zhu. My current research lies at the intersection of computer vision and education, where I explore how state-of-the-art 3D sensors and related technology can be used for educational purposes.

Aug 2011 - July 2015

Tsinghua University

Bachelor of Engineering, Department of Automation

I spent four amazing years at Tsinghua University. There, I both studied hard and played hard ;)

Internships

June 2017 - Sept 2017

Google, Mountain View

Ads Revenue Monitoring Based on BayesDLM & TensorFlow

  • Implemented two anomaly detectors for ads revenue monitoring; collaborated with the Google Zurich team (a simplified sketch of the approach follows below)
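
To give a flavor of the approach, here is a minimal sketch of anomaly detection with a Bayesian local-level dynamic linear model: a Kalman filter yields a one-step-ahead predictive distribution for each observation, and points far outside the predictive interval are flagged. The variances, threshold, and data below are illustrative assumptions, not the production BayesDLM setup.

    # Minimal sketch: local-level DLM (random-walk level + observation noise).
    # All parameters and data are illustrative, not the production setup.
    import numpy as np

    def dlm_anomalies(y, obs_var=1.0, level_var=0.1, z=3.0):
        """Flag points outside the one-step-ahead predictive interval."""
        m, C = y[0], 1.0             # posterior mean/variance of the latent level
        flags = [False]
        for t in range(1, len(y)):
            R = C + level_var        # prior variance after the random-walk step
            f, Q = m, R + obs_var    # predictive mean and variance for y[t]
            flags.append(abs(y[t] - f) > z * np.sqrt(Q))
            K = R / Q                # Kalman gain
            m = m + K * (y[t] - f)   # posterior update of the level
            C = (1 - K) * R
        return np.array(flags)

    rng = np.random.default_rng(0)
    revenue = 100 + np.cumsum(rng.normal(0, 0.3, 200)) + rng.normal(0, 1, 200)
    revenue[150] += 12               # injected anomaly
    print(np.where(dlm_anomalies(revenue))[0])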

July 2016 - Sept 2016

iSee, MIT Innovation Center

AR & VR Game Development on Android

  • Developed four sample AR & VR games on Android; mitigated the risk of tracking loss on our AR platform by combining visual SLAM with IMU measurements (see the fusion sketch below)
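
The tracking-loss mitigation can be pictured as a simple complementary filter: dead-reckon with the IMU so the pose estimate survives gaps in visual tracking, and blend the SLAM estimate back in whenever it is available. The yaw-only state, blend weight, and rates below are illustrative simplifications, not the platform's actual fusion code.

    # Minimal sketch: 1-D (yaw-only) complementary filter fusing gyro integration
    # with intermittent visual-SLAM headings. Weights and rates are illustrative.
    import math

    def fuse_heading(gyro_rates, slam_headings, dt=0.01, alpha=0.98):
        """gyro_rates: rad/s per step; slam_headings: heading in rad, or None when tracking is lost."""
        theta, fused = 0.0, []
        for omega, vis in zip(gyro_rates, slam_headings):
            theta += omega * dt                  # dead-reckon with the IMU
            if vis is not None:                  # visual tracking available: correct drift
                theta = alpha * theta + (1 - alpha) * vis
            fused.append(theta)
        return fused

    rates = [math.radians(5)] * 400                                           # constant 5 deg/s turn
    vis = [math.radians(0.05 * i) if i < 200 else None for i in range(400)]   # SLAM lost halfway
    print(round(math.degrees(fuse_heading(rates, vis)[-1]), 1))               # ends near 20 degrees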

July 2015 - Sept 2015

Toyota Technological Institute at Chicago

Monocular Depth Estimation with CNN

  • Trained a neural network on the NYUv2 dataset to recover a single-image depth map using only local features; reduced per-pixel depth-estimation RMSE by 3.28% relative to the state of the art (a toy sketch follows below)
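
A toy, fully-convolutional stand-in for local-feature depth regression is sketched below. It is not the overcomplete-prediction-and-harmonization method from the NIPS paper; the network shape and hyper-parameters are assumptions, and random arrays replace NYUv2 data as a smoke test.

    # Toy sketch: fully-convolutional net regressing a per-pixel depth map from RGB
    # using only local receptive fields. Random data stands in for NYUv2 crops.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 1, padding="same"),    # one depth value per pixel
    ])
    model.compile(optimizer="adam", loss="mse")           # RMSE-style objective

    rgb = np.random.rand(8, 64, 64, 3).astype("float32")     # stand-in images
    depth = np.random.rand(8, 64, 64, 1).astype("float32")   # stand-in depth maps
    model.fit(rgb, depth, epochs=1, verbose=0)
    print(model.predict(rgb[:1], verbose=0).shape)            # (1, 64, 64, 1)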

Publications

NIPS 2016

Depth from a Single Image by Harmonizing Overcomplete Local Network Predictions

Ayan Chakrabarti, Jingyu Shao, and Greg Shakhnarovich

SIGGRAPH Asia Workshop 2016

A Virtual Reality Platform for Dynamic Human-Scene Interaction

Jingyu Shao, Jenny Lin, Xingwen Guo, Chenfanfu Jiang, Yixin Zhu, and Song-Chun Zhu

Projects

March 2017 - June 2017

A Lightweight Code Visualization Tool for Java Programs

University of California, Los Angeles

  • Designed a code visualization tool based on FAMIX meta-models; extracted hierarchies among files, classes, and attributes with custom algorithms; rendered the extracted structures with force-directed graphs and collapsible trees (Python/JS, 1000 LOC; see the sketch below)
  • Evaluated running time and memory usage on 7 Java projects of varying size; running time grows linearly with project size
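
Roughly, the extraction-and-rendering pipeline walks a containment hierarchy (files, classes, attributes) and emits the nodes/links JSON that a D3-style force-directed graph or collapsible tree consumes. The model below is a hand-written stand-in, not real FAMIX output.

    # Rough sketch: flatten a containment hierarchy into nodes/links JSON for a
    # force-directed graph. The input dict is a hypothetical stand-in for FAMIX output.
    import json

    model = {
        "Main.java": {"Main": ["args", "logger"]},
        "User.java": {"User": ["name", "email"], "UserDAO": ["connection"]},
    }

    def to_graph(model):
        nodes, links = [], []
        def add(node_id, kind, parent=None):
            nodes.append({"id": node_id, "kind": kind})
            if parent is not None:
                links.append({"source": parent, "target": node_id})
        for file_name, classes in model.items():
            add(file_name, "file")
            for cls, attrs in classes.items():
                add(cls, "class", parent=file_name)
                for attr in attrs:
                    add(cls + "." + attr, "attribute", parent=cls)
        return {"nodes": nodes, "links": links}

    print(json.dumps(to_graph(model), indent=2))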

March 2017 - June 2017

Gender Classification by Voice and Speech Analysis

University of California, Los Angeles

  • Implemented an RNN model (LSTM) on acoustic measurements and spectrograms (TensorFlow, Python/R, 800 LOC; a stripped-down sketch follows below)
  • Achieved 99.95% accuracy on the CMU ARCTIC dataset (4532 utterances); reduced training time by 63% after applying PCA
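
A stripped-down version of the classifier is sketched below: an LSTM over spectrogram frames followed by a sigmoid output for the binary gender label. Frame counts, feature dimensions, and hyper-parameters are illustrative, and random arrays stand in for the CMU ARCTIC features.

    # Minimal sketch: LSTM over spectrogram frames -> binary gender prediction.
    # Shapes and hyper-parameters are illustrative; random data replaces real features.
    import numpy as np
    import tensorflow as tf

    frames, n_bins = 200, 40                      # assumed spectrogram shape per utterance
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(frames, n_bins)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    x = np.random.rand(32, frames, n_bins).astype("float32")   # stand-in spectrograms
    y = np.random.randint(0, 2, size=(32, 1))                   # stand-in labels
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1], verbose=0))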

Feb 2017 - Present

A Virtual Reality Platform for Robot Autonomy

University of California, Los Angeles

  • Built a VR platform with Unreal Engine, Oculus, Kinect, and Leap Motion for simulating robot experiments (C++, 500 LOC)
  • Rendered real-world scenes provided by our collaborators in Glasgow; enabled liquid simulation in containers

April 2016 - June 2016

Environment-Driven Tree Growth Simulation Based on L-systems

University of California, Los Angeles

  • Generated realistic tree models via string-based L-system rewriting and rendered the output with the Unity engine (see the sketch below)
  • Project website: http://www.stat.ucla.edu/~jingyushao/CS275/ (C#, 600 LOC)
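
At the core of the string-based modeling is a small set of rewrite rules applied repeatedly to an axiom; the resulting bracketed string is then interpreted geometrically (in this project, inside Unity). The rule set below is a standard textbook bracketed L-system, not the project's own.

    # Minimal L-system sketch: rewrite an axiom with production rules.
    # '[' / ']' push/pop turtle state, 'F' draws a segment, '+'/'-' turn;
    # a renderer (e.g. Unity) walks the string and places geometry for each 'F'.
    RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

    def rewrite(axiom, rules, iterations):
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    tree = rewrite("X", RULES, 4)
    print(len(tree), tree.count("["))   # string length and number of branch points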

Oct 2015 - Dec 2015

EXP: Exploring a New Genre for Learning and Assessment Using Hybrid Reality

University of California, Los Angeles

We propose to develop hybrid reality learning and assessment (HRLA) environments to improve the learning process and assessment models in STEM (Science, Technology, Engineering and Math) education.

  • The HRLA interface with four learning modes
  • State-of-the-art computer vision algorithms for analyzing gestural and behavioral data during learning and interactions
  • Tactile glove with built-in force sensors
  • Empirical testing and evaluations in the lab and classroom

Jan 2015 - June 2015

Object Cutout from Multi-View Images Using Level Set of Probabilities

Tsinghua University

A new multi-view image co-segmentation framework based on 3D level sets driven by three independent and complementary terms; it iteratively improves the segmentation masks as well as the geometric and photometric models derived from the original images (a simplified single-view sketch follows the list below).

  • Ability to represent boundaries with arbitrary topology
  • Improved by a 4D light-field image data structure
  • Flexibility in designing the energy function
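
To give a flavor of the level-set machinery, the sketch below evolves a 2-D level-set function with a single region-based speed term so the zero level set moves toward an object boundary. It is a single-view, one-term simplification; the three-term multi-view energy of the actual framework is not reproduced here, and all parameters are assumptions.

    # Simplified sketch: region-driven 2-D level-set evolution (single view, one term).
    # The speed term and parameters are illustrative, not the project's energy.
    import numpy as np

    def evolve(image, phi, steps=200, dt=0.5):
        for _ in range(steps):
            inside = phi > 0
            c_in = image[inside].mean() if inside.any() else 0.0
            c_out = image[~inside].mean() if (~inside).any() else 0.0
            # Region-based speed: push the front toward the better-matching region.
            speed = (image - c_out) ** 2 - (image - c_in) ** 2
            gy, gx = np.gradient(phi)
            phi = phi + dt * speed * np.sqrt(gx ** 2 + gy ** 2 + 1e-8)
        return phi

    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0           # bright square on dark background
    yy, xx = np.mgrid[:64, :64]
    phi0 = 8.0 - np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)       # small disk as initial contour
    mask = evolve(img, phi0) > 0
    print(mask.sum(), "pixels inside the recovered contour")    # roughly the square's area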