Personal Information

  • Name: Jingyu Shao
  • Email:

About me

I am now a Software Engineer at Google.

Before Google, I obtained my M.S. from UCLA, where I did computer vision research at the Center for Vision, Cognition, Learning, and Autonomy, advised by Prof. Song-Chun Zhu. My thesis was on 3D Abstract Scene Synthesis from Sentences.

Before UCLA, I obtained my B.Eng. degree from Tsinghua University. I also did a summer internship at TTIC (Toyota Technological Institute at Chicago), working with Prof. Greg Shakhnarovich and Prof. Ayan Chakrabarti.


Jan 2016 - Dec 2017

University of California, Los Angeles

Master of Science, Department of Statistics

At UCLA I was a member of the Center for Vision, Cognition, Learning, and Autonomy, advised by Prof. Song-Chun Zhu. My research lay at the intersection of computer vision and education, exploring how state-of-the-art 3D sensors and technology can be leveraged for educational purposes.

Aug 2011 - Jul 2015

Tsinghua University

Bachelor of Engineering, Department of Automation

I spent four amazing years at Tsinghua University. There, I both studied hard and played hard ;)


Jun 2017 - Sept 2017

Google, Mountain View

Ads Revenue Monitoring Based on BayesDLM & TensorFlow

  • Implemented two anomaly detectors for revenue monitoring in collaboration with Google Zurich (8,000 LOC)
  • BayesDLM detector: completed and whitelisted for testing; sped up the existing detector by ~300× (C++/Go/HTML/JS/SWIG)
  • TensorFlow detector: based on ML/DL models for time-series data; server prototype completed (Python/Go)
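For illustration only, the general shape of a time-series anomaly detector can be sketched with a simple rolling z-score rule; the window size, threshold, and data below are made-up stand-ins, not the internals of the BayesDLM or TensorFlow detectors described above.

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations
    away from the rolling mean of the preceding `window` points."""
    flags = []
    for i, x in enumerate(series):
        past = series[max(0, i - window):i]
        if len(past) < 2:
            # Not enough history to estimate a baseline yet.
            flags.append(False)
            continue
        mu = statistics.mean(past)
        sd = statistics.stdev(past)
        flags.append(sd > 0 and abs(x - mu) / sd > threshold)
    return flags

# Toy revenue series with one obvious spike at index 5.
print(detect_anomalies([10, 11, 10, 12, 11, 100, 10]))
```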

Jul 2016 - Sept 2016

iSee, MIT Innovation Center

AR & VR Game Development on Android

  • Developed 4 sample AR & VR games on Android; rendered the recovered camera path in real time (C++/C#/Java, 2,500 LOC)
  • Mitigated the risk of “tracking lost” on our AR platform by combining visual SLAM with IMU measurements

Jul 2015 - Sept 2015

Toyota Technological Institute at Chicago

Monocular Depth Estimation with CNN

  • Trained a neural network on the NYUv2 dataset to recover single-image depth maps using only local features (Caffe, C++/MATLAB)
  • Decreased the per-pixel RMSE of depth estimation by 3.28% relative to the state of the art


NIPS 2016

Depth from a Single Image by Harmonizing Overcomplete Local Network Predictions

Ayan Chakrabarti, Jingyu Shao, and Greg Shakhnarovich

SIGGRAPH Asia Workshop 2016

A Virtual Reality Platform for Dynamic Human-Scene Interaction

Jingyu Shao, Jenny Lin, Xingwen Guo, Chenfanfu Jiang, Yixin Zhu, and Song-Chun Zhu


Mar 2017 - Jun 2017

A Light-weighted Code Visualization Tool for Java Programs

University of California, Los Angeles

  • Designed a code visualization tool based on FAMIX meta-models; extracted hierarchies among files, classes & attributes with custom algorithms; rendered the extracted structures as force-directed graphs and collapsible trees (Python/JS, 1,000 LOC)
  • Evaluated running time and memory usage on 7 Java projects of varying size; observed linear time complexity in project size

Mar 2017 - Jun 2017

Gender Classification by Voice and Speech Analysis

University of California, Los Angeles

  • Implemented an RNN model (LSTM) based on acoustic measurements and spectrograms (TensorFlow, Python/R, 800 LOC)
  • Achieved an accuracy of 99.95% on CMU ARCTIC dataset (4532 utterances); reduced training time by 63% after PCA
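To illustrate the recurrent unit behind such a classifier, here is a single LSTM cell step written out in pure Python; the scalar dimensions and unit weights are toy assumptions for clarity, not the project's trained model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One LSTM step for scalar input/state (dimension 1 for clarity).
    w = (wi, wf, wo, wg): input/forget/output/candidate gate weights."""
    wi, wf, wo, wg = w
    i = sigmoid(wi * (x + h))      # input gate: how much new info to admit
    f = sigmoid(wf * (x + h))      # forget gate: how much memory to keep
    o = sigmoid(wo * (x + h))      # output gate: how much state to expose
    g = math.tanh(wg * (x + h))    # candidate cell value
    c_new = f * c + i * g          # updated cell memory
    h_new = o * math.tanh(c_new)   # new hidden state
    return h_new, c_new

# Run a toy acoustic-feature sequence through the cell.
h = c = 0.0
for x in [0.2, 0.5, 0.1]:
    h, c = lstm_step(x, h, c, (1.0, 1.0, 1.0, 1.0))
```

In a real classifier the final hidden state would feed a softmax layer over the gender labels; a framework such as TensorFlow handles the vectorized version of exactly these gate equations.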

Feb 2017 - Dec 2017

A Virtual Reality Platform for Robot Autonomy

University of California, Los Angeles

  • Built a VR platform with Unreal Engine, Oculus, Kinect and Leap Motion for robot experiment simulation (C++, 500 LOC)
  • Rendered real-world scenes provided by our collaborators in Glasgow; enabled liquid simulation in containers

Apr 2016 - Jun 2016

Environment Driven Tree Growth Simulation Based on L-system

University of California, Los Angeles

  • Generated realistic tree models using string-based L-system modeling, and rendered the output with the Unity engine
  • Project website: (C#, 600 LOC)
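The core of string-based L-system modeling is deterministic string rewriting; as an illustrative sketch (the axiom and rule below are a classic plant-like example, not the project's actual grammar):

```python
# Hypothetical production rule: each "F" (draw-forward symbol) is
# replaced by a branching pattern; "[", "]", "+", "-" are turtle
# commands (push/pop state, turn) and pass through unchanged.
RULES = {"F": "F[+F]F[-F]F"}

def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(expand("F", RULES, 1))  # -> F[+F]F[-F]F
```

A renderer (the project used Unity) then walks the final string with a turtle interpreter, extruding branch geometry for each "F" and branching at each "[".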

Oct 2015 - Dec 2015

EXP: Exploring a New Genre for Learning and Assessment Using Hybrid Reality

University of California, Los Angeles

We propose to develop hybrid reality learning and assessment (HRLA) environments to improve the learning process and assessment models in STEM (Science, Technology, Engineering and Math) education.

  • The HRLA interface with four learning modes
  • State-of-the-art computer vision algorithms for analyzing gestural and behavioral data during learning and interactions
  • Tactile glove with built-in force sensors
  • Empirical testing and evaluations in the lab and classroom

Jan 2015 - Jun 2015

Object Cutout from Multi-View Images Using Level Sets of Probabilities

Tsinghua University

A new multi-view image co-segmentation framework based on 3D level sets driven by three independent, complementary terms; it iteratively improves the segmentation masks as well as the geometric and photometric models derived from the original images.

  • Ability to represent boundaries with arbitrary topology
  • Improved with a 4D light-field image data structure
  • Flexibility in designing the energy function