
Inferring Human Interaction from Motion Trajectories in Aerial Videos

Tianmin Shu (1), Yujia Peng (2), Lifeng Fan (1), Hongjing Lu (1,2), and Song-Chun Zhu (1)

(1) Department of Statistics, UCLA

(2) Department of Psychology, UCLA

Computational Modelling Prize (Perception/Action Category), Cognitive Science Society, 2017

Abstract

People are adept at perceiving interactions from the movements of simple shapes, but the underlying mechanism remains unknown. Previous studies have often used object movements defined by experimenters. The present study used aerial videos recorded by drones in a real-life environment to generate decontextualized motion stimuli, so that the motion trajectories of the displayed elements were the only visual input. We measured human judgments of interactiveness between two moving elements, and how those judgments changed dynamically over time. We developed a hierarchical model to account for human performance in this task; the model represents interactivity with latent variables and learns the distribution of critical movement features that signal potential interactivity. The model provides a good fit to human judgments and generalizes to the original Heider-Simmel animation (1944). It can also synthesize decontextualized animations with a controlled degree of interactiveness, providing a viable tool for studying animacy and social perception.
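To make the modeling idea concrete, below is a minimal sketch (in Python; not the paper's implementation) of inferring interactivity from two motion trajectories: per-frame relative-motion features drive a two-state latent variable (interactive vs. independent) that is filtered forward over time. The feature set, weights, and transition probability here are illustrative assumptions.

import numpy as np

def motion_features(traj_a, traj_b):
    """Per-frame relative-motion features for two (T, 2) trajectories."""
    va, vb = np.diff(traj_a, axis=0), np.diff(traj_b, axis=0)     # velocities
    dist = np.linalg.norm(traj_a[1:] - traj_b[1:], axis=1)        # separation
    approach = -np.diff(np.linalg.norm(traj_a - traj_b, axis=1))  # closing speed
    align = np.sum(va * vb, axis=1) / (
        np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1) + 1e-8
    )                                                             # heading alignment
    return np.stack([dist, approach, align], axis=1)              # (T-1, 3)

def interactivity_posterior(feats, w=(-0.05, 2.0, 1.0), bias=0.0, stay=0.95):
    """Forward-filter P(interactive at t | features up to t).

    Emission model: a logistic score of the per-frame features (the weights
    are made-up values); the non-interactive state gets the complementary
    likelihood. Transitions favor staying in the current state.
    """
    w = np.asarray(w)
    p_int, posts = 0.5, []                                # uniform prior at t = 0
    for f in feats:
        like_int = 1.0 / (1.0 + np.exp(-(f @ w + bias)))
        pred = stay * p_int + (1.0 - stay) * (1.0 - p_int)  # transition step
        num = like_int * pred
        p_int = num / (num + (1.0 - like_int) * (1.0 - pred))
        posts.append(p_int)
    return np.array(posts)

# Example: a "follower" shadowing a random walk should score as interactive.
rng = np.random.default_rng(0)
a = np.cumsum(rng.normal(size=(100, 2)), axis=0)
b = a + rng.normal(scale=0.5, size=(100, 2))
print(interactivity_posterior(motion_features(a, b))[-3:])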

Paper and Demo

Paper

Tianmin Shu, Yujia Peng, Lifeng Fan, Hongjing Lu and Song-Chun Zhu. Inferring Human Interaction from Motion Trajectories in Aerial Videos. 39th Annual Meeting of the Cognitive Science Society (CogSci), 2017. [PDF] [slides]

@inproceedings{ShuCogSci17,
  title     = {Inferring Human Interaction from Motion Trajectories in Aerial Videos},
  author    = {Tianmin Shu and Yujia Peng and Lifeng Fan and Hongjing Lu and Song-Chun Zhu},
  booktitle = {39th Annual Meeting of the Cognitive Science Society (CogSci)},
  year      = {2017}
}

Stimulus Illustration

Field Parsing of Heider-Simmel Animations

Inference of Interactiveness on Aerial Video Stimuli

Synthesis Examples (Based on the Model Trained on Aerial Videos)

Data

1. UCLA Aerial Event Dataset

You may request access to the dataset through this link.

Please cite this paper if you use the dataset:

Tianmin Shu, Dan Xie, Brandon Rothrock, Sinisa Todorovic and Song-Chun Zhu. Joint Inference of Groups, Events and Human Roles in Aerial Videos. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

2. Motion Trajectories in the Heider-Simmel Animation

You may download the dataset here.

Please cite this paper if you use the dataset:

Tianmin Shu, Yujia Peng, Lifeng Fan, Hongjing Lu and Song-Chun Zhu. Inferring Human Interaction from Motion Trajectories in Aerial Videos. 39th Annual Meeting of the Cognitive Science Society (CogSci), 2017.
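If you plan to parse the trajectory files, a loading sketch along these lines may be a useful starting point. The file name and column layout below (frame-by-frame x, y coordinates for the big triangle, small triangle, and circle) are assumptions rather than the dataset's documented format, so check the accompanying documentation.

import numpy as np

# Hypothetical file name and column layout; verify against the dataset docs.
data = np.loadtxt("heider_simmel_trajectories.csv", delimiter=",")
big_tri, small_tri, circle = data[:, 0:2], data[:, 2:4], data[:, 4:6]

# Example: distance between the two triangles across frames.
dist = np.linalg.norm(big_tri - small_tri, axis=1)
print(f"{len(data)} frames; triangle distance min/max: "
      f"{dist.min():.1f}/{dist.max():.1f}")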

Contact

Any questions? Please contact Tianmin Shu (tianmin.shu [at] ucla.edu)