Cooperative Learning of Energy-Based Model and Latent Variable Model via MCMC Teaching



Jianwen Xie 1,2, Yang Lu 1,3, Ruiqi Gao 1, and Ying Nian Wu 1

1 University of California, Los Angeles (UCLA), USA
2 Hikvision Research America
3 Amazon RSML (Retail System Machine Learning) Group


Abstract

This paper proposes a cooperative learning algorithm that jointly trains the undirected energy-based model and the directed latent variable model. The algorithm interweaves the maximum likelihood learning algorithms of the two models, and each iteration consists of the following two steps. (1) Modified contrastive divergence for the energy-based model: the energy-based model is learned by contrastive divergence, except that the finite-step MCMC sampling of the model is initialized from synthesized examples generated by the latent variable model rather than from the observed examples. (2) MCMC teaching of the latent variable model: the latent variable model is learned from how the MCMC in step (1) changes the initial synthesized examples it generated; because the latent variables that produced those initial examples are known, this learning step is essentially supervised. Our experiments show that the cooperative learning algorithm can learn realistic models of images.
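To make the two steps concrete, here is a minimal PyTorch sketch of one cooperative learning iteration, written for this page rather than taken from the released code: the architectures, Langevin step size and step count, and the names EnergyNet, GeneratorNet, langevin_revise, and coopnets_step are illustrative assumptions.

```python
# A minimal sketch of one CoopNets iteration (not the released code).
# Assumed: 64x64 RGB images in [-1, 1]; architectures and step sizes are illustrative.
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Energy-based model: p(x) proportional to exp(f(x))."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 16 * 16, 1))

    def forward(self, x):
        return self.net(x).squeeze(1)  # scalar score f(x) per image

class GeneratorNet(nn.Module):
    """Latent variable model: x = g(z) + noise, z ~ N(0, I)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, z):
        return self.net(z)

def langevin_revise(f, x, n_steps=10, step=0.002):
    """Finite-step Langevin dynamics for p(x) ~ exp(f(x)), initialized
    from the generator's samples instead of the observed examples."""
    x = x.detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        grad = torch.autograd.grad(f(x).sum(), x)[0]
        x = (x + 0.5 * step ** 2 * grad + step * torch.randn_like(x)).detach()
    return x

def coopnets_step(f, g, x_obs, opt_f, opt_g, z_dim=100):
    z = torch.randn(x_obs.size(0), z_dim)
    x_init = g(z)                            # generator proposes initial samples
    x_rev = langevin_revise(f, x_init)       # EBM's MCMC revises them
    # Step (1): modified contrastive divergence for the energy-based model.
    loss_f = f(x_rev).mean() - f(x_obs).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    # Step (2): MCMC teaching -- the z that produced x_init is known, so the
    # generator learns from (z, x_rev) pairs by plain supervised regression.
    loss_g = ((g(z) - x_rev) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_f.item(), loss_g.item()
```

The essence is in the last two updates: the energy-based model takes the usual contrastive divergence gradient, while the generator regresses its own output g(z) toward the MCMC-revised examples, with z known.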

Paper

The paper can be downloaded here.

Slides

The oral presentation can be downloaded here.

Code and Data

The code and data for CoopNets can be downloaded here.

The code and data for the recovery experiment can be downloaded here.

Experiments

Contents

Exp 1: Texture synthesis (stationary CoopNets)
Exp 2: Scene and object synthesis (non-stationary CoopNets)
Exp 3: Pattern completion

Experiment 1: Texture synthesis (stationary CoopNets)


Figure 1. Generating texture patterns. For each category, the first image is the training image, and the remaining three are images generated by the CoopNets algorithm.
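In the stationary (homogeneous) setting, the energy function is spatially stationary, which is naturally obtained by making the energy network fully convolutional. The sketch below illustrates this; the filter sizes and the name StationaryEnergyNet are assumptions, not taken from the paper.

```python
# A minimal sketch of a stationary (fully convolutional) energy network for
# textures; filter sizes and the class name are illustrative assumptions.
import torch.nn as nn

class StationaryEnergyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, padding=1))   # one local score per location

    def forward(self, x):
        # With no fully connected layer, averaging the local scores gives a
        # scalar f(x) that is defined for images of any size.
        return self.net(x).mean(dim=(1, 2, 3))
```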

Experiment 2: Scene and object synthesis (non-stationary CoopNets)


(a) observed images; (b) synthesized images

Figure 2. Images generated by CoopNets learned from 10 ImageNet scene categories. The training set consists of 1,100 images randomly sampled from each category, 11,000 images in total.


Figure 3. Interpolation between the latent vectors of the images at the two ends.
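The interpolation in Figure 3 decodes a sequence of latent codes between the two endpoint vectors with the learned generator. A minimal sketch follows; the use of spherical interpolation (slerp) is an assumption here, as is reusing the generator g from the sketch above.

```python
# A minimal sketch of latent-space interpolation; slerp is an assumed choice.
import torch

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    omega = torch.acos(torch.clamp(
        torch.dot(z0 / z0.norm(), z1 / z1.norm()), -1.0, 1.0))
    return (torch.sin((1 - t) * omega) * z0 + torch.sin(t * omega) * z1) / torch.sin(omega)

z0, z1 = torch.randn(100), torch.randn(100)   # latent codes of the two end images
codes = torch.stack([slerp(z0, z1, t) for t in torch.linspace(0, 1, 8)])
# frames = g(codes)  # decode with the learned generator g to get the image row
```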

Table 1: Inception scores of different methods learned from 10 ImageNet scene categories. n is the number of training images randomly sampled from each category.

Method          n=50         n=100        n=300        n=500        n=700        n=900        n=1100
CoopNets        2.66 ± .13   3.04 ± .13   3.41 ± .13   3.48 ± .08   3.59 ± .11   3.65 ± .07   3.79 ± .15
DCGAN           2.26 ± .16   2.50 ± .15   3.16 ± .15   3.05 ± .12   3.13 ± .09   3.34 ± .05   3.47 ± .06
EBGAN           2.23 ± .17   2.40 ± .14   2.62 ± .08   2.46 ± .09   2.65 ± .04   2.64 ± .04   2.75 ± .08
W-GAN           1.80 ± .09   2.19 ± .12   2.34 ± .06   2.62 ± .08   2.86 ± .10   2.88 ± .07   3.14 ± .06
VAE             1.62 ± .09   1.63 ± .06   1.65 ± .05   1.73 ± .04   1.67 ± .03   1.72 ± .02   1.73 ± .02
InfoGAN         2.21 ± .04   1.73 ± .01   2.15 ± .03   2.42 ± .05   2.47 ± .05   2.29 ± .03   2.08 ± .04
DDGM            2.65 ± .17   1.05 ± .03   3.27 ± .14   3.42 ± .09   3.47 ± .13   3.41 ± .08   3.34 ± .11
Algorithm G     1.72 ± .07   1.94 ± .09   2.32 ± .09   2.40 ± .06   2.45 ± .05   2.54 ± .05   2.61 ± .06
Persistent CD   1.30 ± .08   1.94 ± .03   1.80 ± .02   1.53 ± .02   1.45 ± .04   1.35 ± .02   1.51 ± .02
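Table 1 reports Inception scores of the form exp(E_x KL(p(y|x) || p(y))), with mean and standard deviation taken over splits of the synthesized images. A minimal NumPy sketch, assuming probs holds the Inception-v3 softmax outputs for the synthesized images:

```python
# A minimal sketch of the Inception score; `probs` is an (N, 1000) array of
# Inception-v3 class probabilities for N synthesized images (an assumed input).
import numpy as np

def inception_score(probs, n_splits=10):
    scores = []
    for part in np.array_split(probs, n_splits):
        p_y = part.mean(axis=0, keepdims=True)   # marginal class distribution p(y)
        kl = (part * (np.log(part + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))         # exp(E[KL(p(y|x) || p(y))])
    return float(np.mean(scores)), float(np.std(scores))
```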




Figure 4. Left: average softmax class probability on a single ImageNet category versus the number of training images. Middle: top-5 classification error. Right: average pairwise structural similarity.


Experiment 3: Pattern completion


(a) observed images; (b) synthesized images

Figure 5. Generating forest road images. The category is from the MIT Places205 dataset.


(a) observed images; (b) synthesized images

Figure 6. Generating hotel room images. The category is from the MIT Places205 dataset.

Figure 7. Pattern completion. First row: original images. Second row: occluded images. Third row: images recovered by CoopNets. (a) face. (b) forest road. (c) hotel room.
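Completion uses the learned generator: a latent vector is inferred from the visible pixels, and g(z) fills in the occluded region. In the sketch below, gradient descent (Adam) on the masked reconstruction error stands in for the paper's posterior inference of z; the mask convention, step count, and learning rate are assumptions.

```python
# A minimal sketch of pattern completion with the learned generator g.
# Adam on the masked reconstruction error stands in for the paper's posterior
# inference of z; the mask convention and hyperparameters are assumptions.
import torch

def complete(g, x_occ, mask, z_dim=100, n_steps=200, lr=0.1):
    """mask is 1 on visible pixels and 0 on the occluded region."""
    z = torch.randn(x_occ.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        loss = (mask * (g(z) - x_occ) ** 2).sum() / mask.sum()  # fit visible pixels
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        x_hat = g(z)
    return mask * x_occ + (1 - mask) * x_hat  # keep visible pixels, fill the rest
```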

Table 2: Comparison of face recovery performance of different methods in 3 experiments, where M30, M40, and M50 denote experiments with 30×30, 40×40, and 50×50 occlusion masks, respectively.

Metric   Task   CoopNets   DCGAN    MRF-L1   MRF-L2   inter-1   inter-2   inter-3   inter-4   inter-5
error    M30    0.115      0.211    0.132    0.134    0.120     0.120     0.265     0.120     0.120
error    M40    0.124      0.212    0.148    0.149    0.135     0.135     0.314     0.135     0.135
error    M50    0.136      0.214    0.178    0.179    0.170     0.166     0.353     0.164     0.164
PSNR     M30    16.893     12.116   15.739   15.692   16.203    16.635    9.524     16.665    16.648
PSNR     M40    16.098     11.984   14.834   14.785   15.065    15.644    8.178     15.698    15.688
PSNR     M50    15.105     11.890   13.313   13.309   13.220    14.009    7.327     14.164    14.161
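For reference, here is a sketch of the two reported metrics under assumed definitions: the recovery error as the average per-pixel absolute difference over the occluded region, and PSNR over the whole recovered image with pixel values in [0, 1].

```python
# A minimal sketch of the Table 2 metrics; the exact definitions used in the
# paper (e.g. the averaging region and pixel range) are assumptions here.
import torch

def recovery_error(x_true, x_rec, mask):
    occ = 1 - mask                              # 1 on the occluded region
    return (occ * (x_true - x_rec).abs()).sum() / occ.sum()

def psnr(x_true, x_rec, max_val=1.0):
    mse = ((x_true - x_rec) ** 2).mean()
    return 10 * torch.log10(max_val ** 2 / mse)
```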

Acknowledgement

We acknowledge Dr. Song-Chun Zhu’s important contributions to the work presented in this paper. We thank a reviewer for his or her insightful comments. We thank Hansheng Jiang for her work on this project as a summer visiting student. We thank Tengyu Liu and Zilong Zheng for assistance with the inception score comparison experiments. The work is supported by NSF DMS 1310391, DARPA SIMPLEX N66001-15-C-4035, ONR MURI N00014-16-1-2007, and DARPA ARO W911NF-16-1-0579.
