Jianwen Xie ^{1,2},
Yang Lu ^{1,3},
Ruiqi Gao ^{1},
Song-Chun Zhu ^{1},
and Ying Nian Wu ^{1}

^{1} University of California, Los Angeles, USA

^{2} Hikvision Research Institute, Santa Clara, USA

^{3} Facebook, USA

This paper studies the cooperative training of two generative models for image modeling and synthesis. Both models are parametrized by convolutional neural networks (ConvNets). The first model is a deep energy-based model, whose energy function is defined by a bottom-up ConvNet, which maps the observed image to the energy. We call it the descriptor network. The second model is a generator network, which is a non-linear version of factor analysis. It is defined by a top-down ConvNet, which maps the latent factors to the observed image. The maximum likelihood learning algorithms of both models involve MCMC sampling such as Langevin dynamics. We observe that the two learning algorithms can be seamlessly interwoven into a cooperative learning algorithm that can train both models simultaneously. Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model. After that, the generator model learns from how the MCMC changes its synthesized examples. That is, the descriptor model teaches the generator model by MCMC, so that the generator model accumulates the MCMC transitions and reproduces them by direct ancestral sampling. We call this scheme MCMC teaching. We show that the cooperative algorithm can learn highly realistic generative models.
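To make the cooperative scheme concrete, here is a minimal runnable sketch on a 1-D toy problem. It is an illustration, not the paper's implementation: the "images" are scalars drawn from N(0, 0.5^2), the descriptor is a quadratic energy E(x) = c1·x + c2·x^2, and the generator is a linear latent-variable model x = mu + sigma·z. Each iteration follows the four steps of the abstract: ancestral sampling from the generator, finite-step Langevin revision under the descriptor, a moment-matching descriptor update, and MCMC teaching of the generator from the revised samples. All names and hyperparameters here are choices for this toy, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed images": scalars from N(0, 0.5^2).
# The matching energy is E(x) = 2*x^2, i.e. c1 = 0, c2 = 2.
x_data = rng.normal(0.0, 0.5, size=1000)

# Descriptor (energy-based model): E(x) = c1*x + c2*x^2, p(x) ∝ exp(-E(x)).
c1, c2 = 0.0, 0.5

# Generator (linear latent-variable model): x = mu + sigma * z, z ~ N(0, 1).
mu, sigma = 0.0, 1.0

n_syn, n_steps, delta = 500, 30, 0.3   # chains, Langevin steps, step size
lr_desc = 0.02                         # descriptor learning rate

for it in range(500):
    # 1. Generator creates initial synthesized examples by ancestral sampling.
    z = rng.normal(size=n_syn)
    x = mu + sigma * z

    # 2. Finite-step Langevin dynamics revises them toward the descriptor:
    #    x <- x - (delta^2 / 2) * dE/dx + delta * noise.
    for _ in range(n_steps):
        grad_E = c1 + 2.0 * c2 * x
        x = x - 0.5 * delta**2 * grad_E + delta * rng.normal(size=n_syn)
    x_rev = x

    # 3. Descriptor update: stochastic gradient ascent on the log-likelihood
    #    reduces to matching moments of synthesized vs. observed examples.
    c1 += lr_desc * (x_rev.mean() - x_data.mean())
    c2 += lr_desc * ((x_rev**2).mean() - (x_data**2).mean())

    # 4. MCMC teaching: the generator learns how Langevin changed its samples
    #    by refitting x_rev against the known latents z (closed-form least
    #    squares, since this toy generator is linear in z).
    sigma = np.cov(z, x_rev)[0, 1] / z.var()
    mu = x_rev.mean() - sigma * z.mean()

print(f"descriptor: c1={c1:.3f}, c2={c2:.3f} (data-matching values: 0, 2)")
print(f"synthesized: mean={x_rev.mean():.3f}, std={x_rev.std():.3f}")
```

After training, the descriptor's synthesized examples should roughly match the data moments (mean near 0, standard deviation near 0.5), and the generator serves as a learned initializer that keeps the short Langevin chains close to stationarity. In the paper, the same loop runs with ConvNet-parametrized energy and generator and gradient-based updates in place of the closed-form fits.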

The code for the CoopNets algorithm can be downloaded in three implementations: (i) Matlab with MatConvNet, (ii) Python with TensorFlow, and (iii) Python with PyTorch.

The code for *Spatial-temporal CoopNets* can be downloaded as: Python with TensorFlow.

If you wish to use our code, please cite the following papers:

Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu

Jianwen Xie, Yang Lu, Ruiqi Gao, Ying Nian Wu

The tex file can be downloaded here.

Contents

- Exp 1: Experiment on generating texture patterns
- Exp 2: Experiment on generating object patterns
- Exp 3: Experiment on generating scene patterns
- Exp 4: Experiment on generating handwritten digits
- Exp 5: Experiment on large-scale benchmark datasets
- Exp 6: Experiment on pattern completion
- Exp 7: Experiment on generating dynamic textures

| Model | LSUN | CelebA | Cifar-10 |
| --- | --- | --- | --- |
| W-GAN | 67.72 | 52.54 | 48.40 |
| DCGAN | 70.40 | 21.40 | 37.70 |
| VAE | 243.47 | 50.53 | 126.32 |
| Generator in CoopNets (ours) | 64.30 | 16.98 | 35.25 |
| Descriptor in CoopNets (ours) | 35.42 | 16.65 | 33.61 |

| Exp | Task | CoopNets | DCGAN | MRF-L1 | MRF-L2 | inter-1 | inter-2 | inter-3 | inter-4 | inter-5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| error | M30 | 0.115 | 0.211 | 0.132 | 0.134 | 0.120 | 0.120 | 0.265 | 0.120 | 0.120 |
| | M40 | 0.124 | 0.212 | 0.148 | 0.149 | 0.135 | 0.135 | 0.314 | 0.135 | 0.135 |
| | M50 | 0.136 | 0.214 | 0.178 | 0.179 | 0.170 | 0.166 | 0.353 | 0.164 | 0.164 |
| PSNR | M30 | 16.893 | 12.116 | 15.739 | 15.692 | 16.203 | 16.635 | 9.524 | 16.665 | 16.648 |
| | M40 | 16.098 | 11.984 | 14.834 | 14.785 | 15.065 | 15.644 | 8.178 | 15.698 | 15.688 |
| | M50 | 15.105 | 11.890 | 13.313 | 13.309 | 13.220 | 14.009 | 7.327 | 14.164 | 14.161 |

We thank Hansheng Jiang, Zilong Zheng, Erik Nijkamp, Tengyu Liu, Yaxuan Zhu, Zhaozhuo Xu and Xiaolin Fang for their assistance with coding and experiments. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. The work is supported by Hikvision gift fund, NSF DMS 1310391, DARPA SIMPLEX N66001-15-C-4035, ONR MURI N00014-16-1-2007, and DARPA ARO W911NF-16-1-0579.