Cooperative Training of Fast Thinking Initializer and Slow Thinking Solver for Conditional Learning



Jianwen Xie 1*, Zilong Zheng 2*, Xiaolin Fang 3, Song-Chun Zhu 2,4,5, and Ying Nian Wu 2

(* Equal contributions)
1 Cognitive Computing Lab, Baidu Research, USA
2 University of California, Los Angeles (UCLA), USA
3 Massachusetts Institute of Technology, USA
4 Tsinghua University, Beijing, China
5 Peking University, Beijing, China


Abstract

This paper studies the problem of learning the conditional distribution of a high-dimensional output given an input, where the output and input may belong to two different domains, e.g., the output is a photo image and the input is a sketch image. We solve this problem by cooperative training of a fast thinking initializer and a slow thinking solver. The initializer generates the output directly by a non-linear transformation of the input as well as a noise vector that accounts for latent variability in the output. The slow thinking solver learns an objective function in the form of a conditional energy function, so that the output can be generated by optimizing the objective function, or more rigorously, by sampling from the conditional energy-based model. We propose to learn the two models jointly: the fast thinking initializer serves to initialize the sampling of the slow thinking solver, and the solver refines the initial output by an iterative algorithm. The solver learns from the difference between the refined output and the observed output, while the initializer learns from how the solver refines its initial output. We demonstrate the effectiveness of the proposed method on various conditional learning tasks, e.g., class-to-image generation, image-to-image translation, and image recovery. The advantage of our method over GAN-based methods is that ours is equipped with a slow thinking process that refines the solution guided by a learned objective function.
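In symbols, and only as a sketch (the mapping g, the value function f, the Langevin step size δ, and the time step τ are notation assumed for this summary, not taken verbatim from the paper), the two models can be written as

\[ y = g_\alpha(x, z), \qquad z \sim \mathcal{N}(0, I), \]

for the fast thinking initializer, and as the conditional energy-based model

\[ p_\theta(y \mid x) = \frac{1}{Z_\theta(x)} \exp\big( f_\theta(x, y) \big) \]

for the slow thinking solver, which is sampled by Langevin dynamics initialized from the initializer's output:

\[ y_{\tau+1} = y_\tau + \frac{\delta^2}{2} \, \nabla_y f_\theta(x, y_\tau) + \delta \, \epsilon_\tau, \qquad \epsilon_\tau \sim \mathcal{N}(0, I). \]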

Fast Thinking and Slow Thinking Model

Diagram of fast thinking and slow thinking conditional learning. Given a condition, the initializer produces an initial solution, which the solver then refines. The initializer provides the initial solution via direct mapping (see 1), i.e., ancestral sampling, which is a fast thinking process, while the solver refines the initial solution via Langevin sampling that optimizes the objective function (see 2), which is a slow thinking process. The initializer learns the mapping from the solver's refinement (see 3), while the solver learns the objective function by comparing the refined solution to the observed solution (see 4).
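To make the four steps concrete, below is a minimal sketch of one cooperative training step in TensorFlow 2. The MLP architectures, hyper-parameters, and names (Initializer, Solver, langevin_refine, coop_train_step) are illustrative assumptions for this sketch, not the released implementation; see the code link below for the actual models.

import tensorflow as tf

class Initializer(tf.keras.Model):
    """Fast thinking: direct mapping y = g(x, z), with noise z ~ N(0, I)."""
    def __init__(self, y_dim, hidden=256):
        super().__init__()
        self.net = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden, activation="relu"),
            tf.keras.layers.Dense(y_dim, activation="tanh"),
        ])

    def call(self, x, z):
        return self.net(tf.concat([x, z], axis=-1))

class Solver(tf.keras.Model):
    """Slow thinking: scalar f(x, y) of a conditional EBM, p(y|x) proportional to exp(f)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden, activation=tf.nn.leaky_relu),
            tf.keras.layers.Dense(1),
        ])

    def call(self, x, y):
        return self.net(tf.concat([x, y], axis=-1))

def langevin_refine(solver, x, y, n_steps=15, step_size=0.01):
    """Refine y by Langevin dynamics that noisily ascends f(x, y)."""
    for _ in range(n_steps):
        with tf.GradientTape() as tape:
            tape.watch(y)
            f = tf.reduce_sum(solver(x, y))
        grad = tape.gradient(f, y)
        y = y + 0.5 * step_size ** 2 * grad + step_size * tf.random.normal(tf.shape(y))
    return y

def coop_train_step(initializer, solver, x, y_obs, opt_init, opt_solver, z_dim=100):
    z = tf.random.normal([tf.shape(x)[0], z_dim])
    y0 = initializer(x, z)               # (1) fast thinking: direct mapping
    y1 = langevin_refine(solver, x, y0)  # (2) slow thinking: iterative refinement
    # (4) the solver learns by comparing refined outputs with observed outputs
    with tf.GradientTape() as tape:
        loss_s = tf.reduce_mean(solver(x, y1)) - tf.reduce_mean(solver(x, y_obs))
    grads = tape.gradient(loss_s, solver.trainable_variables)
    opt_solver.apply_gradients(zip(grads, solver.trainable_variables))
    # (3) the initializer learns from the refinement: with the same z,
    # pull its direct mapping toward the refined output y1
    with tf.GradientTape() as tape:
        loss_i = tf.reduce_mean(tf.square(initializer(x, z) - y1))
    grads = tape.gradient(loss_i, initializer.trainable_variables)
    opt_init.apply_gradients(zip(grads, initializer.trainable_variables))
    return loss_s, loss_i

Training iterates coop_train_step over mini-batches (x, y_obs) with two separate optimizers, e.g., tf.keras.optimizers.Adam. The solver's loss is the standard contrastive gradient of energy-based maximum likelihood, and the initializer's regression toward y1 realizes the "learning from the solver's refinement" step.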

Paper

The TPAMI journal paper can be downloaded here.

The TPAMI TeX file can be downloaded here.

Code and Data

The Python code using TensorFlow can be downloaded here.

If you wish to use our code, please cite the following paper: 

Cooperative Training of Fast Thinking Initializer and Slow Thinking Solver for Conditional Learning
Jianwen Xie*, Zilong Zheng*, Xiaolin Fang, Song-Chun Zhu, Ying Nian Wu
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2021
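For convenience, a BibTeX entry consistent with the citation above (the citation key is our own choice, and the volume/page fields are omitted because they are not listed here):

@article{xie2021cooperative,
  title   = {Cooperative Training of Fast Thinking Initializer and Slow Thinking Solver for Conditional Learning},
  author  = {Xie, Jianwen and Zheng, Zilong and Fang, Xiaolin and Zhu, Song-Chun and Wu, Ying Nian},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year    = {2021}
}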

Experiments

Contents

Exp 1: Category => Image
Exp 2: Image => Image

Experiment 1: Category => Image

Generated MNIST, Fashion-MNIST, and CIFAR-10 images. (Left and middle) Each column is conditioned on one class label and each row is a different synthesized sample; the generated images are 28 × 28 pixels. (Right) Each row is conditioned on one category label; the first two columns show training images, and the remaining columns show generated images conditioned on the corresponding labels. The image size is 32 × 32 pixels. The categories, from top to bottom, are airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.

Experiment 2: Image => Image

(a) Generating images conditioned on architectural labels. (b) Photo inpainting. (c) Edges-to-shoes image generation. (d) Sketch-to-photo face synthesis.


