This is the code for Experiment 7 in the paper (comparison between NFA and PCA in terms of reconstruction error).
This is research code only, so it may be messy, and some of the code and comments exist only for legacy reasons. Use at your own risk.

Before running the code, you need to prepare the training and testing data and specify their locations in the files nfa_config and test_nfa_config.

*_nfa_config: the configuration file where you specify the necessary hyperparameters. The relatively important ones include z_dim, batch_size, the number of Langevin steps for inference, the Langevin step size, and the learning rate.
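As a rough illustration only (the actual file format and field names in the repository may differ), a configuration of this kind might look like:

```python
# Hypothetical sketch of an *_nfa_config file; names and values are illustrative.
config = {
    "z_dim": 100,                 # dimensionality of the latent vector z
    "batch_size": 64,
    "langevin_steps": 30,         # number of Langevin steps for inference
    "langevin_step_size": 0.3,
    "learning_rate": 1e-4,
    "data_path": "./data/train",  # location of the training images
}
```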

(1) Run test_nfa_face. This script reads the images and trains the ABP model. After this step, the workspace will contain the inferred latent vectors (syn_mats) for each training image and the learned net. During learning, the algorithm also generates the synthesized images and saves the necessary models. Currently, we use Langevin sampling as the inference algorithm. Note that the code also provides other options, e.g., alter_grad and joint_grad; you can easily modify those parts to enable them.

(2) Go to the test_reconstruct_PCA folder. In this folder, we perform the numerical comparison between ABP and PCA.
In test_nfa_config, set the necessary parameters, e.g., test_batch_size, the number of Langevin steps for the testing stage, etc.
Then run test_nfa_config. This first infers the latent representations of the test images, then reconstructs them from the inferred Z. (Note that in the current code the error is actually the average L1 norm, while in the paper we further normalize it by dividing by 2.)
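The error metric and the PCA baseline can be sketched as follows. This is an illustrative NumPy version, not the repository's code; avg_l1_error and pca_reconstruct are hypothetical names:

```python
import numpy as np

def avg_l1_error(x, x_hat):
    # average L1 reconstruction error, as reported by the code;
    # the paper further divides this value by 2
    return np.mean(np.abs(x - x_hat))

def pca_reconstruct(X, k):
    # PCA baseline: project each row of X (one flattened image per row)
    # onto the top-k principal components, then map back to pixel space
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                      # (d, k) top-k components
    return (Xc @ V) @ V.T + mean

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))    # stand-in for flattened test images
X_hat = pca_reconstruct(X, k=10)
err_code = avg_l1_error(X, X_hat)     # value the code would report
err_paper = err_code / 2              # normalization used in the paper
```

For the ABP side of the comparison, X_hat would instead be the generator's output on the inferred Z, with the same error metric applied.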


