(1) data, code, and readme
The above code reproduces the results in the paper.
(1.1) improved code for AdaBoost
(1.2) improved code for active basis, with faster learning and testing
(1.3) improved code for active correlation, with faster learning and testing
(2) a much faster implementation for learning, with -logF whitening
The results are slightly better than simple saturation thresholding.
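A rough illustration of -logF whitening, under the assumption (ours, not stated in the code) that F denotes the background tail probability of the Gabor response, estimated empirically from background images:

```python
import numpy as np

def whiten_minus_logF(responses, background):
    """Illustrative -logF whitening sketch.

    Assumption: F(r) is the empirical upper-tail probability
    P(background response >= r). Mapping r -> -log F(r) makes background
    responses roughly exponentially distributed, regardless of the scale
    or orientation of the Gabor filter.
    """
    bg = np.sort(np.asarray(background))
    n = bg.size
    # number of background responses strictly below each r
    ranks = np.searchsorted(bg, responses, side="left")
    # smoothed tail probability, avoiding log(0) at the extreme
    tail = (n - ranks + 1.0) / (n + 1.0)
    return -np.log(tail)
```

The whitened score is monotone in the raw response, so thresholding is preserved; what changes is the comparability of scores across filters.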
(3) learning with sigmoid transformation
The above code reproduces the results in the ppt. The model is log(p/q) = lambda * sigmoid(r) - log Z(lambda).
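The exponential-family form above can be sketched numerically. Here Z(lambda) is estimated by a Monte Carlo average over background responses; this estimator and the plain logistic sigmoid are illustrative assumptions, not necessarily the choices made in the code:

```python
import numpy as np

def sigmoid(r):
    # plain logistic sigmoid; the code may use a scaled/saturating variant
    return 1.0 / (1.0 + np.exp(-r))

def log_likelihood_ratio(r, lam, background_r):
    """Score under log(p/q) = lam * sigmoid(r) - log Z(lam).

    Z(lam) = E_q[exp(lam * sigmoid(r))] is estimated by averaging over
    responses collected from background images (an assumption for this
    sketch).
    """
    logZ = np.log(np.mean(np.exp(lam * sigmoid(background_r))))
    return lam * sigmoid(r) - logZ
```

With lam = 0 the score is identically zero, and by construction the model density integrates to one against the background sample used to estimate Z.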
(3.1) improved code for sigmoid
(3.2) tiny images
(4) AdaBoost with active basis
The features used by AdaBoost are the local maxima of Gabor responses, the same as in the active basis model.
(4.1) improved code for AdaBoost with local maximum pooling
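A minimal sketch of the local maximum pooling step, assuming a square perturbation neighborhood of radius dx (the actual perturbation range is set in the code):

```python
import numpy as np

def local_max_pool(resp, dx=1):
    """Local maximum pooling over a 2D map of Gabor responses.

    Each position takes the maximum response within a (2*dx+1)^2
    neighborhood, allowing a basis element to shift slightly from its
    nominal location. Borders are clipped rather than padded.
    """
    H, W = resp.shape
    pooled = np.empty_like(resp)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - dx), min(H, y + dx + 1)
            x0, x1 = max(0, x - dx), min(W, x + dx + 1)
            pooled[y, x] = resp[y0:y1, x0:x1].max()
    return pooled
```

The double loop keeps the sketch self-contained; an equivalent and faster alternative is a sliding maximum filter.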
Learned templates by active correlation, log-likelihood, AdaBoost, and AdaBoost with active basis; reproduced by (1) (the last one by (4)).
Some training examples and the corresponding
Comparing the area under the ROC curve (AUC):
ROC curves for the sigmoid log-likelihood model and AdaBoost.
AdaBoost with 80 elements = .936. Reproduced by (1)
Active basis with 40 elements
a) log-likelihood with thresholding = .941. Reproduced by (1)
b) active correlation with thresholding = .971. Reproduced by (1)
c) log-likelihood with whitening = .965. Reproduced by (2)
d) active correlation with whitening = .975. Reproduced by (2)
e) log-likelihood with sigmoid model = .977. Reproduced by (3)
f) AdaBoost with active basis = .937. Reproduced by (4)
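The AUC figures above can be computed with the rank-statistic form of the area under the ROC curve. This is a self-contained sketch, not the evaluation script used for the numbers listed here:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive example scores
    above a random negative one (the Wilcoxon/Mann-Whitney statistic);
    ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

A perfect separator gives 1.0 and a random one gives 0.5, matching the scale of the numbers above.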
(5) active basis for horse
(6) AdaBoost with max