Research on object recognition is moving towards an increasingly detailed understanding of the content of natural images. Semantic part representations provide crucial cues about objects that help solve many vision tasks, such as detection, tracking, pose estimation, and even fine-grained recognition. With growing interest in semantic part segmentation, there is a clear need for a platform on which researchers can evaluate their segmentation methods. Such a platform should offer a publicly available dataset of challenging images with well-defined semantic part annotations, and a standard evaluation methodology so that the performance of different algorithms can be compared.
Here we propose the PASCAL Semantic Part (PASPart) dataset, which provides semantic part segmentations for the 20 categories of the PASCAL VOC2010 dataset. To ensure fair comparison, we also build a benchmark together with an evaluation server. The benchmark currently covers 7 articulated categories, chosen for their popularity in part-based methods and for the significant variability of their poses and of the sizes and shapes of their parts. An evaluation toolkit is also provided to enable a “plug and play” training and testing harness.
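As a rough illustration of the kind of metric such a benchmark might report, the sketch below computes per-part intersection-over-union (IoU) between a predicted and a ground-truth label map. This is a minimal, hypothetical example, not the toolkit's actual evaluation code: the label encoding (0 for background, 1..num_parts for parts) and the function name are assumptions.

```python
def part_iou(pred, gt, num_parts):
    """Per-part intersection-over-union between two integer label maps,
    given as lists of rows. Assumption: label 0 is background and parts
    are labelled 1..num_parts (this encoding is illustrative only)."""
    ious = {}
    for part in range(1, num_parts + 1):
        inter = union = 0
        for pred_row, gt_row in zip(pred, gt):
            for p, g in zip(pred_row, gt_row):
                hit_p, hit_g = p == part, g == part
                inter += hit_p and hit_g   # booleans count as 0/1
                union += hit_p or hit_g
        if union:  # skip parts absent from both maps
            ious[part] = inter / union
    return ious

# Toy 4x4 label maps with two parts.
gt = [[1, 1, 0, 0],
      [1, 1, 0, 0],
      [2, 2, 2, 0],
      [0, 0, 0, 0]]
pred = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [2, 2, 0, 0],
        [0, 0, 0, 0]]
ious = part_iou(pred, gt, num_parts=2)  # {1: 0.8, 2: 0.666...}
mean_iou = sum(ious.values()) / len(ious)
```

Averaging per-part IoUs over all images and parts yields a single mean-IoU score, which is one common way segmentation methods are ranked on benchmarks of this kind.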
If you use our toolkit, please cite: Peng Wang, Xiaohui Shen, Zhe Lin, Scott Cohen, Brian Price, Alan Yuille, Joint Object and Part Segmentation using Deep Learned Potentials, ICCV, 2015.
If you use our evaluation server, please cite: Xiaobai Liu, Nam-Gyu Cho, Peng Wang, Xiaochen Lian, Junhua Mao, Alan Yuille, Seong-Whan Lee, PASCAL Semantic Part: Dataset and Benchmark.
Dataset and Evaluation Server: