Overview

We offer a benchmark suite together with an evaluation server, so that authors can upload their results and obtain a ranking. The dataset contains more than 50,000 images: 30,462 for the training set, 10,000 for the validation set, and 10,000 for the test set. To submit your results, please register, log in, and follow the instructions on our submission page.

Note: We only display the best-scoring submission from each participant.

Single-Person Human Parsing Track

Metrics

We use four metrics from common semantic segmentation and scene parsing evaluations; they are variations on pixel accuracy and region intersection over union (IoU), as reported in FCN. The four metrics are pixel accuracy (%), mean accuracy (%), mean IoU (%), and frequency weighted IoU (%). Per-class IoU (%) is reported in the details of each entry.
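
For reference, the sketch below computes these four metrics from an accumulated confusion matrix, following the definitions in the FCN paper. It is an illustrative assumption of how the scores can be reproduced, not the official evaluation code; the NUM_CLASSES value and function names are placeholders.

```python
import numpy as np

NUM_CLASSES = 20  # assumption: background plus 19 part labels

def confusion_matrix(gt, pred, num_classes=NUM_CLASSES):
    """Accumulate a (num_classes x num_classes) confusion matrix.

    gt, pred: integer label maps of identical shape.
    Entry [i, j] counts pixels with ground-truth label i predicted as j.
    """
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def fcn_metrics(hist):
    """Return (pixel acc, mean acc, mean IoU, freq-weighted IoU) from a confusion matrix."""
    tp = np.diag(hist)            # correctly labelled pixels per class
    gt_total = hist.sum(axis=1)   # ground-truth pixels per class
    pred_total = hist.sum(axis=0) # predicted pixels per class

    with np.errstate(divide="ignore", invalid="ignore"):
        pixel_acc = tp.sum() / hist.sum()
        mean_acc = np.nanmean(tp / gt_total)
        iou = tp / (gt_total + pred_total - tp)  # per-class IoU
        mean_iou = np.nanmean(iou)
        freq = gt_total / hist.sum()
        fw_iou = (freq[freq > 0] * iou[freq > 0]).sum()
    return pixel_acc, mean_acc, mean_iou, fw_iou
```

In practice the confusion matrices of all test images are summed before `fcn_metrics` is applied, so the scores reflect the whole test set rather than a per-image average.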

Method | Pixel accuracy (%) | Mean accuracy (%) | Mean IoU (%) | Frequency weighted IoU (%) | Submit Time
WhiskNet | 86.16 | 57.95 | 47.74 | 76.45 | 2017-06-03 14:22:05
Self-Supervised Neural Aggregation Networks | 87.29 | 63.35 | 52.26 | 78.25 | 2017-06-04 13:23:59
BUPTMM-Parsing | 84.93 | 55.62 | 45.44 | 74.60 | 2017-06-04 14:54:06
VSNet-SLab+Samsung | 87.06 | 66.73 | 54.13 | 77.98 | 2017-06-04 15:14:38