Overview

We provide a benchmark suite together with an evaluation server, so that authors can upload their results and obtain a ranking. The dataset contains more than 50,000 images: 30,462 for the training set, 10,000 for the validation set, and 10,000 for the test set. If you would like to submit your results, please register, log in, and follow the instructions on our submission page.

Note: We display only the highest-scoring submission from each participant.

Single-Person Human Parsing Track

Metrics

We use four metrics common in semantic segmentation and scene parsing evaluations, all variations on pixel accuracy and region intersection over union (IoU), as reported for FCN. The four metrics are pixel accuracy (%), mean accuracy (%), mean IoU (%), and frequency-weighted IoU (%). The Details link for each entry lists its per-class IoU (%).
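The four metrics above can all be derived from a single per-class confusion matrix accumulated over the evaluation set. The sketch below (a minimal reference implementation, not the official evaluation code; the function name and the handling of classes absent from both ground truth and predictions are our assumptions) shows the standard FCN-style formulas:

```python
import numpy as np

def parsing_metrics(conf):
    """Compute pixel accuracy, mean accuracy, mean IoU, and
    frequency-weighted IoU from a confusion matrix.

    conf[i, j] = number of pixels whose ground-truth class is i
    and whose predicted class is j.
    """
    conf = conf.astype(np.float64)
    total = conf.sum()
    tp = np.diag(conf)                 # correctly classified pixels per class
    gt = conf.sum(axis=1)              # ground-truth pixels per class
    pred = conf.sum(axis=0)            # predicted pixels per class
    union = gt + pred - tp             # |GT ∪ Pred| per class

    present = gt > 0                   # classes that occur in the ground truth
    valid = union > 0                  # classes with a defined IoU
    iou = np.where(valid, tp / np.maximum(union, 1), 0.0)

    pixel_acc = tp.sum() / total
    mean_acc = (tp[present] / gt[present]).mean()
    mean_iou = iou[valid].mean()
    fw_iou = (gt[present] / total * iou[present]).sum()
    return pixel_acc, mean_acc, mean_iou, fw_iou

# Example with a toy 2-class confusion matrix:
conf = np.array([[3, 1],
                 [2, 4]])
pa, ma, miou, fwiou = parsing_metrics(conf)
# pa = 0.7, ma ≈ 0.7083, miou ≈ 0.5357, fwiou ≈ 0.5429
```

Per-class IoU, as shown in each entry's Details, is the `iou` vector before averaging. Accumulating `conf` over all images before computing the metrics (rather than averaging per-image scores) matches the usual dataset-level evaluation.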

Method    Pixel accuracy (%)  Mean accuracy (%)  Mean IoU (%)  Frequency weighted IoU (%)  Details  Abbreviation  Submit Time
test1111  45.61               22.97              13.27         35.76                       Details  Abbreviation  2019-03-11 11:11:13
SHP       89.10               72.23              60.51         80.97                       Details  Abbreviation  2019-03-17 05:41:39
Baseline  89.59               70.83              60.45         81.70                       Details  Abbreviation  2019-03-18 16:58:30
PDP       85.23               64.94              51.85         75.31                       Details  Abbreviation  2019-03-12 10:47:34
L_TU      87.70               65.81              55.35         78.67                       Details  Abbreviation  2019-03-13 09:04:00
base      87.63               64.97              53.91         78.79                       Details  Abbreviation  2019-03-13 10:06:53
what?     79.72               42.31              36.53         66.09                       Details  Abbreviation  2019-03-14 13:51:19