Overview

We offer a benchmark suite together with an evaluation server, so that authors can upload their results and have them ranked. The dataset contains more than 50,000 images: 30,462 for training, 10,000 for validation, and 10,000 for testing. To submit your results, please register, log in, and follow the instructions on our submission page.

Note: We only display each participant's highest-scoring submission.

Single-Person Human Parsing Track

Metrics

We use four metrics from common semantic segmentation and scene parsing evaluations; all are variations on pixel accuracy and region intersection over union (IoU), as reported for FCN. The four metrics are pixel accuracy (%), mean accuracy (%), mean IoU (%), and frequency-weighted IoU (%). Per-class IoU (%) is available in each entry's details.
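All four metrics can be derived from a single confusion matrix over the pixel labels. The sketch below is not the official evaluation code (the function and variable names are my own); it shows one standard NumPy implementation of the FCN-style definitions, assuming predictions and ground truth are integer label maps of the same shape:

```python
import numpy as np

def parsing_metrics(pred, gt, num_classes):
    """Compute pixel accuracy, mean accuracy, mean IoU, and
    frequency-weighted IoU from integer label maps.

    pred, gt: arrays of the same shape with values in [0, num_classes).
    Returns the four metrics as fractions in [0, 1].
    """
    # Confusion matrix: hist[i, j] counts pixels with ground truth i
    # that were predicted as j.
    hist = np.bincount(
        num_classes * gt.reshape(-1) + pred.reshape(-1),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes).astype(np.float64)

    tp = np.diag(hist)              # correctly labelled pixels per class
    gt_total = hist.sum(axis=1)     # ground-truth pixels per class
    pred_total = hist.sum(axis=0)   # predicted pixels per class

    pixel_acc = tp.sum() / hist.sum()

    # Average only over classes present in the ground truth,
    # so absent classes do not distort the means.
    valid = gt_total > 0
    mean_acc = (tp[valid] / gt_total[valid]).mean()

    iou = tp[valid] / (gt_total + pred_total - tp)[valid]
    mean_iou = iou.mean()

    # Weight each class's IoU by its ground-truth pixel frequency.
    fw_iou = (gt_total[valid] / hist.sum() * iou).sum()
    return pixel_acc, mean_acc, mean_iou, fw_iou
```

Multiplying each result by 100 gives the percentages shown in the table below.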

Method                 Pixel accuracy (%)  Mean accuracy (%)  Mean IoU (%)  Freq. weighted IoU (%)  Submit Time
AttEdgeNet             87.40               67.17              54.17         78.55                   2018-04-09 04:28:56
Attention              84.52               54.83              44.60         74.03                   2018-05-03 15:45:48
n_v3                   86.73               61.93              51.53         77.30                   2018-06-04 01:58:10
JD_BUPT                87.42               65.86              54.44         78.34                   2018-06-10 12:59:11
densenet&deeplabv3+    81.56               49.56              37.92         70.65                   2018-05-27 05:33:01
no-ssl clean data 10w  83.73               53.13              42.69         73.04                   2018-05-30 07:46:04
xNet                   81.45               48.14              35.76         70.55                   2018-06-07 09:33:12
refine_net             82.01               46.88              37.43         70.57                   2018-06-10 13:05:12
PSPse                  88.92               67.78              57.90         80.59                   2018-06-10 15:52:56
(unnamed)              80.26               48.12              35.08         68.16                   2018-06-07 07:25:09