Extended to May 30, 2019, 11:59 PM GMT

We offer a benchmark suite together with an evaluation server, so that authors can upload their results and obtain a ranking. The Video Multi-Person Human Parsing Dataset contains 404 video sequences: 304 for the training set, 50 for the validation set, and 50 for the test set. Ground truth is provided for the training set. To submit results on the test set, please register, log in, and follow the instructions on our submission page.

Note: Only the highest-scoring submission of each participant is displayed.

Video Multi-Person Human Parsing Track


For this task, we adopt the standard intersection-over-union (IoU) criterion to evaluate global human parsing. Following Mask R-CNN, we use the mean of the mean Average Precision (mAP) values at IoU thresholds from 0.5 to 0.95 in increments of 0.05 to evaluate human instance segmentation, referred to as APrh. Similarly, the mean of the mAP scores at overlap thresholds from 0.1 to 0.9 in increments of 0.1, denoted APr, is used to evaluate instance-level human parsing. The Details column shows APrh and APr at each IoU threshold.
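The threshold-averaging scheme above can be sketched as follows. This is an illustrative toy, not the official evaluation code: the `ap_at_threshold` helper uses a simplified precision proxy over hypothetical per-instance best-match IoUs, whereas the real AP integrates a precision-recall curve over score-ranked predictions.

```python
def iou(mask_a, mask_b):
    """IoU between two binary masks, given as sets of pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

def ap_at_threshold(best_ious, t):
    """Toy AP proxy: fraction of predicted instances whose best-matched
    ground-truth IoU meets the threshold t (a simplification of true AP)."""
    if not best_ious:
        return 0.0
    return sum(1 for v in best_ious if v >= t) / len(best_ious)

def mean_ap(best_ious, thresholds):
    """Average the per-threshold scores, as done for APrh and APr."""
    return sum(ap_at_threshold(best_ious, t) for t in thresholds) / len(thresholds)

# APrh thresholds: 0.50 to 0.95 in steps of 0.05 (as for Mask R-CNN)
aph_thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]
# APr thresholds: 0.1 to 0.9 in steps of 0.1
apr_thresholds = [round(0.10 + 0.10 * i, 2) for i in range(9)]

# Hypothetical best-match IoUs for four predicted instances
best_ious = [0.92, 0.74, 0.55, 0.31]
print(round(mean_ap(best_ious, aph_thresholds), 3))  # -> 0.4
print(round(mean_ap(best_ious, apr_thresholds), 3))  # -> 0.667
```

Note that APr uses a wider, coarser threshold range (0.1 to 0.9) than APrh (0.5 to 0.95), so lower-overlap matches still contribute to instance-level parsing scores.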

Method              Mean IoU  Mean APrh  Mean APr  Details  Abbreviation  Submit Time
mix_en_sni_dou      40.82     59.87      24.08     Details  Abbreviation  2019-02-13 14:20:07
test4               37.91     59.87      24.08     Details  Abbreviation  2019-03-11 11:40:22
20190408a_epoch073  39.19     62.81      26.08     Details  Abbreviation  2019-04-10 15:26:25
testmixg            41.02     59.87      24.08     Details  Abbreviation  2019-05-01 14:48:08
                    55.77     76.33      48.72     Details  Abbreviation  2019-05-07 06:51:39
                    44.32     63.83      28.58     Details  Abbreviation  2019-05-07 10:35:46