The LIP Dataset
We present a new large-scale dataset focusing on semantic understanding of people. The dataset is an order of magnitude larger and more challenging than similar previous attempts: it contains 50,000 images with elaborate pixel-wise annotations covering 19 semantic human part labels, together with 2D human poses annotated with 16 key points. The images, collected from real-world scenarios, show people in challenging poses and viewpoints, with heavy occlusion, varied appearance, and low resolution. This challenge and benchmark are fully supported by the Human-Cyber-Physical Intelligence Integration Lab of Sun Yat-sen University.
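For readers who want to work with these annotations programmatically, the sketch below shows one way to read a pixel-wise part-label map and a 16-keypoint pose record. The file paths, helper names, and CSV column order are illustrative assumptions for this sketch, not the official release format or loader.

# Minimal sketch of reading one LIP-style annotation pair: a pixel-wise
# part-label map and a 16-keypoint pose record. The file layout, helper
# names, and CSV column order below are assumptions for illustration,
# not the official loader shipped with the dataset.
import csv
import numpy as np
from PIL import Image

def load_part_labels(label_png_path):
    # Single-channel label map: each pixel holds a part id
    # (0 = background, 1..19 = the 19 semantic human part labels).
    return np.array(Image.open(label_png_path))  # shape (H, W)

def load_poses(csv_path):
    # Assumed format: one row per image, image name followed by
    # 16 keypoints written as (x, y, visibility) triples.
    poses = {}
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            name, raw = row[0], row[1:]
            vals = [float(v) if v.strip() else np.nan for v in raw]
            poses[name] = np.array(vals).reshape(16, 3)
    return poses

# Example usage (paths are hypothetical):
# labels = load_part_labels("parsing_annotations/100034_483681.png")
# print("part ids present in this image:", np.unique(labels))
# poses = load_poses("pose_annotations/lip_train_set.csv")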
Citation
If you use our datasets, please consider citing the relevant papers:
"Instance-level Human Parsing via Part Grouping Network”
[Code]
Ke Gong, Xiaodan Liang, Yicheng Li, Yimin Chen, Ming Yang, Liang Lin;
European Conference on Computer Vision (ECCV Oral), 2018.
"Adaptive Temporal Encoding Network for Video Instance-level Human Parsing”
[Code]
Qixian Zhou, Xiaodan Liang, Ke Gong, Liang Lin;
ACM International Conference on Multimedia (ACM MM), 2018.
"Look into Person: Joint Body Parsing & Pose Estimation Network and A New Benchmark”
[Code]
Xiaodan Liang, Ke Gong, Xiaohui Shen, and Liang Lin;
IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2018.
"Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing"
[Code]
Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, Liang Lin;
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
"Human Parsing With Contextualized Convolutional Neural Network"
Xiaodan Liang, Chunyan Xu, Xiaohui Shen, Jianchao Yang, Si Liu, Jinhui Tang, Liang Lin, Shuicheng Yan;
IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), DOI: 10.1109/TPAMI.2016.2537339, 2016.
News
- May 24, 2018 The deadline for our three challenge tracks has been extended to June 10, 2018, 23:59 UTC/GMT+0.
- April 03, 2018 We have opened a new track for Multi-Person Human Parsing, and this track is now ready for submission.
- March 31, 2018 Welcome to our CVPR'18 workshop on Visual Understanding of Humans in Crowd Scene and the 2nd Look Into Person (LIP) Challenge.
- January 05, 2018 We will organize a workshop at CVPR 2018.
- August 31, 2017 The LIP Multiple-Human Dataset is released!
- March 05, 2017 Welcome to our CVPR'17 workshop on Visual Understanding of Humans in Crowd Scene and the 1st Look Into Person (LIP) Challenge.
- January 07, 2017 We will organize a workshop at CVPR 2017.
- January 01, 2017 The Look Into Person Challenge is now open for submission.
- December 01, 2016 The Look Into Person website is online!
License
The LIP Dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data provided that you agree to our license terms.