Gait Recognition

Prof. Shiqi Yu has worked on gait recognition for more than 15 years. He designed and created CASIA-B, which is now one of the most popular gait benchmark datasets, together with its related publication.

ReSGait

Introduction

To evaluate human gait in unconstrained scenarios, we release the real-scene gait dataset (ReSGait), which was collected in unconstrained scenarios without controlling any environmental parameters. Benchmark code can be found at this GitHub repo.

Dataset Description

The ReSGait dataset is composed of 172 subjects and 870 video sequences recorded over 15 months. The videos are labeled with gender, clothing, carrying condition, the kind of walking route, and whether a mobile phone was used. In ReSGait, subjects perform actions such as making phone calls and jumping while they walk. Furthermore, subjects can freely choose their clothing, viewpoints change with the walking routes, the long time span leads to changes in appearance, and many other real-world situations are incorporated, making this one of the most realistic and difficult gait datasets in the literature.

Each silhouette sequence has a corresponding pose (skeleton key-point) sequence. The silhouette images are 128x128, and the human pose follows the standard COCO-18 key-point skeleton format.
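
As a quick orientation, below is a minimal sketch in Python (assuming OpenCV is installed) that lists the COCO-18 key-point order as used by OpenPose-style skeletons and loads one normalized silhouette frame; the file path is only illustrative and follows the folder layout shown further below.

import cv2  # pip install opencv-python

# COCO-18 key-point order of OpenPose-style skeletons (assumed ordering).
COCO18_KEYPOINTS = [
    "nose", "neck",
    "r_shoulder", "r_elbow", "r_wrist",
    "l_shoulder", "l_elbow", "l_wrist",
    "r_hip", "r_knee", "r_ankle",
    "l_hip", "l_knee", "l_ankle",
    "r_eye", "l_eye", "r_ear", "l_ear",
]

# Load one normalized silhouette frame (illustrative path, see the folder
# description below) and check that it is a 128x128 grayscale image.
frame = cv2.imread("silhouette/001/17_11_16_00/normalization/001.jpg",
                   cv2.IMREAD_GRAYSCALE)
assert frame is not None and frame.shape == (128, 128)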

The camera was placed in a corner, approximately 1.2 m above the ground. Videos were recorded from October 2017 to April 2019, and the time span is 15 months. During this period, the temperature changed noticeably with the seasons, from 13 °C to 35 °C, which led to obvious changes in people's clothes. Typical clothes were T-shirts and short pants in summer, while coats and long pants were commonly worn in winter.

Data description

Folder description

.
├── README.txt
├── label.csv
├── pose
│   ├── 001
│   │   ├── 17_11_16_00.mat
│   │   └── ...
│   ├── 002
│   └── ...
└── silhouette
    ├── 001
    │   └── 17_11_16_00
    │       └── normalization
    │           ├── 001.jpg
    │           └── ...
    ├── 002
    └── ...
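
As a rough sketch of how the layout above can be traversed, the Python snippet below pairs each silhouette sequence with its pose .mat file (readable with scipy.io.loadmat); the dataset root name and the variable names stored inside the .mat files are assumptions and should be checked after unzipping.

from pathlib import Path
import scipy.io as sio  # pip install scipy

root = Path("ReSGait")  # dataset root after unzipping (name is an assumption)

for subject_dir in sorted((root / "silhouette").iterdir()):
    for seq_dir in sorted(subject_dir.iterdir()):
        # Normalized 128x128 silhouette frames of this sequence.
        frames = sorted((seq_dir / "normalization").glob("*.jpg"))
        # The matching pose sequence: pose/<SubjectID>/<sequence>.mat
        pose = sio.loadmat(str(root / "pose" / subject_dir.name /
                               (seq_dir.name + ".mat")))
        print(subject_dir.name, seq_dir.name, len(frames), list(pose.keys()))
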
Detailed Label description

The detailed labels can be found in label.csv. The following is one example row from label.csv.

VideoID          cloth  phoneUse  gender  carry  walkingRoute  SubjectID  ShootingDate
001_17_10_20_00  0      0         0       0      0             1          17_10_20_00
...              ...    ...       ...     ...    ...           ...        ...

The VideoID is formatted as SubjectID_year_month_day_order. For example, 001_17_10_20_00 means that the video belongs to Subject 001 and was collected on October 20, 2017. For ease of use, the other labels use numbers to distinguish categories, as listed in the table below.
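
For illustration, a small helper (hypothetical, not part of the released code) that splits a VideoID into its components:

from datetime import date

def parse_video_id(video_id):
    """Split a VideoID such as '001_17_10_20_00' into its components."""
    subject_id, yy, mm, dd, order = video_id.split("_")
    # Two-digit years; all recordings were made between 2017 and 2019.
    return subject_id, date(2000 + int(yy), int(mm), int(dd)), int(order)

print(parse_video_id("001_17_10_20_00"))
# -> ('001', datetime.date(2017, 10, 20), 0)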

Covariate      Label                                   Code
cloth          Normal / Coat / Skirt                   0 / 1 / 2
phoneUse       No / Yes                                0 / 1
gender         Male / Female                           0 / 1
carry          Empty / Small item / Large item / Bag   0 / 1 / 2 / 3
walkingRoute   Straight / Curve                        0 / 1
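
A small decoding sketch in Python that mirrors the table above; it assumes label.csv is comma-separated with a header row matching the columns shown earlier (the dictionary below is ours, not part of the dataset files).

import csv

# Numeric codes -> readable labels, mirroring the table above.
LABEL_CODES = {
    "cloth":        {0: "Normal", 1: "Coat", 2: "Skirt"},
    "phoneUse":     {0: "No", 1: "Yes"},
    "gender":       {0: "Male", 1: "Female"},
    "carry":        {0: "Empty", 1: "Small item", 2: "Large item", 3: "Bag"},
    "walkingRoute": {0: "Straight", 1: "Curve"},
}

with open("label.csv", newline="") as f:
    for row in csv.DictReader(f):
        decoded = {k: LABEL_CODES[k][int(row[k])] for k in LABEL_CODES}
        print(row["VideoID"], decoded)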

Download

Link: http://cse.sustech.edu.cn/faculty/~yusq/ReSGaitDataset.zip

Distribution

The database is available for research purposes only.

Reference

@inproceedings{resgait,
 title={{ReSGait}: The Real-Scene Gait Dataset},
 author={Zihao Mu and Francisco M. Castro and Manuel J. Mar\'in-Jim\'enez and Nicol\'as Guil and Yan-ran Li and Shiqi Yu},
 booktitle={International Joint Conference on Biometrics (IJCB 2021)},
 year={2021}
}


Face Detection

Prof. Yu developed a fast and highly accurate face detection algorithm. Due to its high performance, the algorithm has been widely used in industry. It has also been released as open source at https://github.com/ShiqiYu/libfacedetection and is widely used worldwide.
