ETHZ Dataset for Appearance-Based Modeling

The experimental results for the paper Learning Discriminative Appearance-Based Models Using Partial Least Squares, published at SIBGRAPI'2009, were obtained using the ETHZ dataset, which contains a large number of different people captured in uncontrolled conditions. The video sequences were recorded from moving cameras, which introduces a wide range of variation in people's appearance.

Data

We used the ground-truth locations of people in the videos to crop each person, then created a directory containing the samples of each person (p0?? - p0??) for each video sequence. The samples in the directories keep their original size; in our experiments they were resized to 32x64 pixels. For each person, we chose one sample to learn the appearance-based model and used the remaining samples for classification (this procedure was repeated a few times and the results were averaged; a sketch of the protocol is given below, after the download link). Performance is reported as the overall recognition rate. The next figure shows a few examples of the cropped samples contained in the first video sequence of the dataset.

[Figure: cropped samples of person 1 (p001), person 14 (p014), and person 23 (p023) from the first video sequence.]

Cropped samples from all three sequences: [zip file (146 MB)]
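
For concreteness, the evaluation protocol described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the directory layout and image file extension are assumed from the description above, Pillow and NumPy are assumed available, and a simple 1-nearest-neighbor matcher stands in for the PLS-based appearance models.

import glob
import os
import random

import numpy as np
from PIL import Image

def load_person_samples(seq_dir):
    # Load every cropped sample, resized to 32x64 pixels, grouped by
    # person ID. Assumes one subdirectory per person (seq_dir/p001, ...),
    # following the layout described above; the .png extension is an
    # assumption.
    samples = {}
    for person_dir in sorted(glob.glob(os.path.join(seq_dir, "p0*"))):
        pid = os.path.basename(person_dir)
        vecs = []
        for path in sorted(glob.glob(os.path.join(person_dir, "*.png"))):
            img = Image.open(path).convert("L").resize((32, 64))
            vecs.append(np.asarray(img, dtype=np.float32).ravel())
        if vecs:
            samples[pid] = vecs
    return samples

def overall_recognition_rate(samples, n_runs=5, seed=0):
    # One randomly chosen training sample per person; all remaining
    # samples are classified. Repeated n_runs times and averaged, as in
    # the protocol above. The 1-NN matcher below is only a placeholder
    # for the paper's PLS-based models.
    rng = random.Random(seed)
    rates = []
    for _ in range(n_runs):
        train = {}
        test = []
        for pid, vecs in samples.items():
            i = rng.randrange(len(vecs))
            train[pid] = vecs[i]
            test.extend((pid, v) for j, v in enumerate(vecs) if j != i)
        correct = sum(
            1
            for pid, v in test
            if pid == min(train, key=lambda p: np.linalg.norm(v - train[p]))
        )
        rates.append(correct / len(test))
    return float(np.mean(rates))

# Example usage (the path is illustrative):
# print(overall_recognition_rate(load_person_samples("ethz/seq1")))

The number of repetitions and the random seed above are illustrative defaults, not values taken from the paper.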

References

You should include the following references if you use this dataset in your work:

W.R. Schwartz, L.S. Davis. Learning Discriminative Appearance-Based Models Using Partial Least Squares. Proceedings of the XXII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'2009), Rio de Janeiro, Brazil, October 11-14, 2009. [pdf]

Note: The samples used in the experiments were obtained from the ETHZ dataset, which consists of the three video sequences used in the paper Depth and Appearance for Mobile Scene Analysis, by A. Ess, B. Leibe, and L. Van Gool, ICCV'07. You also need to cite their paper when using this dataset (see instructions here).