Examples of human detection:
----------------------------
(samples are from the ETHZ dataset)

- single file:
detectorPLS.exe -d -c Config.hd.INRIA.64x128.2s.txt -i input/humans/image_00000696_0.png -o ./ -b -s
(detects humans in input/humans/image_00000696_0.png and writes the result to the current directory)

- multiple files in a directory:
detectorPLS.exe -d -c Config.hd.INRIA.64x128.2s.txt -i input/humans -o output/ -b -s
(detects humans in the images in input/humans and writes the results to the output directory)

Note: if you don't want to run the software to learn the PLS models for faces, you can copy the models previously learned from the directory 'training/faces/models' to the current directory. That way, you can skip steps 1 and 2 below.

Example to learn a new model for faces without retraining (about 1 minute):
---------------------------------------------------------------------------
(positive samples from the Caltech dataset, negative samples from the INRIA dataset)

-> 1st: learn the PLS model for the data:
detectorPLS.exe -t -c ConfigLearnFaces.txt -n training/faces/negative/ -p training/faces/positive/ -m Learned.Faces -I 0 -M 5

-> 2nd: create a configuration file named Config.fd.Caltech.32x42.1s.txt (used to execute face detection):
As a result of the previous step, you'll have a file called Learned.Faces.ret00.yml in the current directory. Create a file called Config.fd.Caltech.32x42.1s.txt with the following lines (don't forget the line with the last #):
-- cut here ---
# model   scale
#
-- cut here ---

-> 3rd: execute face detection:
detectorPLS.exe -d -c Config.fd.Caltech.32x42.1s.txt -i input/faces -o output/ -b -s

Example to learn a new model for faces with retraining (about 20 minutes):
--------------------------------------------------------------------------
-> 1st: learn the PLS model for the data:
detectorPLS.exe -t -c ConfigLearnFaces.txt -n training/faces/negative/ -p training/faces/positive/ -m Learned.Faces -T 0.01 -I 1 -M 5 -R 3000

-> 2nd: create a configuration file named ConfigFace.Ret.txt:
As a result of the previous step, you'll have two files called Learned.Faces.ret??.yml in the current directory. Create a file called ConfigFace.Ret.txt with the following lines (don't forget the line with the last #):
-- cut here ---
# model   scale
#
-- cut here ---

-> 3rd: execute face detection:
detectorPLS.exe -d -c ConfigFace.Ret.txt -i input/faces -o output/ -b -s
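
A note on the configuration files created in the 2nd steps above: the layout between the '-- cut here ---' markers is only a sketch, assuming the detection configuration lists one learned model file per line followed by its scale, ending with the final '#'. The model file name below is the one produced in the example above; the scale value 1 is purely illustrative and not a value taken from this distribution:

-- cut here ---
# model                   scale
Learned.Faces.ret00.yml   1
#
-- cut here ---

(For the retraining example, the assumption would be that each of the two Learned.Faces.ret??.yml files gets its own line.)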
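
If you take the shortcut from the note above and reuse the previously learned face models instead of training your own, a minimal sketch of the copy step (assuming the models are stored as .yml files, like the Learned.Faces.ret00.yml produced above) is:

copy training\faces\models\*.yml .
(or, in a Unix-like shell: cp training/faces/models/*.yml .)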