Modeling Mutual Visibility Relationship in Pedestrian Detection
Pedestrians that overlap in an image are difficult to detect; however, we observe that these pedestrians carry useful mutual visibility information. When pedestrians are found to overlap in the image region, there are two types of mutual visibility relationships among their parts:
If you use our code or dataset, please cite the following papers:
(a) The mutual visibility deep model used for inference and for fine-tuning parameters
(b) The detailed connections and parts model for pedestrian 1.
(a) Two rectangular regions used for approximating the pedestrian region
(b) An example with the left-head-shoulder, left-torso, and left-leg parts overlapping with the pedestrian region of the left person.
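The part-overlap test suggested by the caption above can be sketched as follows; this is an illustrative sketch only, not the authors' implementation, and the part names, box coordinates, and overlap threshold are hypothetical:

```python
def overlap_area(a, b):
    """Intersection area of two boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def overlapping_parts(parts, region, thresh=0.3):
    """Names of parts whose area overlaps `region` by more than `thresh`."""
    hits = []
    for name, box in parts.items():
        area = (box[2] - box[0]) * (box[3] - box[1])
        if area > 0 and overlap_area(box, region) / area > thresh:
            hits.append(name)
    return hits

# Hypothetical rectangular part boxes for one pedestrian, tested against
# the approximate region of an overlapping pedestrian to its left.
parts = {
    "left-head-shoulder": (40, 0, 60, 30),
    "left-torso": (40, 30, 60, 70),
    "left-leg": (40, 70, 60, 120),
    "right-torso": (60, 30, 80, 70),
}
left_person_region = (0, 0, 55, 120)
print(overlapping_parts(parts, left_person_region))
# → ['left-head-shoulder', 'left-torso', 'left-leg']
```

Only the left-side parts exceed the overlap threshold here, mirroring the situation in panel (b); in the paper's model, such overlapping part pairs are the ones whose visibilities are mutually informative.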
Examples of the correlations between different parts learned by the deep model
Experimental results on the Caltech-Train dataset for HOG, LatSVM-V2, FPDW, D-Isol and our mutual visibility approach, i.e. D-Mut
Experimental results on the ETH dataset for HOG, LatSVM-V2, FPDW, D-Isol and our mutual visibility approach, i.e. D-Mut
Experimental results on the Caltech-Test dataset for HOG, LatSVM-V2, FPDW, D-Isol and our mutual visibility approach, i.e. D-Mut
Experimental results on the PETS2009 dataset for LatSVM-V2, FPDW, D-Isol and our mutual visibility approach, i.e. D-Mut
Comparison of the detection results of D-Isol and D-Mut on the Caltech-Train dataset and the ETH dataset. All results are obtained at 1 FPPI.
Experimental results on detecting isolated pedestrians (left) and overlapped pedestrians (right) on the Caltech-Train dataset.