Towards Autonomous Training of Scene-Specific Pedestrian Detectors in Visual Surveillance Environments
2019-03-26T01:50:17Z (GMT)
Pedestrian detection is of paramount importance for intelligent visual surveillance of people. Despite more than two decades of extensive research, the performance of generic pedestrian detectors remains prone to the dataset shift phenomenon, whereby target surveillance scenes differ significantly from the training data used to build the detector. Numerous scene-specific approaches have been developed, but they are limited by their requirement for manual labeling or their dependence on prior models. This research focuses on the development of a virtually autonomous training (VAT) framework that trains scene-specific pedestrian detectors without requiring any manual labeling or relying on any prior model or training data.