Vision-based Human Activity Recognition Using Local Phase Quantization
DOI: https://doi.org/10.37965/jait.2024.0351

Keywords: LPQ, machine learning, SVM, texture-based, vision-based HAR

Abstract
Human activity recognition (HAR) has been one of the most active and interesting areas of research in recent years due to its wide range of applications in fields such as healthcare, security and surveillance, robotics, gaming, and entertainment. However, vision-based human activity recognition remains challenging, as input sequences may suffer from cluttered backgrounds, varying illumination, occlusions, degraded video quality, blurring, and similar factors. In the literature, several state-of-the-art methods have been trained and tested on different datasets but still fall short of adequate performance. Moreover, extracting effective features and combining appropriate methods is one of the most challenging tasks in realistic video. This paper proposes an efficient, frequency-based, blur-invariant local phase quantization (LPQ) feature extractor combined with a multiclass SVM classifier to overcome these challenges. The feature is invariant to camera motion, misfocused optics, movements in the scene, and environmental conditions. The proposed feature vector is fed to the classifier to recognize human activities. Experiments were conducted on two publicly available datasets, UCF101 and HMDB51, achieving accuracies of 99.79% and 98.67%, respectively. The approach also outperforms existing state-of-the-art approaches in terms of computational cost without compromising the accuracy of HAR.
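To make the described pipeline concrete, the following is a minimal sketch of LPQ feature extraction followed by a multiclass SVM, written in Python with NumPy, SciPy, and scikit-learn. The window size, frequency parameter, per-frame histogram averaging, and SVM settings are illustrative assumptions and do not reflect the authors' exact configuration.

```python
# Sketch of an LPQ + multiclass SVM pipeline for grayscale video frames.
# Assumptions (not from the paper): 7x7 window, frame-histogram averaging,
# RBF-kernel one-vs-rest SVM.
import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import SVC

def lpq_descriptor(img, win_size=7):
    """Compute a 256-bin Local Phase Quantization histogram for one frame."""
    img = np.float64(img)
    r = (win_size - 1) // 2
    x = np.arange(-r, r + 1)[np.newaxis]          # 1 x win_size coordinate row
    a = 1.0 / win_size                            # lowest non-zero frequency

    # Separable 1-D STFT basis vectors at frequency a (plus the DC vector).
    w0 = np.ones_like(x, dtype=np.complex128)
    w1 = np.exp(-2j * np.pi * a * x)
    w2 = np.conj(w1)

    def filt(row, col):
        # 2-D filtering as two 1-D convolutions (columns, then rows).
        return convolve2d(convolve2d(img, col.T, mode='valid'), row, mode='valid')

    # Frequency points u1=(a,0), u2=(0,a), u3=(a,a), u4=(a,-a).
    freq_resp = [filt(w1, w0), filt(w0, w1), filt(w1, w1), filt(w1, w2)]

    # Quantize the signs of the real and imaginary parts -> 8 bits per pixel.
    bits = []
    for f in freq_resp:
        bits.append(np.real(f) >= 0)
        bits.append(np.imag(f) >= 0)
    codes = np.zeros(freq_resp[0].shape, dtype=np.uint8)
    for i, b in enumerate(bits):
        codes += (b.astype(np.uint8) << i)

    # Normalized 256-bin histogram is the blur-insensitive LPQ feature vector.
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / (hist.sum() + 1e-12)

def video_feature(frames, win_size=7):
    """Average per-frame LPQ histograms into one descriptor for a clip."""
    return np.mean([lpq_descriptor(f, win_size) for f in frames], axis=0)

# Hypothetical usage: clips is a list of videos (each a list of grayscale
# frames as 2-D arrays) and labels the corresponding activity classes.
# features = np.stack([video_feature(clip) for clip in clips])
# clf = SVC(kernel='rbf', decision_function_shape='ovr').fit(features, labels)
```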
License
Copyright (c) 2024 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.