Feature extraction for RGB-D cameras

Document Type : Research Paper

Authors

Electronic and Communication Engineering Department, University of Baghdad, Baghdad, Iraq

Abstract

A new feature extraction method for RGB-D cameras is proposed. The method applies the Histogram of Oriented Gradients (HOG) to the RGB image and the Histogram of Oriented Depths (HOD) to the depth map, and combines the two descriptors into a single feature vector that describes the image more fully. The method is benchmarked on human action recognition from still images and outperforms HOG and HOD used individually.
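As a rough illustration of the fusion idea described above, the following Python sketch computes a HOG descriptor on the RGB frame, the same oriented-gradient histogram on the depth map (HOD), and concatenates them into one feature vector. It is a minimal sketch under stated assumptions: it uses scikit-image's hog routine, illustrative parameter values (9 orientations, 8x8-pixel cells, 2x2-cell blocks), and a hypothetical function name hog_hod_features; simple concatenation stands in for whatever fusion rule the paper actually uses.

    # Sketch only: scikit-image HOG on RGB intensity and on depth, then concatenation.
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog

    def hog_hod_features(rgb_image, depth_image,
                         orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)):
        """Return a fused HOG (colour) + HOD (depth) descriptor for one RGB-D frame."""
        # HOG on the intensity channel of the RGB image
        hog_vec = hog(rgb2gray(rgb_image),
                      orientations=orientations,
                      pixels_per_cell=pixels_per_cell,
                      cells_per_block=cells_per_block,
                      block_norm='L2-Hys')

        # HOD: the same oriented-gradient histogram, computed on the depth map
        depth = depth_image.astype(np.float64)
        depth /= depth.max() + 1e-9          # normalise depth to [0, 1]
        hod_vec = hog(depth,
                      orientations=orientations,
                      pixels_per_cell=pixels_per_cell,
                      cells_per_block=cells_per_block,
                      block_norm='L2-Hys')

        # Fuse the two descriptors into a single feature vector
        return np.concatenate([hog_vec, hod_vec])

The fused vector can then be fed to any standard classifier (for example an SVM) for the action recognition benchmark.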

Keywords

Volume 13, Issue 1
March 2022
Pages 3991-3995
  • Receive Date: 07 November 2021
  • Revise Date: 25 December 2021
  • Accept Date: 10 January 2022