A hybrid method for face detection and gender recognition using a transfer learning and fine-tuning approach with deep convolutional neural networks and the YOLO algorithm

Document Type: Research Paper

Authors

Department of Electrical Engineering, Ahar Branch, Islamic Azad University, Ahar, Iran

Abstract

This study examines the shortcomings of existing face detection methods and proposes a combined deep learning approach for face detection and gender recognition. Deep learning algorithms can learn high-level features and have therefore attracted considerable attention in machine vision. The proposed hybrid method, called Hyper-Yolo-face, uses deep Convolutional Neural Networks (CNNs), the YOLO algorithm, and Local Binary Patterns (LBPs) to detect faces and recognize gender. Reducing the number of parameters is a key challenge in deep networks, both for memory consumption and for the amount of computation required. The proposed method builds on the AlexNet model and a generalized loss function for version 3 of the YOLO algorithm, which improves precision. It applies small filters during transfer learning and fine-tuning of the network layers and uses a new bounding-box regression loss in YOLO to make the detector better suited to multi-scale face detection. In the proposed pipeline, faces are first detected and cropped by the modified YOLO detector; an LBP operator is then applied so that richer texture information enters the AlexNet network, which estimates further attributes, including gender. Experiments on the AFLW, FDDB, and PASCAL datasets show that the proposed method significantly improves recognition precision.
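The exact formulations are not given in this abstract, so the two sketches below are illustrative only. The first concerns the generalized bounding-box regression loss mentioned for YOLOv3: one widely used generalization of the IoU-based regression objective is the GIoU loss, sketched here for axis-aligned boxes in (x1, y1, x2, y2) form. The function name giou_loss, the box format, and the mean reduction are assumptions, not the authors' definition.

import numpy as np

def giou_loss(pred, target, eps=1e-9):
    """Generalized-IoU-style regression loss for axis-aligned boxes.

    pred, target: arrays of shape (N, 4) in (x1, y1, x2, y2) order.
    Illustrative sketch only; not necessarily the paper's exact loss.
    """
    # Areas of the predicted and ground-truth boxes.
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])

    # Intersection rectangle.
    ix1 = np.maximum(pred[:, 0], target[:, 0])
    iy1 = np.maximum(pred[:, 1], target[:, 1])
    ix2 = np.minimum(pred[:, 2], target[:, 2])
    iy2 = np.minimum(pred[:, 3], target[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest axis-aligned box enclosing both boxes.
    cx1 = np.minimum(pred[:, 0], target[:, 0])
    cy1 = np.minimum(pred[:, 1], target[:, 1])
    cx2 = np.maximum(pred[:, 2], target[:, 2])
    cy2 = np.maximum(pred[:, 3], target[:, 3])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (area_c - union) / (area_c + eps)
    return (1.0 - giou).mean()  # loss shrinks as the boxes align

The second sketch illustrates the downstream stage: faces cropped by the detector are transformed with an LBP operator and fed to an AlexNet-style network whose final layer is replaced by a two-way gender head. It assumes PyTorch/torchvision, an 8-neighbour radius-1 LBP, and the helper names lbp_map and build_gender_net; the paper's preprocessing and classifier head may differ.

import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def lbp_map(gray):
    """Basic 8-neighbour, radius-1 LBP code map of a 2-D grayscale array."""
    g = gray.astype(np.float32)
    c = g[1:-1, 1:-1]                       # centre pixels
    # Offsets of the 8 neighbours, clockwise from the top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

def build_gender_net():
    """AlexNet backbone with its last layer replaced by a 2-way gender head."""
    net = models.alexnet()                  # random weights; load pretrained ones if desired
    net.classifier[6] = nn.Linear(4096, 2)  # two gender logits
    return net

# Hypothetical usage: face_crop is a grayscale face cut out by the detector,
# resized to 226x226 so the resulting LBP map is 224x224.
# lbp = lbp_map(face_crop)
# x = torch.from_numpy(np.repeat(lbp[None, None], 3, axis=1)).float() / 255.0
# logits = build_gender_net()(x)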

Keywords

Volume 14, Issue 1
January 2023
Pages 2373-2381
  • Receive Date: 22 April 2022
  • Revise Date: 26 May 2022
  • Accept Date: 06 June 2022