Face and facial expression recognition using local directional feature structure

Document Type: Research Paper

Authors

1 Dept. of ISE, Dr. Ambedkar Institute of Technology, Bengaluru, India

2 Dept. of CSE, Shri Krishna Institute of Technology, Bengaluru, India

Abstract

Facial expression recognition is used in many application areas, for example security, computer vision and medical science. Facial expressions are part of non-verbal communication (together with eye contact and other cues), and the emotions they convey help to identify what an individual is feeling. AI research on facial recognition systems has been carried out for more than a decade, and many machine learning algorithms are trained and tested on facial expression data so that a given expression can be classified correctly. This paper presents a new facial expression recognition system, the local directional feature structure (LDFS). LDFS uses different features of the face (i.e., eyebrows, nose, mouth and eyes). The face is detected and aligned using edge detection, which locates the face, aligns it and handles position variations, and then extracts the specific features used to identify the emotions. Two datasets, CK+ and JAFFE, were used for the qualitative and quantitative experiments on facial expressions. The proposed approach shows an improvement when compared with existing systems.
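
To make the described pipeline concrete, the sketch below shows one way a local-directional feature descriptor of this kind can be computed and fed to a classifier. It is only an illustration under stated assumptions, not the paper's actual LDFS implementation: it assumes the face has already been detected, aligned and cropped to a fixed-size grayscale image, uses Kirsch compass masks as a stand-in for the paper's directional operator, uses a uniform grid of regions in place of the eyebrow/eye/nose/mouth regions, and uses a linear SVM as a stand-in classifier.

```python
# Minimal sketch of a local-directional-feature pipeline in the spirit of LDFS.
# Assumptions (not from the paper): the face is already detected, aligned and
# cropped to a grayscale image; Kirsch compass masks approximate the paper's
# directional operator; a uniform grid approximates the facial-part regions;
# an SVM stands in for the paper's classifier.
import numpy as np
import cv2
from sklearn.svm import SVC

# Eight 3x3 Kirsch compass masks (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(m, dtype=np.float32) for m in [
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
]]

def directional_code_map(face_gray):
    """Return, for every pixel, the index (0-7) of the strongest edge direction."""
    responses = np.stack(
        [cv2.filter2D(face_gray.astype(np.float32), -1, k) for k in KIRSCH])
    return np.argmax(np.abs(responses), axis=0).astype(np.uint8)

def directional_descriptor(face_gray, grid=(7, 7)):
    """Concatenate per-region histograms of directional codes into one vector."""
    codes = directional_code_map(face_gray)
    h, w = codes.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=8, range=(0, 8))
            feats.append(hist / max(block.size, 1))  # normalise each region
    return np.concatenate(feats)

# Usage sketch: X is a list of aligned grayscale face crops (e.g. from CK+ or
# JAFFE), y the corresponding emotion labels.
# clf = SVC(kernel="linear").fit([directional_descriptor(f) for f in X], y)
# prediction = clf.predict([directional_descriptor(test_face)])
```

The per-region histograms keep the description local, so regions that carry most expression information (eyebrows, eyes, nose, mouth) contribute distinct parts of the final feature vector before classification.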

Keywords

Volume 13, Issue 1
March 2022
Pages 1067-1079
  • Received: 08 May 2021
  • Revised: 14 June 2021
  • Accepted: 30 June 2021