Robust Face Detection and Identification under Occlusion using MTCNN and RESNET50
DOI: https://doi.org/10.30537/sjet.v7i2.1499

Keywords: Face recognition, Partially occluded faces, MTCNN, ResNet50, Face biometrics, Face detection

Abstract
In today's rapidly evolving world, facial recognition systems are in increasing demand across fields such as digital forensics, where individuals are identified by scanning their faces. A key challenge these systems face is handling covered or occluded faces, which can limit recognition in real-world situations. To address this issue, we developed a system capable of identifying individuals even when their faces are partially veiled. We used the Multi-Task Cascaded Convolutional Neural Network (MTCNN) face detector, which achieved 99.8% detection accuracy, and performed pre-processing and feature extraction on our self-created dataset. Our system leverages the Residual Network (ResNet50), a deep neural network architecture well suited to feature extraction. The extracted features are matched using cosine similarity, achieving 92% recognition accuracy. By leveraging deep learning, this project provides a robust solution for automating the recognition of partially occluded faces.
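The matching step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the gallery contents, the 128-dimensional embedding size, and the 0.5 acceptance threshold are hypothetical, and in the actual pipeline the embeddings would come from the ResNet50 feature extractor.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.5):
    """Match a probe embedding against a gallery of known identities.

    gallery maps identity name -> embedding vector (e.g. a ResNet50 feature).
    Returns (best_name, best_score), with best_name = None if the best
    score falls below the acceptance threshold.
    """
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```

Cosine similarity compares only the direction of the embedding vectors, not their magnitude, which makes it a common choice for matching deep features of varying scale.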
License
Copyright (c) 2024 Sukkur IBA Journal of Emerging Technologies

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
SJET holds the rights to all published papers. Authors are required to transfer copyright to the journal to ensure that the paper is published solely in SJET; however, authors and readers may freely read, download, copy, distribute, print, search, or link to the full texts of its articles and use them for any other lawful purpose.

The SJET is licensed under Creative Commons Attribution-NonCommercial 4.0 International License.