Interview VideoCall Analysis Using Computer Vision

  • Unique Paper ID: 195584
  • Volume: 12
  • Issue: 11
  • PageNo: 804-809
  • Abstract:
  • In the modern era, analyzing human behavior from video is no longer a complex task; advances in computer vision and natural language processing (NLP) make it readily achievable. This study presents a web-based multimodal application that performs human behavior analysis using multiple cues: gaze tracking, emotion recognition, and sentiment-oriented speech analysis. The application uses OpenCV, MediaPipe, DeepFace, and NLP-based sentiment analysis to process facial expressions, eye-gaze direction, and spoken content. Gaze estimation is performed with facial landmark detection, providing insight into a user's attentiveness and focus. Emotion recognition is carried out with a deep learning facial-analysis library that provides pre-trained models for detecting emotions such as happiness, sadness, anger, disgust, fear, and surprise. Simultaneously, speech analysis identifies tone, sentiment, and linguistic patterns, including the use of filler words and pauses, to assess engagement and overall confidence. The extracted behavioral features are processed and visualized through an interactive web interface, enabling real-time feedback for applications in education, human-computer interaction, and psychological assessment. The proposed framework enhances traditional video analysis by integrating multiple behavioral indicators, offering a holistic view of human engagement and emotional state. By combining visual and auditory cues, the system provides a robust evaluation of user behavior applicable to remote learning environments, interview assessment, and human-centric AI applications. The application's real-time processing keeps user interaction seamless while preserving accuracy and reliability.
Experimental results demonstrate the system's efficacy in identifying behavioral patterns, making it a valuable resource for researchers and practitioners in cognitive science, affective computing, and automated interaction analysis. Future enhancements may include more advanced deep learning architectures and expanded datasets to improve the accuracy of gaze estimation and sentiment classification, further advancing multimodal behavior recognition.
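The abstract describes gaze estimation from facial landmarks. A minimal sketch of the underlying geometry is shown below: given the horizontal coordinates of the iris centre and the two eye corners (as a face-mesh model such as MediaPipe would produce), the iris's relative position within the eye yields a coarse gaze direction. This is an illustrative reconstruction, not the paper's actual code; the function names, the 0.15 margin, and the labels are assumptions.

```python
def gaze_ratio(iris_x: float, eye_left_x: float, eye_right_x: float) -> float:
    """Position of the iris centre within the eye opening,
    from 0.0 (at the left corner) to 1.0 (at the right corner)."""
    width = eye_right_x - eye_left_x
    if width <= 0:
        raise ValueError("eye corners must satisfy left_x < right_x")
    return (iris_x - eye_left_x) / width


def gaze_label(ratio: float, margin: float = 0.15) -> str:
    """Map the ratio to a coarse attentiveness label.
    A ratio near 0.5 means the iris is centred, i.e. looking at the camera."""
    if ratio < 0.5 - margin:
        return "looking left"
    if ratio > 0.5 + margin:
        return "looking right"
    return "looking centre"
```

For example, an iris at x = 0.52 between corners at 0.30 and 0.70 gives a ratio of 0.55, which falls inside the centre band. A full system would average this over both eyes and over several frames to smooth out blinks and landmark jitter.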
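The abstract also mentions using filler words as a linguistic signal of confidence. One simple way such a metric could be computed from a speech transcript is the filler-word rate sketched below. This is a hypothetical illustration of the idea, not the authors' implementation; the filler list and tokenization are assumptions.

```python
import re

# Assumed set of single-word fillers; a real system would tune this list.
FILLERS = {"um", "uh", "er", "like", "basically"}


def filler_rate(transcript: str) -> float:
    """Fraction of words in the transcript that are filler words.
    Returns 0.0 for an empty transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    fillers = sum(1 for w in words if w in FILLERS)
    return fillers / len(words)
```

A higher rate would contribute negatively to an engagement or confidence score; pause statistics from word timestamps could be combined with this in the same way.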

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{195584,
        author = {R. V. S. Yaswanth Kumar and Rongali Poornima and S. Kaja Mohiddin and P. Deekshith},
        title = {Interview VideoCall Analysis Using Computer Vision},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {11},
        pages = {804-809},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=195584},
        abstract = {In the modern era, analyzing human behavior from video is no longer a complex task; advances in computer vision and natural language processing (NLP) make it readily achievable. This study presents a web-based multimodal application that performs human behavior analysis using multiple cues: gaze tracking, emotion recognition, and sentiment-oriented speech analysis. The application uses OpenCV, MediaPipe, DeepFace, and NLP-based sentiment analysis to process facial expressions, eye-gaze direction, and spoken content. Gaze estimation is performed with facial landmark detection, providing insight into a user's attentiveness and focus. Emotion recognition is carried out with a deep learning facial-analysis library that provides pre-trained models for detecting emotions such as happiness, sadness, anger, disgust, fear, and surprise. Simultaneously, speech analysis identifies tone, sentiment, and linguistic patterns, including the use of filler words and pauses, to assess engagement and overall confidence. The extracted behavioral features are processed and visualized through an interactive web interface, enabling real-time feedback for applications in education, human-computer interaction, and psychological assessment. The proposed framework enhances traditional video analysis by integrating multiple behavioral indicators, offering a holistic view of human engagement and emotional state. By combining visual and auditory cues, the system provides a robust evaluation of user behavior applicable to remote learning environments, interview assessment, and human-centric AI applications. The application's real-time processing keeps user interaction seamless while preserving accuracy and reliability. Experimental results demonstrate the system's efficacy in identifying behavioral patterns, making it a valuable resource for researchers and practitioners in cognitive science, affective computing, and automated interaction analysis. Future enhancements may include more advanced deep learning architectures and expanded datasets to improve the accuracy of gaze estimation and sentiment classification, further advancing multimodal behavior recognition.},
        keywords = {Emotion recognition, gaze detection, speech analysis, deep learning, Flask application, behavioral analysis, machine learning, natural language processing, video processing, engagement metrics.},
        month = {April},
        }

Cite This Article

Kumar, R. V. S. Y., Poornima, R., Mohiddin, S. K., & Deekshith, P. (2026). Interview VideoCall Analysis Using Computer Vision. International Journal of Innovative Research in Technology (IJIRT), 12(11), 804–809.
