KrishnaVision: A Multimodal Virtual Interface Combining MediaPipe-Hands Optimization and Gemini AI for Context-Aware HCI

  • Unique Paper ID: 180711
  • Volume: 12
  • Issue: 1
  • PageNo: 2909-2917
  • Abstract:
  • This work introduces KrishnaVision, a virtual mouse system that combines MediaPipe's hand tracking with Gemini's multimodal AI to create an adaptive human-computer interface. The system introduces three primary innovations: [1] velocity-damped cursor control that reduces jitter by 63% through derivative-based momentum modeling, [2] Gemini-driven contextual command resolution with environment-sensing gesture-sensitivity control, and [3] dynamic input-modality prioritization via real-time confidence-scoring hybrid state machines. Benchmark results show 97.3% gesture-recognition accuracy at 22 ms latency, surpassing ResNet-50 baselines by 15.2% while using 41% less power. The system's Gemini integration enables new capabilities such as screenshot description (89.3% success) and inter-application memory, filling an important contextual-awareness gap in current solutions. User studies with 45 participants under varied lighting and noise conditions confirm the robustness of the approach, demonstrating 91.5% success on complex hybrid commands.
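The abstract's first innovation, velocity-damped cursor control via derivative-based momentum modeling, can be illustrated with a minimal sketch. The paper's exact damping model and coefficients are not given here, so the `damping` and `momentum` weights below are illustrative assumptions: each raw fingertip sample is blended with a momentum prediction carried over from the previous frame's velocity.

```python
class VelocityDampedCursor:
    """Illustrative velocity-damped smoother for raw fingertip positions."""

    def __init__(self, damping=0.6, momentum=0.3):
        self.damping = damping    # weight given to the new raw measurement
        self.momentum = momentum  # weight given to carried-over velocity
        self.pos = None           # last smoothed position (x, y)
        self.vel = (0.0, 0.0)     # estimated per-frame velocity (derivative)

    def update(self, raw_x, raw_y):
        """Return a smoothed cursor position for one raw fingertip sample."""
        if self.pos is None:
            self.pos = (raw_x, raw_y)
            return self.pos
        px, py = self.pos
        # Predict forward along the previous velocity (momentum term),
        # then pull the prediction toward the raw sample (damping term).
        pred_x = px + self.momentum * self.vel[0]
        pred_y = py + self.momentum * self.vel[1]
        new_x = (1 - self.damping) * pred_x + self.damping * raw_x
        new_y = (1 - self.damping) * pred_y + self.damping * raw_y
        # Update the velocity estimate from the smoothed displacement.
        self.vel = (new_x - px, new_y - py)
        self.pos = (new_x, new_y)
        return self.pos
```

Because the velocity term is derived from smoothed rather than raw displacements, high-frequency sensor jitter is attenuated while deliberate sweeping motions retain their momentum, which is the general effect the abstract attributes to its damping scheme.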

Copyright & License

Copyright © 2025. Authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{180711,
        author = {Rupkatha De and Aritro Saha},
        title = {KrishnaVision: A Multimodal Virtual Interface Combining MediaPipe-Hands Optimization and Gemini AI for Context-Aware HCI},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {1},
        pages = {2909-2917},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=180711},
        abstract = {This work introduces KrishnaVision, a virtual mouse system that combines MediaPipe's hand tracking with Gemini's multimodal AI to create an adaptive human-computer interface. The system introduces three primary innovations: [1] velocity-damped cursor control that reduces jitter by 63% through derivative-based momentum modeling, [2] Gemini-driven contextual command resolution with environment-sensing gesture-sensitivity control, and [3] dynamic input-modality prioritization via real-time confidence-scoring hybrid state machines. Benchmark results show 97.3% gesture-recognition accuracy at 22 ms latency, surpassing ResNet-50 baselines by 15.2% while using 41% less power. The system's Gemini integration enables new capabilities such as screenshot description (89.3% success) and inter-application memory, filling an important contextual-awareness gap in current solutions. User studies with 45 participants under varied lighting and noise conditions confirm the robustness of the approach, demonstrating 91.5% success on complex hybrid commands.},
        keywords = {Adaptive HCI, MediaPipe Optimization, Multimodal Fusion, Gemini AI, Gesture-Voice Integration},
        month = {June},
        }
