AI Virtual Control

The Project

This project is a hand gesture recognition and control system. It uses a webcam to capture video of the user's hands, then uses machine learning to identify the gestures being made. Once a gesture is recognized, the program can perform different actions on the computer, such as moving the mouse cursor, clicking, or scrolling. The program uses two main libraries: OpenCV (cv2) for computer vision tasks and MediaPipe (mp) for machine-learning-based hand landmark detection.

Here's a breakdown of the code (simplified sketches of each part follow this overview):

  • Import libraries: The code starts by importing the necessary libraries, such as OpenCV, MediaPipe, and PyAutoGUI.
  • Define gestures: A class called Gest represents the different hand gestures that can be recognized. Each gesture is assigned a numerical value for easy identification.
  • Define hand labels: A class named HLabel represents the handedness of the detected hand (left or right).
  • Hand recognition: The HandRecog class recognizes gestures from the hand landmarks detected by MediaPipe. It takes the hand label as input during initialization, and it has methods to update the hand result from the latest webcam frame, calculate finger states from landmark locations, and identify the current gesture from those finger states.
  • Control actions: The Controller class contains functions that perform actions on the computer based on the detected gesture. It can move the mouse cursor, perform clicks, control scrolling, and adjust system volume and brightness using external libraries.
  • Main class (GestureController): This is the entry point of the program. It initializes the webcam, captures video frames, processes them with MediaPipe to detect hands and gestures, and calls the appropriate Controller function for the recognized gesture.

The supported controls are:

  • Cursor control: Move the cursor on the screen by tracking the hand (usually the wrist).
  • Clicking: Single click, double click, and right-click via specific gestures.
  • Dragging: Hold a click and drag the cursor across the screen.
  • Scrolling: Scroll vertically or horizontally by pinching two fingers together.
  • Volume control: Adjust system volume by pinching fingers together and moving them up or down.
  • Brightness control: Adjust screen brightness in the same way, with a pinch followed by an up or down movement.

Overall, this project demonstrates how to use computer vision and machine learning to create a system that can interact with the computer using hand gestures.
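The class names Gest, HLabel, and HandRecog come from the breakdown above, but the gesture values, landmark indices, and finger-state logic below are illustrative assumptions, not the project's exact code. One common approach is to give each open finger one bit, so every finger combination gets a unique number:

```python
from enum import IntEnum


class Gest(IntEnum):
    # Illustrative values: each open finger sets one bit
    # (index, middle, ring, pinky), so combinations map to unique numbers.
    FIST = 0      # 0b0000 - no fingers open
    PINKY = 1     # 0b0001
    RING = 2      # 0b0010
    MID = 4       # 0b0100
    INDEX = 8     # 0b1000
    V_GEST = 12   # 0b1100 - index + middle open (the "victory" sign)
    PALM = 15     # 0b1111 - all four fingers open


class HLabel(IntEnum):
    LEFT = 0
    RIGHT = 1


class HandRecog:
    """Turns MediaPipe hand landmarks into a Gest value (sketch)."""

    def __init__(self, hand_label):
        self.hand_label = hand_label  # HLabel.LEFT or HLabel.RIGHT
        self.hand_result = None       # latest landmarks from MediaPipe
        self.finger = 0               # bit field of open fingers

    def update_hand_result(self, hand_result):
        # Called once per frame with the landmarks MediaPipe detected
        self.hand_result = hand_result

    def set_finger_state(self):
        # MediaPipe landmark indices: fingertips and the joints below them
        tips = [8, 12, 16, 20]  # index, middle, ring, pinky tips
        pips = [6, 10, 14, 18]  # corresponding middle joints
        self.finger = 0
        for tip, pip in zip(tips, pips):
            self.finger <<= 1
            # Image y grows downwards, so an open finger has its tip
            # *above* (smaller y than) the joint below it. The thumb is
            # ignored here for simplicity.
            if self.hand_result.landmark[tip].y < self.hand_result.landmark[pip].y:
                self.finger |= 1

    def get_gesture(self):
        if self.hand_result is None:
            return Gest.PALM
        self.set_finger_state()
        try:
            return Gest(self.finger)
        except ValueError:
            # Finger combinations without a named gesture fall back to PALM
            return Gest.PALM
```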
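The basic mouse actions map almost directly onto PyAutoGUI calls. The method names in this Controller sketch are assumptions rather than the project's exact API:

```python
import pyautogui


class Controller:
    """Performs mouse actions for recognised gestures (sketch)."""

    grab_active = False  # True while a drag is in progress

    @staticmethod
    def move_cursor(x_norm, y_norm):
        # MediaPipe landmark coordinates are normalised to [0, 1],
        # so scale them to screen pixels before moving the cursor.
        screen_w, screen_h = pyautogui.size()
        pyautogui.moveTo(int(x_norm * screen_w), int(y_norm * screen_h),
                         duration=0.1)

    @staticmethod
    def left_click():
        pyautogui.click()

    @staticmethod
    def right_click():
        pyautogui.click(button='right')

    @staticmethod
    def double_click():
        pyautogui.doubleClick()

    @classmethod
    def start_drag(cls):
        # Hold the button down; the cursor keeps following the hand
        if not cls.grab_active:
            pyautogui.mouseDown()
            cls.grab_active = True

    @classmethod
    def end_drag(cls):
        if cls.grab_active:
            pyautogui.mouseUp()
            cls.grab_active = False

    @staticmethod
    def scroll(amount):
        # Positive values scroll up, negative values scroll down
        pyautogui.scroll(amount)
```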
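The write-up only says that volume and brightness use "external libraries" without naming them, so the choices below, pycaw (Windows audio) and screen_brightness_control, are assumptions; they are common picks for this kind of project:

```python
from ctypes import POINTER, cast

import screen_brightness_control as sbc  # assumed brightness library
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume  # assumed audio library


def change_volume(delta):
    # delta is a fraction of full scale, e.g. +0.02 per frame of upward pinch
    devices = AudioUtilities.GetSpeakers()
    interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
    volume = cast(interface, POINTER(IAudioEndpointVolume))
    level = volume.GetMasterVolumeLevelScalar()
    volume.SetMasterVolumeLevelScalar(min(1.0, max(0.0, level + delta)), None)


def change_brightness(delta_percent):
    # screen_brightness_control works in whole percent values
    current = sbc.get_brightness(display=0)[0]
    sbc.set_brightness(min(100, max(0, current + delta_percent)), display=0)
```

Mapping a pinch gesture onto these is then just a matter of measuring how far the pinched fingers have moved vertically since the pinch began and calling one of the functions with that delta.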
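Finally, the GestureController loop follows the standard OpenCV plus MediaPipe pattern: grab a frame, mirror it, run hand detection, and pass the landmarks on to the recognition and control code. This sketch only draws the detected landmarks; the gesture dispatch from the classes above would plug in at the marked line:

```python
import cv2
import mediapipe as mp


class GestureController:
    """Entry point: webcam capture and hand detection loop (sketch)."""

    def start(self):
        cap = cv2.VideoCapture(0)
        mp_hands = mp.solutions.hands
        draw = mp.solutions.drawing_utils

        with mp_hands.Hands(max_num_hands=2,
                            min_detection_confidence=0.5,
                            min_tracking_confidence=0.5) as hands:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                frame = cv2.flip(frame, 1)  # mirror so movement feels natural
                # MediaPipe expects RGB; OpenCV delivers BGR
                results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

                if results.multi_hand_landmarks:
                    for hand_landmarks in results.multi_hand_landmarks:
                        draw.draw_landmarks(frame, hand_landmarks,
                                            mp_hands.HAND_CONNECTIONS)
                        # -> here: HandRecog.update_hand_result(...),
                        #    get_gesture(), then the matching Controller action

                cv2.imshow('AI Virtual Control', frame)
                if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
                    break

        cap.release()
        cv2.destroyAllWindows()


if __name__ == '__main__':
    GestureController().start()
```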

Team Comments

I chose to make this project because...

It will enhance the user experience, give it a futuristic feel, and help people who have difficulty operating a mouse. It would also let people operate their computers from a distance, such as presenters. If it's developed further, it could give users a very natural feel in VR experiences.

What I found difficult and how I worked it out

Balancing gesture recognition accuracy with smooth cursor control was tricky! I experimented with different filtering techniques to dampen jittery movements and got better results by adjusting the smoothing weight based on how far the hand moved between frames (there's a sketch of the idea below).
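One simple form of that filtering is a blend between the previous and new cursor positions where the weight grows with the size of the movement: tiny frame-to-frame jitter gets damped heavily, while deliberate large motions pass through almost unchanged. This isn't the project's exact code, and the pixel thresholds are made-up tuning values, but it shows the idea:

```python
import math


def damped_position(prev, new, low=5.0, high=50.0):
    """Blend the previous and new cursor positions (illustrative).

    Small movements (likely jitter) are damped heavily; large, deliberate
    movements get a weight near 1 so the cursor keeps up with the hand.
    `low` and `high` are pixel distances found by experiment.
    """
    dist = math.hypot(new[0] - prev[0], new[1] - prev[1])
    # Weight ramps linearly from 0.1 (jitter) up to 1.0 (big move)
    t = min(1.0, max(0.0, (dist - low) / (high - low)))
    w = 0.1 + 0.9 * t
    return (prev[0] + w * (new[0] - prev[0]),
            prev[1] + w * (new[1] - prev[1]))
```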

Next time, I would...

I'd definitely explore multi-hand gestures for more complex interactions! Imagine using two hands to manipulate 3D objects or perform more nuanced controls. Also, voice integration alongside gestures would be powerful - like saying "copy" while making a pinching motion.

About the team

  • India

Team members

  • Divy