AI-Based Gesture Polling System - Smart Voting Using ML in PictoBlox
by rohinivijaya in Workshop > Science
The AI-Based Gesture Polling System is an interactive voting project that uses Artificial Intelligence and the Machine Learning Environment in PictoBlox to recognize hand gestures and record votes automatically.
Using the ML Environment, the model is trained with different hand gesture samples under three classes — Agree, Neutral, and Disagree. Once the training is completed, the system detects gestures in real time through the webcam and automatically records the corresponding votes.
This project demonstrates how Machine Learning can be used to create smart, contactless, and interactive polling systems for classrooms, workshops, seminars, and events.
Supplies
- PictoBlox Software: Download PictoBlox | Windows, MacOS, Linux, Chromebook, Android & iOS
- Laptop/PC
- Inbuilt or External Camera Setup
- Speaker or Headphones
Setting Up the ML Model
The project uses the Machine Learning Environment in PictoBlox to train a hand gesture recognition model for polling. The model is trained to identify three different gesture classes: Agree, Neutral, and Disagree. The trained ML model is later connected to the block coding section for real-time gesture detection and vote counting.
- Open PictoBlox and go to the Machine Learning Environment.
- Click on “Create New Project”.
- Choose “Hand Pose Classifier”.
- Add three classes and rename them as:
- Agree
- Neutral
- Disagree
- Upload images or use the live webcam to capture gesture samples for each class using the following hand postures:
- Thumbs Up → Agree
- Open Palm → Neutral
- Thumbs Down → Disagree
- Capture multiple images from different angles and positions to improve prediction accuracy.
- After collecting the images, click on “Train the Model”.
- Once the training is completed, test the model using live hand gestures and check the prediction accuracy.
- Finally, export the trained model into the Block Coding Environment for the programming and polling implementation part.
Block Code
- Add the Text-to-Speech extension from the extensions panel. It provides voice feedback, announcing whether the detected gesture is Agree, Neutral, or Disagree. Because the project is opened directly from the ML Environment, the Machine Learning extension is already added automatically.
- The program begins when the green flag is clicked. This starts the AI polling system and activates all the required settings for gesture recognition and vote counting.
- All vote-counting variables such as Agree, Neutral, and Disagree are reset to zero at the start of the program. The Winner variable is hidden initially to ensure polling starts with fresh results.
- A forever loop continuously analyzes images from the webcam using the trained Machine Learning model. The system keeps detecting hand gestures in real time without stopping.
- The detected hand keypoints are displayed on the screen. These keypoints help visualize how the AI model tracks and recognizes hand gestures.
- When the ML model identifies the Agree gesture, the Agree vote count increases automatically. The system displays the updated vote count and provides audio confirmation that the vote has been recorded; the same logic applies to Neutral and Disagree.
- After recording a vote, the system waits for a few seconds before detecting another gesture. This prevents repeated voting from the same gesture.
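The vote-counting loop above can be sketched in plain Python. This is an illustrative model of the block logic only, not the PictoBlox API: the gesture names match the trained classes, but the `record_votes` helper and its timestamped-prediction input are assumptions made for the sketch. The cooldown check mirrors the "wait a few seconds" step that prevents one held gesture from voting repeatedly.

```python
# Illustrative sketch of the vote-counting blocks; record_votes and its
# inputs are assumptions, not PictoBlox functions.
GESTURES = ("Agree", "Neutral", "Disagree")

def record_votes(predictions, cooldown=2.0):
    """Count one vote per recognized gesture, ignoring any prediction
    that arrives within `cooldown` seconds of the last recorded vote.

    `predictions` is a list of (timestamp_seconds, gesture_name) pairs,
    standing in for the forever loop's per-frame model output.
    """
    votes = {g: 0 for g in GESTURES}
    last_vote = float("-inf")
    for timestamp, gesture in predictions:
        if gesture in votes and timestamp - last_vote >= cooldown:
            votes[gesture] += 1
            last_vote = timestamp  # restart the cooldown window
    return votes
```

For example, two Agree predictions half a second apart count as a single vote, while a Disagree gesture made after the cooldown is recorded normally.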
Final Result Declaration Code Explanation
- The result declaration process begins when the Space Key is pressed. This triggers the program to stop regular polling and start comparing the total votes collected for Agree, Neutral, and Disagree.
- At the beginning of the code, the Winner variable is reset and hidden. This ensures that the previous result is cleared before calculating the new polling result.
- A forever loop continuously checks and compares the vote counts of all three categories:
- Agree
- Neutral
- Disagree
- The system uses conditional logic blocks to determine which category has received the highest number of votes.
- The system checks whether the Agree votes are greater than Neutral and Disagree votes. If true, the Winner variable is set to “Agree” and the result is displayed on the screen with the total votes.
- If Agree is not the winner, the system checks whether Neutral votes are greater than Agree and Disagree votes. If true, the Winner variable is set to “Neutral” and the result is displayed.
- If Neutral is also not the winner, the system checks whether Disagree votes are greater than Agree and Neutral votes. If true, the Winner variable is set to “Disagree” and the final result is displayed.
- If two or more categories share the highest vote count, the system declares the result a tie and displays the message “Poll is Tie” on the screen.
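The chain of conditional blocks above reduces to a single comparison: find the highest count and check whether it is unique. A minimal Python sketch of that logic (the `declare_winner` helper is an assumption for illustration, not part of PictoBlox):

```python
def declare_winner(votes):
    """Return the category with strictly the most votes,
    or "Poll is Tie" when the top count is shared.

    `votes` maps category names ("Agree", "Neutral", "Disagree")
    to their final counts.
    """
    top = max(votes.values())
    leaders = [name for name, count in votes.items() if count == top]
    # A unique leader wins; any shared maximum is a tie.
    return leaders[0] if len(leaders) == 1 else "Poll is Tie"
```

Checking for a unique maximum also covers the three-way tie case, which the three pairwise greater-than blocks handle implicitly by all evaluating to false.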
Results
The system successfully detected trained gestures and recorded votes accurately in real time. It displayed total votes and announced the winning category automatically. The system provided quick, interactive, and contactless polling.
Applications
- Classroom quizzes and feedback
- Workshops and seminars
- Interactive training sessions
- Smart meeting decision-making
- STEM and AI demonstrations
Conclusion
The AI-Based Gesture Polling System demonstrates how artificial intelligence can improve traditional voting methods. By integrating gesture recognition with real-time vote counting, the system ensures fast, accurate, and interactive polling. This project highlights the practical application of AI in education and decision-making environments.