Lumos is a hand-gesture-controlled lighting system: users perform hand gestures to control a set of lights. It is designed primarily for bedridden people and mute individuals. The program uses machine learning and computer vision to recognize and classify the hand gestures. The gesture-recognition model was built with MediaPipe and OpenCV and then uploaded to a web server for easy access. Lumos is especially helpful for bedridden people who cannot use traditional controls such as wall-mounted light switches, and also for office workers, small businesses, and gamers who value convenience and ease of use.
The application allows users to create their own profiles and customize the actions associated with specific hand gestures. This enables easy and convenient control of lighting in any environment, making it an ideal solution for everyday use.
The project's solution includes a mobile app in which the user can create an account and log in with an email and password. The team used Flutter to develop the front-end user interface, with Android Studio as the code editor and Dart as the programming language. The main feature the team added is customizable hand gestures, illustrated with creative hand-gesture images to appeal to the user. Other features such as changing the password, a settings menu, changing the profile picture, and changing the name and email were also added to the solution.
Firebase was used for the app's backend, connecting the log-in and sign-up pages. The team chose Firebase for features such as the Realtime Database, secure authentication, Firebase Hosting, Performance Monitoring, and more.
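As a rough sketch of what the server side of this can look like with the Firebase Admin SDK (the key-file path, database URL, and gesture mapping below are hypothetical; in the app itself, sign-in goes through Flutter's firebase_auth plugin):

```python
# Minimal server-side sketch using the Firebase Admin SDK for Python.
import firebase_admin
from firebase_admin import auth, credentials, db

cred = credentials.Certificate("serviceAccountKey.json")  # hypothetical key file
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://lumos-example.firebaseio.com"  # hypothetical URL
})

# Create a user record, as the sign-up page would.
user = auth.create_user(email="user@example.com", password="s3cretPass!")

# Store that user's customized gesture-to-action mapping in the Realtime Database.
db.reference(f"profiles/{user.uid}/gestures").set({
    "open_palm": "lights_on",      # illustrative gesture names and actions
    "closed_fist": "lights_off",
})
```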
Behind the scenes, Lumos integrates the computer vision and machine learning libraries OpenCV and MediaPipe for accurate gesture recognition and model training.
The backend is built with Python and machine learning, computer vision is used to track the hand gestures, and an Arduino board connects the system to the lights. The computer vision part of the system is responsible for detecting and tracking the user's hand gestures in real time: it captures video frames from a camera and processes them to extract information about the hand gestures, using techniques such as hand detection, hand tracking, and gesture recognition.
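Below is a minimal sketch of such a real-time loop, assuming the Python mediapipe, opencv-python, and pyserial packages; the serial port name, the toy fingertip rule, and the ON/OFF protocol are illustrative assumptions, not the project's actual gesture logic.

```python
import cv2
import mediapipe as mp
import serial  # pyserial

# Hypothetical serial port; adjust to your Arduino's port (e.g. COM3 on Windows).
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.7,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
                # Toy rule: if the index fingertip (landmark 8) is above the
                # wrist (landmark 0) in the image, tell the Arduino to switch on.
                if hand.landmark[8].y < hand.landmark[0].y:
                    arduino.write(b"ON\n")
                else:
                    arduino.write(b"OFF\n")
        cv2.imshow("Lumos", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```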
The machine learning part of the system involves training a model to recognize the user's hand gestures. This means collecting a dataset of hand gestures and their corresponding labels, and then training a machine learning model on this dataset. Once the model is trained, it is integrated into the computer vision part of the system to recognize the user's hand gestures in real time.
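One possible sketch of this training step, assuming each sample is the 21 MediaPipe hand landmarks flattened to 63 (x, y, z) values in a hypothetical gestures.csv file, with scikit-learn standing in for whichever classifier the team actually trained:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib

data = pd.read_csv("gestures.csv")           # hypothetical dataset file
X = data.drop(columns=["label"]).to_numpy()  # 63 landmark coordinates per row
y = data["label"].to_numpy()                 # gesture names, e.g. "lights_on"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Persist the model so the real-time loop can load it and classify each frame.
joblib.dump(clf, "gesture_model.joblib")
```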
The team used Git for version control and ClickUp for project management. For production CI/CD, the team used GitHub Actions.
Dart, Flutter, Firebase, Arduino, MediaPipe, OpenCV, GitHub
Start Date: October 1, 2022
End Date: March 30, 2023
For more reading, refer to the Documentation and the GitHub Repository.