Smart gesture control for mobile phone using machine learning

It is safe to say that we are a mobile generation. It is hard to find anyone nowadays who doesn't have a smartphone; a few years ago, the number of people accessing the internet on their phones eclipsed the number doing so on their desktops. That is the pace at which we have embraced smartphones. As technology develops, devices are getting smarter and seem to know exactly what we want. One trend that quickly caught on is that of smart gestures, which tell the device what to do. In this Image Processing project, we will take a look at an application that allows a device to read and understand smart gestures.

Project Description


Mobile phones and tablets have taken over the world and help us interact with each other seamlessly. They also help people who might not be able to use their hands and legs the way others can. Several hands-free features are being added to mobile phones to further increase their accessibility. Smart gestures have been added to such devices as a means to save time and handle tasks more efficiently. For example, several phones have an option wherein, by waving a hand across the screen, we can scroll through the gallery or skip to the next page in a document. Have you ever wondered how that works? This project aims to unravel the mystery behind smart gestures. In this Image Processing project, we will build an application that reads, senses and analyses hand gestures and then actuates the corresponding action on the device.

Concepts Used

  • Fundamentals of Image Processing
  • Computer Vision Techniques
  • Token extraction
  • Data Segmentation
  • Basics of Android programming
  • Object Detection
  • Motion sensing algorithms
Project Implementation

  1. Out of all the gestures we use, pointing is one of the most important when it comes to image processing. Not only does it help in intuitively selecting objects, but it also helps in changing direction and setting a target.
  2. Hence, by applying a combination of such actions, we can mimic verbal statements through gestures.
  3. The first step is accessing the video captured by the phone’s camera. It is by studying this video feed that we can learn the gestures.
  4. The captured feed is pre-processed to remove unwanted noise and make it suitable for data extraction (see the first sketch after this list).
  5. For preparing the test dataset, gestures are categorized into four groups: conversational, controlling, manipulative and communicative.
  6. Conversational gestures help us express ourselves, while controlling gestures are mainly used to navigate through the virtual environment.
  7. Manipulative gestures are a way for us to interact with virtual objects, and sign language is the basis for communicative gestures.
  8. Once we have a proper dataset, we can start extracting images from the video feed.
  9. The extracted images are put through Principal Component Analysis (PCA) and matched against gestures from the prescribed dataset (see the second sketch after this list).
  10. This allows the program to derive a physical meaning from the gesture by correlating it with a given command within the system.
  11. The system then looks up the corresponding logical command it must execute when it encounters that gesture.
  12. Hence, gesture identification works much like face recognition: at its core, it is a classification problem.
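To make steps 3 and 4 concrete, here is a minimal sketch of the capture-and-pre-process loop, assuming Python with OpenCV on a development machine. On an actual phone the frames would come from the Android camera API instead, but the processing steps are the same.

```python
import cv2

# Open the default camera (index 0). On Android, frames would instead be
# supplied by the camera API and handed to OpenCV's Android SDK.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Pre-processing: grayscale conversion, Gaussian blur to suppress
    # sensor noise, then Otsu thresholding to isolate the hand region.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    cv2.imshow("hand mask", mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()
```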
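Steps 9 to 11 describe an eigenface-style classifier. The second sketch below, again in Python with NumPy, shows the idea: learn a PCA basis from flattened training images, project a new image onto it, and pick the nearest labelled gesture, which is then mapped to a device command. The function names and the COMMANDS table are illustrative assumptions, not part of the original project.

```python
import numpy as np

def fit_pca(train_images, n_components=20):
    """Learn a PCA basis ('eigen-gestures') from flattened training
    images of shape (n_samples, n_pixels)."""
    mean = train_images.mean(axis=0)
    # SVD of the centred data; the rows of vt are the principal components.
    _, _, vt = np.linalg.svd(train_images - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(images, mean, components):
    """Project one or more flattened images into PCA space."""
    return (images - mean) @ components.T

def classify(test_image, train_proj, train_labels, mean, components):
    """Nearest-neighbour match in PCA space, as in face recognition."""
    q = project(test_image, mean, components)
    dists = np.linalg.norm(train_proj - q, axis=1)
    return train_labels[int(np.argmin(dists))]

# Hypothetical mapping from a recognised gesture label to a device action.
COMMANDS = {
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
    "point": "select_object",
}

# Example usage with a labelled training set (X: n_samples x n_pixels,
# y: list of gesture labels) and a new pre-processed frame vector:
# mean, comps = fit_pca(X)
# train_proj = project(X, mean, comps)
# label = classify(frame_vector, train_proj, y, mean, comps)
# action = COMMANDS.get(label)
```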

