An article by Sandhya Arora and Ananya Roy (Cummins College of Engineering for Women, Pune, India), published in the International Journal of Business Intelligence and Data Mining, Volume 13, Number 1/2/3 (2018).
Abstract
According to the World Health Organization, over 5% of the world's population has hearing and speech disabilities. The primary language of communication for people who are deaf and mute is sign language. The proposed system aims to recognise American Sign Language and convert it to text.
The input to the system is an image of a hand depicting the desired letter of the alphabet. The histogram of the input image is computed and checked for similarity against the histograms of pre-saved images using the Bhattacharyya distance metric.
Implementation of the system will be a small step towards overcoming the social barrier of communication between deaf-mute people and people who do not understand sign language.
OpenCV is used as the tool for implementing the proposed system.
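To make the histogram-comparison idea concrete, here is a minimal sketch using OpenCV's Python bindings. It is not the authors' implementation; the file names, the use of grayscale histograms, and the bin count are illustrative assumptions. It only shows how an input hand image could be matched against pre-saved reference images with the Bhattacharyya distance (where a lower distance means a closer match).

import cv2

def gray_histogram(path, bins=64):
    """Load an image, convert to grayscale, and return its normalised histogram."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(path)
    hist = cv2.calcHist([image], [0], None, [bins], [0, 256])
    cv2.normalize(hist, hist, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
    return hist

def closest_letter(input_path, reference_paths):
    """Compare the input hand image against pre-saved reference images using
    the Bhattacharyya distance and return the best-matching letter."""
    input_hist = gray_histogram(input_path)
    best_letter, best_distance = None, float("inf")
    for letter, ref_path in reference_paths.items():
        ref_hist = gray_histogram(ref_path)
        distance = cv2.compareHist(input_hist, ref_hist, cv2.HISTCMP_BHATTACHARYYA)
        if distance < best_distance:
            best_letter, best_distance = letter, distance
    return best_letter, best_distance

if __name__ == "__main__":
    # Hypothetical reference set: one stored image per ASL letter.
    references = {"A": "refs/A.png", "B": "refs/B.png", "C": "refs/C.png"}
    letter, dist = closest_letter("input_hand.png", references)
    print(f"Recognised letter: {letter} (Bhattacharyya distance = {dist:.3f})")

In practice, the paper's system would compare the input against a full set of pre-saved alphabet images; this sketch simply illustrates the matching step that OpenCV's compareHist makes straightforward.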