Abstract
Because sign language is not widely used within society, deaf and other verbally challenged people face difficulties in daily communication with those around them.
Our aim is to develop a mobile application that translates sign language into verbal language and vice versa, since mobile phones are handy devices these days and camera use is very common. We will build the application using computer vision, gesture recognition, image processing, sign language recognition, and voice recognition. We aim to implement the application successfully and expect the experimental results to be almost 97 percent accurate.
Introduction
Sign language is a system of communication based on hand gestures, visual motions, and signs, used by the deaf and verbally challenged community. However, very few people are able to understand sign language. This creates a real communication barrier between the hearing community and the deaf community, and we want to eliminate this problem entirely.
There are two possible approaches to a translating application. One is contact-based, in which the user is in physical contact with a sensing device. The other is vision-based, which uses images or video frames captured by a camera as the system's input. The vision-based approach is often preferred over the contact-based approach because it does not involve any extra hardware that would be inconvenient for the user; the only hardware it needs is a camera, which every mobile phone already has. This method is not without drawbacks, however. The main challenge of the vision-based approach is that the accuracy of the result depends on many conditions, e.g. lighting conditions and hand size.
Sign language recognition generally involves three phases: segmentation, feature extraction, and classification. The main objective of the segmentation phase is to remove the background and noise, leaving the ROI (region of interest), which contains the information useful to us. In the feature-extraction phase, the features of the ROI are extracted; the important features are curvatures, edges, shapes, corners, moments, textures, colours, and others. The extracted features then undergo the classification phase, where they are grouped accordingly and stored in the database so that new user gestures can be matched against them. We will use the SURF algorithm instead of SIFT to extract key points, together with a BoF (bag-of-features) model.
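As a rough illustration of this pipeline, the sketch below extracts SURF key points from a set of training images and quantizes them into bag-of-features histograms. It assumes OpenCV built with the non-free xfeatures2d module (e.g. an opencv-contrib-python build) plus scikit-learn; the function names and the vocabulary size are illustrative choices, not fixed design decisions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def stack_surf_descriptors(image_paths, hessian_threshold=400):
    """Detect SURF key points in each training image and stack all descriptors."""
    # SURF lives in the non-free xfeatures2d module of opencv-contrib.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    descriptor_list = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, descriptors = surf.detectAndCompute(gray, None)
        if descriptors is not None:
            descriptor_list.append(descriptors)
    return surf, np.vstack(descriptor_list)

def build_vocabulary(descriptors, k=100):
    """Cluster descriptors into k visual words: the BoF vocabulary."""
    return KMeans(n_clusters=k, n_init=10).fit(descriptors)

def bof_histogram(gray_image, surf, vocabulary, k=100):
    """Represent one image as a normalized histogram over the visual words."""
    _, descriptors = surf.detectAndCompute(gray_image, None)
    if descriptors is None:
        return np.zeros(k)
    words = vocabulary.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / hist.sum()  # normalize so key-point count does not dominate
```

The resulting histograms are what the classification phase actually compares and stores: a new gesture is mapped onto the same vocabulary and matched against the database entries.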
Motivation & Scope
Sign language is a natural way to convey one's expressions to others. From birth, human beings try to express their feelings and emotions by making gestures and facial expressions and by moving their hands, but for people who cannot speak or hear, signing is the only way to communicate. This creates a communication gap between mute people and the rest of society. They learn sign language to fill this gap, but the problem is that not everybody understands sign language. That is why we are making a mobile application that will be able to translate signs into ordinary spoken language.
Different countries have different languages; hence, sign language varies from region to region. Our system will analyse the input data, which will be in the form of images or frames stored in a database, and generate the corresponding results.
This project focuses on helping hearing- and speech-impaired people by developing this application. The system will close the gap between mute people and the rest of society, which in turn helps children with special needs most of all, so that they will be able to express their feelings freely without any hesitation.
Related Work
In previous work, many researchers have worked on ways to translate human gestures into natural languages such as English, French, and Urdu.
Pansare et al. [1] performed ASL (American Sign Language) recognition by applying median and Gaussian filters to remove noise before any further operations. An edge-detection method was used to detect the edges of the ROI, and the Euclidean distance was computed to classify 26 ASL signs with 100 samples each, achieving an average accuracy of 90%.
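The preprocessing stage of [1] can be approximated in a few lines of OpenCV. In this sketch the kernel sizes and the choice of Canny as the edge detector are our assumptions for illustration, not parameters taken from the paper.

```python
import cv2

def preprocess_frame(frame):
    """Denoise a gesture frame, then detect edges of the region of interest."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 5)                # suppress salt-and-pepper noise
    smoothed = cv2.GaussianBlur(denoised, (5, 5), 0)  # suppress Gaussian noise
    edges = cv2.Canny(smoothed, 50, 150)              # binary edge map of the ROI
    return edges
```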
Rekha et al. [2] proposed recognition of ISL (Indian Sign Language) by first segmenting the hand area using skin colour in the YCbCr colour space. Features are then extracted using the Principal Curvature Based Region (PCBR) detector, wavelet packet decomposition, and a complexity-defects method. The accuracy was 91.3%.
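A minimal sketch of that first step, skin-colour segmentation in YCbCr space, is shown below. The Cr/Cb bounds are widely used textbook skin ranges and are assumed here; they are not taken from [2].

```python
import cv2
import numpy as np

def skin_mask_ycbcr(frame):
    """Return a binary mask of skin-coloured pixels in YCbCr space."""
    # OpenCV orders the channels Y, Cr, Cb after this conversion.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin bounds (assumed): 133 <= Cr <= 173, 77 <= Cb <= 127.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening/closing removes speckles and fills small holes.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```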
Dardas and Georganas [3] proposed a framework for recognizing hand gestures in real time using a Bag-of-Features (BoF) model and a Support Vector Machine (SVM). Skin-colour segmentation in the HSV colour space separates the face and hand regions from the background, and the Viola-Jones algorithm is then used to remove the face region. The Scale-Invariant Feature Transform (SIFT) algorithm extracts key points from the detected hand region; these are quantized using k-means clustering and mapped into a BoF representation. The accuracy was 96.23%.
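The face-removal and key-point steps of this pipeline can be sketched as follows, using the Haar cascade bundled with OpenCV for Viola-Jones detection and cv2.SIFT_create (patent-free in OpenCV 4.4 and later); the details are simplified relative to the original paper.

```python
import cv2

def hand_keypoints_without_face(frame):
    """Black out detected faces, then extract SIFT key points from the rest."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Viola-Jones face detector using OpenCV's bundled frontal-face cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3,
                                                 minNeighbors=5):
        gray[y:y + h, x:x + w] = 0  # mask the face so only hands yield key points
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(gray, None)  # (keypoints, descriptors)
```

The remaining descriptors would then be quantized with k-means into BoF histograms and fed to an SVM, much as in the sketch given after the pipeline description above.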
Although much work has been done on the desktop platform, little has been done on mobile. Smartphones have become so powerful these days that their computational power approaches that of computers. Building a sign language recognition system on a mobile platform will make it easy to use and portable for the user. We are building our system on these established algorithms to show that gesture recognition can be performed on a phone in real time with high accuracy.
Papers cited
- J.R. Pansare, S.H. Gawande, and M. Ingle, "Real-Time Static Hand Gesture Recognition for American Sign Language (ASL) in Complex Background," Journal of Signal and Information Processing, vol. 3, no. 3, p. 364, 2012.
- J. Rekha, J. Bhattacharya, and S. Majumder, "Shape, Texture and Local Movement Hand Gesture Features for Indian Sign Language Recognition," in Trendz in Information Sciences and Computing (TISC), 3rd International Conference on, pp. 30-35, IEEE, 2011.
- N.H. Dardas and N.D. Georganas, "Real-Time Hand Gesture Detection and Recognition Using Bag-of-Features and Support Vector Machine Techniques," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, pp. 3592-3607, 2011.