Sign Language Interpretation Using Deep Learning

ABSTRACT

Sign language is the language that Deaf people use to communicate with others in the community. Although sign language is widespread among hearing-impaired people, it is not well known by hearing people. In this project, we have developed a real-time sign language recognition system that allows people who do not know sign language to communicate easily with hearing-impaired people. The sign language used in this project is American Sign Language (ASL). The project also provides a complete overview of deep learning-based methodologies for sign language recognition. This benefits deaf and hearing-impaired people by offering them a flexible interpreting alternative when face-to-face interpreting is not available. The main purpose of our project is to develop an intelligent system that can act as a translator between sign language and spoken language dynamically, making communication between people with a speaking deficiency and hearing people both effective and efficient.

INTRODUCTION

Very few people understand sign language, so deaf people are often deprived of normal communication with others in society. They frequently find it difficult to interact through gestures, since only a few gestures are recognized by most people. Because people with a speaking deficiency cannot talk, they must rely on some form of visual communication most of the time. Sign language provides that visual communication and allows individuals with hearing impairments to interact with the rest of the community. Hence, automated systems capable of translating sign languages into words and sentences are becoming a necessity [8]. However, human interpreters are limited in availability, expensive, and not present throughout a deaf person's life. A computerized system is therefore the most relevant and suitable solution for translating signs expressed by deaf people into text and voice. In this work, image processing is used to extract features from input images that are invariant to background data, translation, scale, shape, rotation, angle, coordinates, and movement, and a neural network model is used to recognize a hand gesture in an image [3].

Deep learning, a relatively recent approach to machine learning, involves neural networks with more than one hidden layer. Networks based on deep learning paradigms have more biologically inspired architectures and learning algorithms than conventional feed-forward networks: deep networks are generally trained in a layer-wise fashion and rely on the more distributed and hierarchical learning of features found in the human visual cortex. The data used in this work are obtained from a public database and contain different hand gestures for recognition. To avoid over-biasing the learning, separate image samples of hand gestures are used for training and testing the designed networks.

The data in its raw form is provided as class-wise distributed XLS files of pixel intensities in the range 0-255, and preprocessing converts this data into 28x28 grayscale PNG images. With the help of the Scikit-Learn library, the array is shuffled randomly; shuffling is needed before splitting the array into train and test sets. After the splitting step, the model is created as a sequential network and the fitting process is started, running through all of the training data. After the training step, the model and its weights are loaded into the real-time recognition algorithm. The algorithm consists of two parts that run simultaneously for better accuracy: 1) extracting the convex hull points bounding the hand, and 2) classifying the hand image with a convolutional neural network. When hand signs are similar, the decision is made by combining the results of both steps.
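
The following Python sketch illustrates the preprocessing and training pipeline described above; it is not the project's exact code. It assumes the data follows the Sign Language MNIST layout [12] (one label column plus 784 pixel columns), and the file name, layer configuration, and hyperparameters are illustrative assumptions.

import pandas as pd
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.utils import to_categorical

# Raw data: a label column plus 784 pixel-intensity columns (0-255),
# i.e. flattened 28x28 grayscale images (Sign Language MNIST layout).
data = pd.read_csv("sign_mnist_train.csv")  # hypothetical file name
labels = data["label"].values
images = data.drop("label", axis=1).values.reshape(-1, 28, 28, 1) / 255.0

# Shuffle before splitting so the train/test sets are class-balanced.
images, labels = shuffle(images, labels, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(
    images, to_categorical(labels, num_classes=25), test_size=0.3)

# Sequential CNN, fitted on the training split.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(25, activation="softmax"),  # Sign MNIST uses labels 0-24
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
model.save("asl_cnn.h5")  # weights reloaded later by the real-time recognizer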

LITERATURE SURVEY

Several sign language recognition systems have been researched and implemented by other researchers. A brief survey of the following works was made and aggregated, which led to the proposed system.

In 2015, M. Mohandes et al. proposed a system that uses two Leap Motion Controllers (LMCs) as the backbone of an Arabic Sign Language Recognition (ArSLR) system. In addition to data acquisition, the system includes preprocessing, feature extraction, and classification stages. The system is portable and easy to carry, can be operated via USB, and has wide scope in robotics and business applications.

In 2015, Celal Savur et al. proposed a system to recognize American Sign Language (ASL) using surface electromyography (sEMG). They performed two experiments, one based on an offline system and the second on a real-time system. The raw sEMG signal was preprocessed (filtered), features were extracted, and the signal was classified in order to predict the sign gesture. Time-domain information was used for feature extraction and a Support Vector Machine for classification, and the results were compared with tabulated results.

In 2017, Hemina Bhavsar et al. presented signs in the form of hand gestures identified from both images and videos. Features are found by various feature extraction methods and classified by various machine learning methods. The paper compares various systems on the basis of classification method and accuracy rate, and combines the benefits of LBP (Local Binary Pattern), SP (superpixel), and SURF (Speeded Up Robust Features) strategies. The device is built from various sensors. The proposed SVM-KNN (Support Vector Machine and K-Nearest Neighbor) method can recognize single-hand gestures precisely using a webcam. The objective is to create a framework that eases communication between hearing and deaf-mute individuals with the help of image processing technology.

In 2017, B. P. Pradeep Kumar et al. proposed a novel hand gesture recognition methodology for American Sign Language (ASL) that recognizes communication via gesture signals in a real-time setting.

In 2017, Rabeet Fatmi et al. proposed an efficient and non-invasive solution for translating ASL to speech using two wearable armbands called Myo, as an alternative to glove-based techniques, camera-based systems, and 3D depth sensors. The approach rests on Hidden Markov Models, which have a strong statistical foundation.

In 2015, Oyebade K. Oyedotun et al. showed that more biologically inspired deep neural networks, such as convolutional neural networks and stacked denoising autoencoders (SDAEs), are capable of learning the complex hand gesture classification task with lower error rates.

PROBLEM STATEMENT

Research in the sign language recognition field has mostly been done using glove-based systems. In a glove-based system, sensors such as potentiometers and accelerometers are attached to each finger, and the corresponding alphabet is displayed based on their readings. Over the years, advanced glove devices have been designed, such as the Sayre Glove, Dexterous Hand Master, and Power Glove.

The main problem with glove-based systems is that the glove has to be recalibrated every time a new user puts it on so that the fingertips can be identified by the image processing unit. Since most gloves are manufactured in only a few different sizes, the practical option is a custom-made glove that fits the user's hand exactly, which makes a given glove suitable for only one specific person. Frequent use also wears the glove out, and the connecting wires restrict freedom of movement. Such systems are complex to implement, have higher hardware requirements, and are not cost-effective.

PROPOSED ARCHITECTURE

The system is designed to capture an input sign image, which undergoes various image processing techniques. First, the input image is converted from RGB to grayscale, and noise is removed for better recognition accuracy. The skin is then detected and the hand region is extracted from the obtained image. The position and location of the hand are determined, and the processed image is compared against the trained model. A further advanced version of the system might help hearing users convert their text into sign language, building two-way communication.
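
A minimal OpenCV sketch of these preprocessing steps is shown below. The HSV skin thresholds, blur kernel, and the helper name preprocess_frame are illustrative assumptions rather than the project's calibrated values.

import cv2
import numpy as np

def preprocess_frame(frame_bgr):
    # Grayscale conversion and noise removal (Gaussian blur).
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # Skin detection in HSV space; the threshold range is an assumed example.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array([0, 48, 80]), np.array([20, 255, 255]))

    # The largest skin-colored contour is taken as the hand; its bounding
    # box gives the hand's position, and its convex hull gives the boundary
    # points used by the shape-based branch of the recognizer.
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x API
    if not contours:
        return None, None
    hand = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)
    hull = cv2.convexHull(hand)
    crop = cv2.resize(gray[y:y + h, x:x + w], (28, 28))  # model input size
    return crop, hull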

  • Phase-1: In this phase, we developed a user interface through which a user can capture images from a webcam; the captured images are stored in the image input folder. We also collected hand gesture images to feed the CNN model for training. This collection covers the hand gestures for the 26 alphabets and the digits 0-9, with 500 images per alphabet.
  • Phase-2: In this phase, the collected images are used to train the CNN model. The images are converted to grayscale and split into 70% for training and 30% for testing.
  • Phase-3: In this phase, an input image from the user is given to the CNN model, which compares it against the patterns learned during training. Based on this comparison, the model produces an output in text or audio format, as in the sketch below.
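
The sketch below illustrates Phase-3 inference, assuming the model saved by the training sketch above and the hypothetical preprocess_frame() helper from the preprocessing sketch. The index-to-letter mapping follows the Sign Language MNIST label order; audio output (which could use a text-to-speech library) is omitted, and the predicted letter is rendered as on-screen text instead.

import string
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("asl_cnn.h5")        # weights from the training sketch
letters = list(string.ascii_uppercase)  # index -> letter (Sign MNIST order)

cap = cv2.VideoCapture(0)               # webcam input, as in Phase-1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    crop, _ = preprocess_frame(frame)   # hypothetical helper defined above
    if crop is not None:
        probs = model.predict(crop.reshape(1, 28, 28, 1) / 255.0, verbose=0)
        letter = letters[int(np.argmax(probs))]
        cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    1.2, (0, 255, 0), 2)  # text output overlaid on the frame
    cv2.imshow("ASL interpreter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()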

FUTURE SCOPE

We are developing a project that would enable deaf people to become more involved in society: a camera-based sign language recognition system that converts sign language gestures to text and then to speech. Our objective is to design a solution that is intuitive and simple, so that communication is not difficult for the majority of people.

This sign language interpreter system will serve as a futuristic application of artificial intelligence and computer vision with a user interface. It provides a method to recognize hand gestures based on different parameters. The main priority of the system is to be simple, easy, and user-friendly without requiring any special hardware; all computation will occur on a single PC.

CONCLUSION

Mute people are isolated from the most common forms of communication in today's society, such as warnings or any other form of oral communication in regular daily activities. Sign language is their primary means of communication. Glove-based systems make communication through sign language possible, but they must be recalibrated every time a new user uses the system, and the connecting wires restrict freedom of movement.

The solution to this problem is image processing with deep learning. The project is implemented in such a way that it does not require gloves: the gesture is formed in front of the camera, and the output is given in the form of text or audio. Thus, we conclude that the system can interpret American Sign Language in a real-time environment and can act as a communication device between a signer and a non-signer.

REFERENCES

  1. Pratibha Pandey and Vinay Jain, “An Efficient Algorithm for Sign Language Recognition”, International Journal of Computer Science and Information Technologies (IJCSIT), Volume 6 (6), 2015.
  2. Akshay Jadhav, Gayatri Tatkar, Gauri Hanwate and Rutwik Patwardhan, “Sign Language Recognition”, International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), Volume 7, Issue 3, March 2017.
  3. Oyebade K. Oyedotun and Adnan Khashman, “Deep Learning in Vision-Based Static Hand Gesture Recognition”, The Natural Computing Application Forum 2016.
  4. Dishita Patil, Prapti Raut and Malvina Lopes, “ ”
  5. Ashish S. Nikam and Aarti Ambekar, “Sign Language Recognition using Image Based Hand Gesture Recognition Techniques”, ResearchGate, [online], Available from: https://www.researchgate.net/publication/316732601_Sign_language_recognition_using_image_based_hand_gesture_recognition-techniques (November 2016).
  6. M. Mohandes, S. Aliyu and M. Deriche, “Prototype Arabic Sign Language Recognition using Multi-Sensor Data Fusion of Two Leap Motion Controllers”, 12th International Multi-Conference on Systems, Signals & Devices, 2015.
  7. Celal Savur and Ferat Sahin, “Real-Time American Sign Language Recognition System by using Surface EMG Signal”, 14th International Conference on Machine Learning and Applications (ICMLA), 2015.
  8. Hemina Bhavsar and Dr. Jeegar Trivedi, “Review on Classification Methods used in Image Based Sign Language Recognition System”, International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), Volume: 5, Issue: 5, May 2017.
  9. B. P. Pradeep Kumar and M. B. Manjunatha, “A Hybrid Gesture Recognition Method for American Sign Language”, Indian Journal of Science and Technology, Volume 10(1), January 2017.
  10. Rabeet Fatmi, Sherif Rashad, Ryan Integlia and Gabriel Hutchison, “American Sign Language Recognition using Hidden Markov Models and Wearable Motion Sensors”, Transactions on Machine Learning and Data Mining, Vol. 10, No. 2 (2017) 41-55.
  11. https://en.wikipedia.org/wiki/American_Sign_Language, Accessed on 12/08/2018, at 02:30 p.m.
  12. https://www.kaggle.com/datamunge/sign-language-mnist, Sign Language MNIST, Kaggle, 2017, Accessed on 28/08/2018, at 12:30 p.m.