IoT Based Sign Language Translator Device


ABSTRACT

Sign languages are natural languages that deaf people use to communicate with hearing people in the community. Although sign language is widely used among people with hearing impairments, it is not widely known by the hearing population. In this project, we developed a sign language recognition device that allows those who do not understand sign language to communicate effortlessly with people with hearing impairments. Sign language is converted into text through customized segmentation of the region of interest (ROI) and a convolutional neural network (CNN). Five sign gestures are trained using a custom photographic data set, and the system runs on a Raspberry Pi for portability. Using the proposed ROI technique, the approach shows better results than conventional methods in terms of accuracy and real-time detection from video streamed through a webcam. In addition, this technique provides an efficient model that makes it simple to add more signs to the current Raspberry Pi prototype. Human-computer interaction is advancing with growing attention to sign language interpretation. An Indian Sign Language (ISL) interpretation device is a good way to help Indians with hearing disabilities interact with hearing people with the help of a computer. Compared to other sign languages, ISL interpretation has received much less interest from researchers. This document presents some background, needs, scope, and challenges of ISL. Vision-based hand gesture recognition has become an important mode of everyday communication.

INTRODUCTION

Sign languages are natural languages that develop in deaf communities around the world and vary from place to place. A sign is produced with manual and non-manual components that are partially produced in parallel but are not perfectly synchronized. Manual components include hand configuration, articulation location, hand movement, and hand orientation, while non-manual components include body posture and facial expression. Automatic sign language recognition is a subfield of computer vision and automatic speech recognition that allows methods from both worlds to be applied, but also inherits their respective challenges. Great inter- and intra-signer variability, strong co-articulation effects, context-dependent gesture classification, the absence of an agreed written form or a phoneme-like definition combined with partially parallel information streams, high signing speed that induces motion blur, missing features, and therefore the need to automatically track the hands and face all make video-based sign language recognition a notoriously challenging research field.


Very few people understand sign language. Deaf people generally lack a common channel of communication with hearing individuals in society, and it is extremely difficult for them to interact with hearing people through gestures, since most people recognize very few signs. Because people with speech and hearing impairments cannot speak like hearing people, they must rely on other forms of communication in most cases. Sign language is a language that enables people with hearing impairments to communicate with hearing people in the community, but the availability of human interpreters is limited and expensive, and an interpreter cannot accompany a person with a disability throughout their life. Therefore, the need to develop automatic systems that can translate sign languages into words and sentences is becoming pressing, and an elaborated system that translates the signs expressed by deaf people into text and voice is more relevant and appropriate. Image processing methodology is used to extract features from the input images; this extraction must be invariant to background information, translation, scale, shape, rotation, angle, coordinates, movements, etc. In addition, a neural network model is used to recognize a hand gesture in a photo. Deep learning is a relatively recent approach to machine learning involving neural networks with more than one hidden layer. In general, deep networks are trained in a layered way and learn a large number of distributed, trainable features, loosely inspired by the organization of the human visual cortex. The data used in this work are obtained from an environment that contains distinctly different hand gestures for recognition. To avoid distorting the learning, separate sets of hand gesture image samples are used to train and test the designed networks.

METHODOLOGY

The method proposed in this work is divided into two parts. First, the CNN is trained with the images of the training set; then the trained model is deployed on a Raspberry Pi connected to a webcam, which performs the task of detecting signs and labelling them on the screen connected to the Pi.

Learning Indian sign language communication

Initially, a custom image data set is created as the training set for the CNN. The data is preprocessed and augmented and finally resized to 96 × 96. Using the scikit-learn label encoder, the labels for each class are encoded. The encoded files are saved as .npz files for faster processing of the training set by the network. The convolutional neural network configuration consists of three convolutional layers and two fully connected layers. To normalize the features between layers, batch normalization is applied with each convolutional layer. After training is complete, the model is tested with a new series of test images that the network has not seen, to obtain an unbiased accuracy result.
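A minimal sketch of this preprocessing stage is given below, assuming the images live in class-named subfolders under a hypothetical dataset/ directory; the directory layout, file names, and array names are illustrative assumptions, not details taken from the original work.

```python
# Sketch of the preprocessing described above: resize to 96x96,
# encode class labels with scikit-learn, and cache arrays as .npz.
# Paths and names are assumptions for illustration.
import os
import numpy as np
import cv2
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

DATA_DIR = "dataset"  # hypothetical layout: dataset/<class_name>/*.jpg
IMG_SIZE = 96

images, labels = [], []
for class_name in sorted(os.listdir(DATA_DIR)):
    class_dir = os.path.join(DATA_DIR, class_name)
    if not os.path.isdir(class_dir):
        continue
    for fname in os.listdir(class_dir):
        img = cv2.imread(os.path.join(class_dir, fname))
        if img is None:
            continue
        images.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE)))
        labels.append(class_name)

X = np.array(images, dtype="float32") / 255.0  # normalize pixel values
encoder = LabelEncoder()                       # map class names to integers
y = to_categorical(encoder.fit_transform(labels))

# Save the encoded training set for faster reloading, as described above.
np.savez("training_set.npz", X=X, y=y, classes=encoder.classes_)
```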

Hardware integration

Correctly combining the above hardware with the trained and tested model was necessary to ensure the portability and stability of the device from the user's perspective. The relatively low-cost Raspberry Pi was chosen for this case and proved an excellent fit, maintaining the same timings and accuracy level previously measured on a computer.

DESIGN

In this section, the key parts of the modelling side of the whole system, as well as the CNN architecture used for the proposed system, are explicitly described. Since the main concern of this work was to reduce training time and to increase accuracy over conventional approaches, preventive data preprocessing was important to reduce overfitting and underfitting.

Modelling the sign-to-text conversion

The very first step was to create the image data set for five Bengali signs. Since the data obtained for each sign were neither numerous nor varied enough to feed the CNN, the images were augmented with a three-phase enlargement process and further preprocessed: labels were combined and encoded, and each image was scaled to 96 × 96. Then the training data set was fed into the CNN model, and after training was complete, predictions were made on the test set. The trained model was then deployed with the webcam and the display connected to the Raspberry Pi. The identical model is then used to make the final prediction for signs in the Raspberry Pi video stream. The modelling steps, from preparation of the image data set to the final prediction on the video stream, are performed in sequence.
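A hedged sketch of this deployment step follows, assuming the trained model was saved in Keras format and that a class-name list and a fixed ROI box are supplied; the model path, class names, and box coordinates are all illustrative assumptions.

```python
# Sketch of sign prediction from the Raspberry Pi webcam stream.
# Model path, class names, and ROI coordinates are assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("sign_model.h5")      # hypothetical saved model
CLASS_NAMES = ["sign1", "sign2", "sign3", "sign4", "sign5"]
ROI = (100, 100, 300, 300)               # x1, y1, x2, y2 of selection box

cap = cv2.VideoCapture(0)                # webcam attached to the Pi
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x1, y1, x2, y2 = ROI
    roi = frame[y1:y2, x1:x2]            # only the ROI goes to the CNN
    inp = cv2.resize(roi, (96, 96)).astype("float32") / 255.0
    probs = model.predict(inp[np.newaxis], verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(frame, label, (x1, y1 - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Sign translator", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```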

CNN Architecture

The proposed model is based on a CNN architecture, which has proven very reliable when working with image data sets. The sequential model used here consists of three convolutional layers and two fully connected dense layers. The rectified linear unit (ReLU) activation function is used for all layers except the last dense layer, in which the softmax function is used as the activation function. The different CNN layers extract distinctive features from the images. The cross-entropy loss is used to compile the model and is minimized during training.
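The following Keras sketch assembles a network matching this description: three convolutional layers with batch normalization, two dense layers, ReLU activations, a softmax output, and categorical cross-entropy. The filter counts, kernel sizes, pooling, and optimizer are assumptions, since the text does not specify them.

```python
# Sketch of the described CNN: 3 conv layers (with batch normalization),
# 2 dense layers, ReLU throughout, softmax output, cross-entropy loss.
# Filter counts, kernel sizes, pooling, and optimizer are assumptions.
from tensorflow.keras import Sequential, layers

NUM_CLASSES = 5  # five trained sign gestures

model = Sequential([
    layers.Input(shape=(96, 96, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```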

The frame rate used is sufficient for the computation; a higher frame rate would lengthen computation because more data would be processed. The image acquisition process is affected by various environmental factors, such as exposure to sunlight and background and foreground objects. To make feature extraction easier, a distinct (white) background is simplest to handle later. This part consists of manual segmentation followed by morphological operations. One proposed method is an adaptive skin-colour model for hand segmentation that maps the colour space onto a colour plane. Another method for segmentation and hand tracking is based on the HSV histogram. These methods can segment the hand more easily than against a complex background. For simplicity, hand segmentation is carried out by transforming the acquired image into a black-and-white image in which the background is white and the foreground, that is, the hand, is black. Our way of preprocessing the acquired images is the colour threshold method. Using this method, the colour region can be segmented and the location of the region of interest can also be determined. Then the segmented colour images are analysed to obtain the unique characteristics of each sign.
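Below is a minimal sketch of this colour-threshold step with OpenCV, assuming an HSV skin-colour range followed by morphological cleanup; the exact threshold values are assumptions and would need tuning for the actual lighting conditions.

```python
# Sketch of colour-threshold hand segmentation: keep the skin-coloured
# region, then invert so the hand is black on a white background.
# The HSV skin range below is an assumption and needs tuning per setup.
import cv2
import numpy as np

def segment_hand(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)    # assumed skin range
    upper = np.array([20, 150, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)            # hand region is white
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill holes
    return cv2.bitwise_not(mask)  # invert: hand black, background white

# Example usage on a single captured frame:
# frame = cv2.imread("hand.jpg")
# bw = segment_hand(frame)
```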

CONCLUSION

The present method offers an efficient solution for sign language recognition. In previous approaches, huge training data sets were required and video sign detection was also very slow. By using a custom ROI segmentation method, the model no longer has to go through a computationally heavy workflow to locate the hand area on its own. The device user can move the preloaded selection box on the screen to the area of the deaf person's hand, and only the area within the selection box is sent to the CNN model for prediction. Integrating the model with a Raspberry Pi adds flexibility and portability to the device. As a result, the accuracy level and detection speed are increased. Due to the lack of an available Indian sign language data set, only five signs are used in this translator device. In the future, more signs will be added to the device, and a graphical user interface will be introduced alongside the existing model to improve its operation.
