Sign Language Detection For Deaf And Dumb People Using Flex Sensors

ABSTRACT

Communication is the major problem faced by deaf and dumb people when conveying their thoughts and ideas, both to other deaf and dumb people and to hearing people. People who are not deaf or dumb can absorb new information and knowledge through daily activities, conversations and the language spoken around them, but people who are deaf and dumb do not have that luxury. The main aim of this project is to give a voice to people who are unable to speak. In this project, sign language is converted into speech using flex sensors, a microcontroller and an APR9600 voice module.

INTRODUCTION

Communication is the exchange of ideas, thoughts, feelings and information among people through verbal or nonverbal messages. People who are deaf and dumb often feel uncomfortable around hearing people because conversation draws attention to their hearing problem. They want to be like their friends with good hearing, so the problem drives them to do their daily activities on their own and to avoid taking part in activities with hearing people. Sign languages are used as a medium of communication by people who are dumb. Deaf and dumb people use gesture language to communicate; some hearing people can understand these gestures, but most of the time they cannot and find the sign language difficult to follow. In this project, each sign has a corresponding voice output, produced through a speaker by the APR9600 voice module, and a text message shown on an LCD display. In addition, deaf and dumb people can operate home devices such as a fan and lights through hand movements.

EXISTING SYSTEM

In the existing system, deaf and dumb people use hand gestures to communicate. This works well in the daytime, but at night, when there is little light, some people cannot see the gestures made by the deaf and dumb person, which leads to miscommunication.

PROPOSED SYSTEM

In this system, a circuit is implemented to recognise the hand movements of a user. The circuit has a flex sensor along each finger and the thumb. A flex sensor is a device whose output depends on its degree of bend. The gesture movements produce analog outputs from the sensors, which are fed to the microcontroller. The microcontroller performs analog-to-digital conversion and processes the readings; the gesture is recognised and the corresponding text information is identified. The user needs to know the gesture for each message and must hold the sign for three seconds. More gestures can be introduced without restriction; the design encourages adding new gestures to the supporting library. By bending a finger or the thumb, each sensor produces a voltage variation; this voltage is processed by the microcontroller to produce the required voice output through the speaker and the text message on the LCD display. A pair of gloves fitted with the sensors and microcontroller allows mute people to communicate with the public in the required language.
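
The recognition step can be sketched as a simple pattern match that must be held for three seconds. The Arduino sketch below is a minimal illustration only: the analog pins, the example gesture patterns, the messages and the tolerance value are assumptions for the sake of the example, not the calibration used in the actual project.

// Minimal sketch (assumed pin mapping and thresholds) showing how flex-sensor
// readings could be matched against a stored gesture and accepted only after
// being held for three seconds.
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};   // one sensor per finger/thumb (assumed wiring)
const int NUM_GESTURES = 3;                      // illustrative library size
// Each row is an example ADC pattern (0-1023) for one gesture; real values
// must be calibrated for the glove being used.
const int GESTURE_LIB[NUM_GESTURES][5] = {
  {300, 300, 800, 800, 800},   // e.g. "I need water"
  {800, 300, 300, 800, 800},   // e.g. "Turn on the fan"
  {300, 800, 300, 800, 300}    // e.g. "Turn on the light"
};
const int TOLERANCE = 60;            // allowed deviation per sensor
const unsigned long HOLD_MS = 3000;  // gesture must be held for 3 s

int lastMatch = -1;
unsigned long matchStart = 0;

int matchGesture(const int reading[5]) {
  for (int g = 0; g < NUM_GESTURES; g++) {
    bool ok = true;
    for (int i = 0; i < 5; i++) {
      if (abs(reading[i] - GESTURE_LIB[g][i]) > TOLERANCE) { ok = false; break; }
    }
    if (ok) return g;
  }
  return -1;                         // no gesture recognised
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading[5];
  for (int i = 0; i < 5; i++) reading[i] = analogRead(FLEX_PINS[i]);

  int g = matchGesture(reading);
  if (g != lastMatch) {              // gesture changed, restart the hold timer
    lastMatch = g;
    matchStart = millis();
  } else if (g >= 0 && millis() - matchStart >= HOLD_MS) {
    Serial.print("Gesture recognised: ");
    Serial.println(g);               // downstream: LCD text, APR9600 clip, NodeMCU command
    matchStart = millis();           // avoid repeating every loop iteration
  }
  delay(50);
}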

MICROCONTROLLER

Arduino is an open-source platform used to build electronics projects. It consists of a physical programmable circuit board and a piece of software that runs on a computer; the software is used to write code and upload it to the board. The board uses the ATmega328P controller, which has 28 pins, of which 14 are digital input/output pins and 6 are analog input pins. The recommended input voltage is 7-12 V; the board operates at 5 V logic and also provides a 3.3 V supply pin. The microcontroller processes the data received from the flex sensors and produces the output, which is fed to the LCD, the APR9600 voice module and the NodeMCU (which provides the internet connection).
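
The output side can be sketched in the same way. The fragment below assumes the APR9600 message-trigger inputs are active-low (as on common breakout boards) and that the NodeMCU is reached over a software serial link; all pin numbers are placeholders. Because the ESP8266 works at 3.3 V logic, a level shifter or voltage divider is normally needed on the line going to the NodeMCU.

// Sketch of the output side (assumed wiring): after a gesture is recognised,
// the Arduino pulses the matching APR9600 message-trigger pin and forwards the
// gesture index to the NodeMCU over a software serial link.
#include <SoftwareSerial.h>

const int APR_TRIGGER[3] = {2, 3, 4};   // APR9600 M1-M3 trigger inputs (assumed)
SoftwareSerial nodeLink(10, 11);        // RX, TX to NodeMCU (assumed pins)

void setup() {
  for (int i = 0; i < 3; i++) {
    pinMode(APR_TRIGGER[i], OUTPUT);
    digitalWrite(APR_TRIGGER[i], HIGH); // idle high, pulse low to play a clip
  }
  nodeLink.begin(9600);
}

void announceGesture(int g) {
  if (g < 0 || g >= 3) return;
  digitalWrite(APR_TRIGGER[g], LOW);    // play the pre-recorded clip for this gesture
  delay(200);
  digitalWrite(APR_TRIGGER[g], HIGH);
  nodeLink.println(g);                  // let the NodeMCU act on the same gesture
}

void loop() {
  // announceGesture(g) would be called from the recognition code above
}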

NodeMCU

NodeMCU is an open-source platform; its hardware design is open to edit, modify and build upon. NodeMCU is built around the Wi-Fi-enabled ESP8266 chip, developed by Espressif Systems with a TCP/IP stack, and its hardware is based on the ESP-12 module. It is mostly used for Internet of Things and embedded-system development. It has a 10-bit ADC, a serial peripheral interface (SPI) and a serial (UART) communication protocol. Communication with the ESP8266 module is carried out over UART at a specified baud rate. It takes input from the microcontroller and matches it against the patterns already stored in its database.
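
On the NodeMCU side, a minimal sketch might simply read the gesture index from the UART and switch the home devices mentioned in the introduction. The relay pins and the gesture-to-device mapping below are assumptions for illustration only.

// Minimal NodeMCU (ESP8266) sketch, assuming the Arduino sends a gesture index
// over UART and that a relay module for the fan and light is wired to D1 and D2.
const int FAN_RELAY   = D1;
const int LIGHT_RELAY = D2;

void setup() {
  Serial.begin(9600);               // same baud rate as the Arduino link
  pinMode(FAN_RELAY, OUTPUT);
  pinMode(LIGHT_RELAY, OUTPUT);
}

void loop() {
  if (Serial.available()) {
    int g = Serial.parseInt();      // gesture index sent by the Arduino
    if (g == 1) digitalWrite(FAN_RELAY, !digitalRead(FAN_RELAY));     // toggle the fan
    if (g == 2) digitalWrite(LIGHT_RELAY, !digitalRead(LIGHT_RELAY)); // toggle the light
  }
}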

SPEAKER

A speaker is an electroacoustic transducer that converts an electrical audio signal into the corresponding sound. In general, a speaker is used to produce audio output that can be heard by the listener. The input to the speaker is an audio signal from a computer device or a voice module; here it comes from the APR9600 module.

LCD (LIQUID CRYSTAL DISPLAY)

An LCD is a type of flat-panel display that uses liquid crystals as its primary means of operation. It exploits the light-modulating properties of liquid crystals combined with polarisers. Liquid crystals do not emit light directly; instead, a backlight or reflector is used to produce the display. In this project, the LCD takes its input from the controller and produces the text message as output.
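
Driving the text output can be done with the standard Arduino LiquidCrystal library; the sketch below assumes a 16x2 character LCD in 4-bit mode with placeholder pin assignments, not the project's actual wiring.

// Minimal 16x2 LCD sketch using the LiquidCrystal library (assumed pins).
#include <LiquidCrystal.h>

LiquidCrystal lcd(8, 9, 5, 6, 7, 12);   // RS, EN, D4, D5, D6, D7 (assumed pins)

void setup() {
  lcd.begin(16, 2);                     // 16 columns, 2 rows
}

void showMessage(const char *text) {
  lcd.clear();
  lcd.print(text);                      // text for the recognised gesture
}

void loop() {
  // showMessage("I need water"); would be called after a gesture is recognised
}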

WORKING

Connect all the components as per the block diagram. The circuit has a flex sensor along each finger and the thumb; each flex sensor generates an output that depends on its degree of bend. These sensors are connected to the Arduino microcontroller, which processes the analog signals produced by the flex sensors. The microcontroller is connected to the APR9600 voice module and the LCD: the APR9600 module provides the voice output through the speaker, and the LCD displays the text message produced by the microcontroller.

CONCLUSION

Sign language is the only way for deaf and dumb people to express their views and thoughts to hearing people. This system is reliable and easy to operate. The project can be developed further by adding more gestures and facial expressions.
