Introduction
Myo armbands have been used by developers to create a variety of applications, from controlling characters in video games to replacing physical keyboards and mice with virtual, on-screen versions. However, communication between hearing and deaf people is difficult and can be frustrating, especially in an emergency, so a far more compelling use of the Myo armband is in sign language recognition.
Wearable devices are more practical than many other devices thanks to their accessibility and ease of use. Although there has been a lot of research in this domain, there is still room for a system that is ubiquitous, non-invasive, works in real time, and can be trained interactively by the user.
In this project we tried several classification algorithms and compared them. The results obtained show that it is possible to identify the gestures, but substantial limitations were found that would need to be tackled by further studies.
Related Work
Much research has been conducted in the field of gesture-based wearables, especially in recent years. At the XVIII Symposium on Virtual and Augmented Reality, a conference paper by four researchers evaluating sign language recognition using the Myo armband was presented in June 2016.[1] Their work evaluated the EMG data provided by the Myo armband as features to classify 20 stationary letter gestures. The results obtained with SVM classification show that it is possible to identify the gestures, but substantial limitations were found, such as the perception of fine finger gestures.
In the same year, work at the University of Richmond aimed to implement a real-time system using wearable technology for translating sign language gestures into audible form.[2] Using a k-NN classifier, the researcher obtained up to 98% accuracy when classifying 20 different American Sign Language gestures.
At the 25th European Signal Processing Conference, a new model for real-time hand gesture recognition was presented.[3] EMG signals acquired from the forearm were processed into input data for the model. The k-NN rule, together with the dynamic time warping (DTW) algorithm, was used in the classification stage. The model presented has better recognition accuracy than the stock Myo system (5 default classes).
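As an illustration of this technique (a generic sketch, not the authors' exact implementation), DTW can serve as the distance metric inside a 1-NN classifier. The sequence format below, arrays of per-sample feature vectors, is an assumption:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of
    feature vectors, each of shape (length, channels)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

A 1-NN classifier then labels a new gesture with the class of the stored template that minimizes this distance; DTW tolerates gestures performed at different speeds, which plain Euclidean distance does not.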
Besides the use of wearable technologies in hand-gesture recognition for controlling connected devices, the Myo armband is used in prosthetic limb research to help control motor-powered prosthetic limbs. At the Johns Hopkins Applied Physics Laboratory in the US, the team behind Myo is working on ways to incorporate the armband into the prosthetics industry.[4] The Myo armband reads electromyographic (EMG) pulses produced by skeletal muscles and sends the signals to a computer, which analyzes them to determine which movement the user intends before sending the command back to the prosthetic arm. The armband is chunky and not necessarily comfortable to wear for long periods of time.
The Myo gesture control armband was developed by the Canadian firm Thalmic Labs as a wireless, inertial device worn on the forearm, where it reads the muscle activity of the arm, fingers, and hand, and transmits its measurements to the computer via Bluetooth. The armband possesses 8 EMG sensors that capture data from the muscles, and it also provides 9-axis IMU data through an accelerometer, a gyroscope, and a magnetometer. Thalmic Labs, now known as North, launched the Myo armband in 2013; the armband recognizes 5 preset gestures (wave left, wave right, double tap, fist, and fingers spread) that can control the mouse, a video, or even a PowerPoint presentation. The Myo provides EMG data at a frequency of 200 Hz and IMU data at a frequency of 50 Hz, each sample consisting of a timestamp and the values captured by each sensor, in the range [-128, 127].[1]
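As a concrete illustration of how this stream could be consumed, below is a minimal Python sketch that buffers incoming 8-channel EMG samples into fixed-size windows for later processing. The callback name, the window length, and the overall structure are assumptions; the real Myo SDK/PyoConnect interface differs:

```python
import numpy as np
from collections import deque

# Stream parameters from the text: 8 EMG channels at 200 Hz,
# sample values in [-128, 127]. The window length is an assumption.
NUM_CHANNELS = 8
EMG_RATE_HZ = 200
WINDOW_SIZE = 100          # 0.5 s of EMG at 200 Hz (assumed)

emg_window = deque(maxlen=WINDOW_SIZE)

def on_emg_sample(timestamp, sample):
    """Hypothetical callback invoked once per 8-value EMG sample;
    the actual SDK registration mechanism is not shown here."""
    emg_window.append(np.asarray(sample, dtype=np.int8))
    if len(emg_window) == WINDOW_SIZE:
        # Once full, every new sample yields a sliding window.
        window = np.stack(list(emg_window))   # shape: (WINDOW_SIZE, 8)
        process_window(window)

def process_window(window):
    # Placeholder: feature extraction and classification go here.
    print(window.shape, window.min(), window.max())
```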
In our work on sign language classification, we found the armband particularly interesting because of its relatively low cost, small size and weight, and its software development kit (SDK), which allows us to access the raw data transmitted from the device to a connected computer; a schematic of the armband is given in Figure 1. For the device to work properly, it must be positioned on the forearm of the user with the USB port pointed toward the user's hand.
Unsupervised Solution
In the unsupervised method, no training data are necessary; classification is done gradually and in real time. Once new data are acquired by the Myo device, they are pre-processed by extracting features. The next step is to compare those computed features with the features of the existing clusters, or known words; the distance between features is calculated and stored.
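The report does not fix a particular feature set, so as one plausible choice, the following sketch computes common per-channel time-domain EMG features (mean absolute value, root mean square, waveform length, zero crossings) over a window of samples:

```python
import numpy as np

def extract_features(window):
    """Per-channel time-domain EMG features over a window of shape
    (n_samples, 8). This particular feature set is an assumption."""
    x = window.astype(np.float64)
    mav = np.mean(np.abs(x), axis=0)                  # mean absolute value
    rms = np.sqrt(np.mean(x ** 2, axis=0))            # root mean square
    wl = np.sum(np.abs(np.diff(x, axis=0)), axis=0)   # waveform length
    zc = np.sum(np.signbit(x[1:]) != np.signbit(x[:-1]), axis=0)  # zero crossings
    return np.concatenate([mav, rms, wl, zc])         # 32-dimensional vector
```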
The cluster corresponding to the minimal distance is the candidate cluster for the test data, but the match must hold across all of the features, or at least 60% of them. If it does, the cluster's features are recalculated; if not, a new cluster is created from the test data's features. A sketch of the algorithm for unsupervised classification is given below.
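The following Python sketch captures the procedure described above. The 60% agreement threshold and the minimal-distance candidate selection follow the text; the per-feature tolerance value and the running-mean centroid update are assumptions:

```python
import numpy as np

class OnlineGestureClusterer:
    """Sketch of the unsupervised, incremental classification method."""

    def __init__(self, tolerance=0.5, agreement=0.6):
        self.centroids = []        # one feature vector per cluster (word)
        self.counts = []           # samples absorbed by each cluster
        self.tolerance = tolerance # assumed per-feature closeness bound
        self.agreement = agreement # fraction of features that must match

    def classify(self, features):
        """Assign `features` to a cluster, updating or creating one."""
        features = np.asarray(features, dtype=np.float64)
        if not self.centroids:
            return self._new_cluster(features)
        # Distance from the new sample to every known cluster.
        dists = [np.linalg.norm(features - c) for c in self.centroids]
        best = int(np.argmin(dists))
        # Fraction of individual features close to the best centroid.
        close = np.abs(features - self.centroids[best]) <= self.tolerance
        if close.mean() >= self.agreement:
            # Recalculate the cluster's features (running mean update).
            n = self.counts[best] + 1
            self.centroids[best] += (features - self.centroids[best]) / n
            self.counts[best] = n
            return best
        return self._new_cluster(features)

    def _new_cluster(self, features):
        self.centroids.append(features.copy())
        self.counts.append(1)
        return len(self.centroids) - 1
```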
Conclusion
In this report, we have presented results for three supervised classification algorithms evaluated on a dataset of 20 classes. The results favored the SVM and decision tree algorithms over the LDA algorithm. This is because SVM focuses on the samples that are difficult to classify, whereas LDA weighs all of the data, which reduces its effectiveness. We then proposed an unsupervised solution to be implemented. This method would not need any training data, which is a remarkable advantage. When the model is implemented in real time, the complexity and processing time (latency) of the algorithm must be considered, not only for calculating the distances between features, but also for extracting the features while collecting data from the Myo device.
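A comparison of this kind could be run with scikit-learn roughly as follows; the hyperparameters and the cross-validation setup are assumptions, since the report does not specify them:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(X, y):
    """X: feature matrix, y: labels for the 20 gesture classes."""
    models = {
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "Decision tree": DecisionTreeClassifier(),
        "LDA": LinearDiscriminantAnalysis(),
    }
    for name, model in models.items():
        # 5-fold cross-validated accuracy (assumed protocol).
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```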
Future work is to implement the proposed unsupervised model and evaluate its capacity to recognize words in real time with the Myo device. Additionally, different feature extraction techniques could make a difference to the results.
References
- João Gabriel Abreu, João Marcelo Teixeira, Lucas Silva Figueiredo, Veronica Teichrieb. Evaluating Sign Language Recognition Using the Myo Armband. XVIII Symposium on Virtual and Augmented Reality, June 2016.
- Jackson Taylor. Real-time translation of American Sign Language using wearable technology. University of Richmond, 2016.
- Marco E. Benalcázar, Andrés G. Jaramillo, Victor Hugo Andaluz. Hand Gesture Recognition Using Machine Learning and the Myo Armband. EUSIPCO, 2017.
- Charlie Osborne. Myo gesture control armband augments prosthetic arm. https://www.zdnet.com/article/myo-gesture-control-armband-augments-prosthetic-arm/, January 19, 2016.
- Hiroki Ohashi, Mohammad Al-Naser, Sheraz Ahmed, Takayuki Akiyama, Takuto Sato, Phong Nguyen, Katsuyuki Nakamura, Andreas Dengel. Augmenting Wearable Sensor Data with Physical Constraint for DNN-Based Human-Action Recognition. August 2017.
- Fernando Cosentino. PyoConnect. http://www.fernandocosentino.net/pyoconnect/
- Prajwal Paudyal. Myo armband dataset for some American Sign Language Signs. https://data.mendeley.com/datasets/wgswcr8z24/2, v2, 2018.
- scikit-learn documentation: RBF SVM parameters. https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html
- Classification algorithms in machine learning. https://medium.com/datadriveninvestor/classification-algorithms-in-machine-learning-85c0ab65ff420
- Linear discriminant analysis for machine learning. https://machinelearningmastery.com/linear-discriminant-analysis-for-machine-learning/