Learn2Sign

Languages are best learned in immersive environments with rich feedback. This is especially true for signed languages due to their visual and poly-componential nature. Computer Aided Language Learning (CALL) solutions successfully incorporate feedback for spoken languages, but no such solution exists for signed languages. Current Sign Language Recognition (SLR) systems are not interpretable and hence cannot provide effective feedback to learners. In this work, we propose a modular and explainable machine learning system that provides fine-grained and effective feedback on location, movement, and hand-shape to learners of American Sign Language. In addition, we propose a waterfall architecture for combining the sub-modules, which prevents cognitive overload for learners and decreases the time required to provide feedback. The system has an overall accuracy of 87.9% on real-world data consisting of 25 signs with 3 repetitions each, collected from 100 learners.
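
To make the waterfall idea concrete, below is a minimal Python sketch, not the actual implementation: the sub-module functions, attribute encodings, and feedback messages are hypothetical stand-ins, and in the real system each check is a learned model rather than a simple comparison.

    # Hypothetical stand-ins for the three sub-modules. Each returns None
    # when the check passes, or a feedback message when it fails.
    def check_location(attempt, reference):
        if attempt["location"] == reference["location"]:
            return None
        return "Your hand is in the wrong location for this sign."

    def check_movement(attempt, reference):
        if attempt["movement"] == reference["movement"]:
            return None
        return "The movement of your sign does not match the tutorial."

    def check_handshape(attempt, reference):
        if attempt["handshape"] == reference["handshape"]:
            return None
        return "Your hand-shape does not match the tutorial."

    def waterfall_feedback(attempt, reference):
        # Sub-modules run in a fixed order, and the first failure
        # short-circuits: the learner sees one correction at a time
        # (avoiding cognitive overload) and later checks are skipped
        # (reducing feedback latency).
        for check in (check_location, check_movement, check_handshape):
            feedback = check(attempt, reference)
            if feedback is not None:
                return feedback
        return "Good job! Your attempt matches the reference sign."

    # Example: the learner receives only the movement correction.
    reference = {"location": "chin", "movement": "outward", "handshape": "flat-B"}
    attempt = {"location": "chin", "movement": "circular", "handshape": "flat-B"}
    print(waterfall_feedback(attempt, reference))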

Here is a version (still being updated) of the paper being presented at IUI 2019. The final version will be uploaded after the conference.

Initially, to gather user opinions, we conducted a survey of 52 new learners of American Sign Language at a university. The results of the survey are summarized in the figure below. There were 29 males and 21 females within the age group of 18-40. The survey was conducted around August 2018.

Researchers:

Prajwal Paudyal, Junghyo Lee, Azamat Kamzin, Amine Soudki, Ayan Banerjee, Sandeep Gupta

Methodology

Data

As part of this project, and in solidarity with the ideas of the OpenData Initiative, we will release the data used in two waves. In the first wave, following the ACM IUI 2019 Workshop on Explainable Smart Systems, we are releasing the data in .csv format. In the second wave, we will release the anonymized video data for all participants. Please note that this data comes from real-world usage and thus contains noise.

CSV Data (coming soon)

Video Data (coming soon)

IRB Information

If you use this data, please cite the following papers:

Contact

Prajwal Paudyal: ppaudyal at asu dot edu 
Junghyo Lee: jlee375 at asu dot edu
Azamat Kamzin: akamzin at asu dot edu


Faculty:

Dr. Ayan Banerjee
Dr. Sandeep Gupta 

Future Directions

We are piloting a full-scale study of the effectiveness of this application in summer 2019. If you would like to participate while learning some ASL signs, please contact Prajwal. All participants will receive a t-shirt and a chance to win a gift card.

Acknowledgements

We thank SigningSavvy, who graciously let us use their tutorial videos in the application.