Indian Sign Language Recognition (ISLAR)

Akshay Bahadur (~akshaybahadur21)


Description:

Abstract

Sample this: two cities in India, Mumbai and Pune, though only 80 km apart, have distinctly different spoken dialects. Stranger still, their sign languages are also distinct, with some very different signs for the same objects, expressions, and phrases. While regional diversification in spoken languages and scripts is well known and widely documented, it has apparently percolated into sign language as well, essentially resulting in multiple sign languages across the country. To help overcome these inconsistencies and to standardize sign language in India, I am collaborating with the Centre for Research and Development of Deaf & Mute (an NGO in Pune) and Google, adopting a two-pronged approach: a) I have developed an Indian Sign Language Recognition System (ISLAR) which utilizes Artificial Intelligence to accurately identify signs and translate them into text/vocals in real-time, and b) I have proposed standardization of sign languages across India to the Government of India and the Indian Sign Language Research and Training Centre.

As previously mentioned, the initiative aims to develop a lightweight machine-learning model, for 14 million speech- and hearing-impaired Indians, that is suitable for Indian conditions and flexible enough to incorporate multiple signs for the same gesture. More importantly, unlike other implementations that require additional external hardware, this approach, which uses only a common surgical glove and a ubiquitous smartphone camera, has the potential for hardware-related savings of as much as US$100mn+ at an all-India level. ISLAR received great attention from the open-source community, with Google inviting me to their India and global headquarters in Bangalore and California, respectively, to interact with and share my work with the TensorFlow team.

Outline

  • Background of the problem - understanding the problems faced by the deaf and mute community.
    • 14 million people in India have speech and hearing impairment.
    • Current solutions are neither scalable nor ubiquitous.
  • Defining a strong problem statement
  • Key design aspects of the application.
    • Building a low-resource machine learning model that can be deployed on the edge.
    • Eliminating the need for external hardware.
    • Phase 0: localizing hand gestures.
    • Phase 1: adding facial key points along with hand localization.
    • Phase 2: adding sequential information across frames to carry context, enabling the model to pick up the entire context of a sign.
  • Getting resources from Google and TensorFlow.
  • Results and conclusion
  • Future aspects
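To make the phased design concrete, here is a minimal, purely illustrative sketch of the Phase 0 idea: classifying a static sign from localized hand key points. The landmark layout, template signs, and coordinate values below are all hypothetical (the talk's actual model is a trained neural network; nearest-template matching stands in for it here only to show the normalize-then-classify pipeline).

```python
import math

def normalize(landmarks):
    """Translate landmarks so the wrist (assumed to be index 0) sits at
    the origin, making classification translation-invariant."""
    wx, wy = landmarks[0]
    return [(x - wx, y - wy) for x, y in landmarks]

def distance(a, b):
    """Euclidean distance between two equal-length landmark sets."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, b)))

def classify(landmarks, templates):
    """Return the label of the stored sign template nearest to the
    normalized input landmarks."""
    norm = normalize(landmarks)
    return min(templates, key=lambda label: distance(norm, templates[label]))

# Toy templates: two made-up signs with only 3 landmarks each for brevity
# (a real hand-tracking model would emit many more key points per hand).
templates = {
    "hello":  [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4)],
    "thanks": [(0.0, 0.0), (-0.1, 0.2), (-0.2, 0.4)],
}

frame = [(0.5, 0.5), (0.62, 0.71), (0.72, 0.91)]  # raw image-space points
print(classify(frame, templates))  # → hello
```

Phase 2 would extend this by feeding per-frame features into a sequence model so that the temporal order of frames, not just a single pose, determines the predicted sign.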

Demonstrations

  • Preparation [5 mins]
  • ISLAR Phase 0 [5 mins]
  • ISLAR Phase 1 [5 mins]
  • Presentation at Google, Bangalore [3 mins]
  • Presentation at Google, California [7 mins]

Target audience and outcome

This session is aimed at machine learning practitioners; a basic understanding of neural networks and image processing would be highly appreciated. By the end of the session, the audience will have a clearer understanding of the problems faced by an underrepresented community in India, catalyzing attendees to address social issues in India as well as in other developing countries.

Prerequisites:

  • Basic understanding of and coding experience in Python
  • Basic understanding of Machine Learning
  • Basic concepts of image processing

Video URL:

https://youtu.be/55k4frLOKPQ

Content URLs:

Intro video: https://youtu.be/55k4frLOKPQ

Elevator Pitch, Google California: https://youtu.be/QU-SIQ_qUeQ

Slides: Google Slides

Speaker Info:

Akshay Bahadur’s interest in computer science was sparked while working on a women’s safety application aimed at women’s welfare in India, and since then he has been incessantly tackling social issues in India through technology. He is currently working alongside Google on an Indian Sign Language Recognition system (ISLAR) aimed specifically at running in low-resource environments in developing countries. His ambition is to make valuable contributions to the ML community and leave a message of perseverance and tenacity.

He is one of 8 Google Developer Experts (Machine Learning) from India, as well as one of 150 members worldwide of the Intel Software Innovator program.

Achievements

  • Invited by Google to present my research on ISLAR (Indian Sign Language) at Google headquarters in California.
  • Received "The Most Influential Young Data Scientist of the Year 2019" by The International Society of Data Scientists for my contributions in the field of Machine Learning.
  • Tutorial acceptance at the IEEE Winter Conference on Applications of Computer Vision (WACV 2020) - “Minimizing CPU utilization for Deep Networks”.
  • Delegate at the 2020 Harvard College Conference in Cambridge, Massachusetts, USA.
  • Awarded Top Innovator Award (2019) by Intel.
  • Contributed to Google’s open-source project (Quick, Draw!) and NVIDIA’s open-source project (Autopilot).
  • Presented my work along with the Google TensorFlow team at TensorFlow Roadshow (Bangalore).

Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Beginner
Last Updated: