About Ishaara
Our mission is to break down communication barriers and build a more inclusive digital world using cutting-edge, on-device Artificial Intelligence.
Over 63 million people in India, or about 6.3% of the population, are hard of hearing, and this group is expected to grow due to factors such as aging. Many do not regard their hearing loss as a disability but rather as a different way of life; even so, their social interactions can be significantly limited.

Our project focuses on creating a robust, scalable system to predict, interpret, and translate Indian Sign Language (ISL) in real time, without requiring specialized hardware. ISL combines hand gestures, facial expressions, and body language. Unlike other sign languages such as American Sign Language (ASL), which often uses single-hand gestures, ISL typically uses both hands. This complexity makes developing an accurate machine learning model for ISL interpretation challenging.

We captured and annotated roughly 23,000 images from various angles, then augmented the data to train a more generalized model. The solution runs entirely on-device without a backend, making it lightweight and highly efficient.
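As an illustration of the kind of augmentation described above, the sketch below applies random rotation, scaling, and optional mirroring to normalized (x, y) keypoints. This is a minimal, hypothetical example: the page does not say whether the model consumes raw images or extracted landmarks, and the function name and parameters are ours, not the project's.

```python
import math
import random

def augment_keypoints(points, max_rotation_deg=15.0, max_scale=0.1, flip_p=0.5):
    """Return a randomly rotated, scaled, and possibly mirrored copy of
    normalized (x, y) keypoints centred on (0.5, 0.5).

    A hypothetical sketch of geometric augmentation; real pipelines would
    also vary lighting, crops, etc., and must check that mirroring does
    not change a sign's meaning.
    """
    angle = math.radians(random.uniform(-max_rotation_deg, max_rotation_deg))
    scale = 1.0 + random.uniform(-max_scale, max_scale)
    flip = random.random() < flip_p
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    out = []
    for x, y in points:
        if flip:  # mirror horizontally about the image centre
            x = 1.0 - x
        # translate to the centre, rotate and scale, translate back
        dx, dy = x - 0.5, y - 0.5
        out.append((0.5 + scale * (dx * cos_a - dy * sin_a),
                    0.5 + scale * (dx * sin_a + dy * cos_a)))
    return out
```

Applying several such random transforms to each annotated sample is one common way a dataset of this size is stretched into a more generalized training set.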

Our Journey
We're excited to share that this website represents an evolution. Our team initially developed the Ishaara platform using React. This original version served as the foundation for our current Next.js implementation, showcasing our commitment to modern web technologies.
Visit Legacy Website

Special Thanks

Nicholas Renotte
AI Educator & Developer
His insightful YouTube videos have been an integral part of our project journey, and his clear guidance through complex AI concepts made difficult topics accessible.
Visit GitHub

The Faces Behind Ishaara
The Mentor Behind Ishaara

Fatima Anees Ansari
Professor



