About Ishaara
Over 63 million people in India, about 6.3% of the population, are hard of hearing, and this group is expected to grow as the population ages. Many do not regard their hearing loss as a disability but rather as a different way of life; even so, their social interactions can be significantly limited.

Our project focuses on building a robust, scalable system that predicts, interprets, and translates Indian Sign Language (ISL) in real time, without requiring specialized hardware. ISL combines hand actions, facial expressions, and body language, and it differs from other sign languages such as American Sign Language (ASL): where ASL often uses single-hand gestures, ISL usually uses both hands. This complexity makes developing an accurate machine learning model for ISL interpretation challenging.

We captured and annotated roughly 23,000 images from various angles, then augmented the data to train a more generalized model. The solution operates entirely without a backend, making it lightweight and highly efficient.
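The augmentation step can be illustrated with a minimal sketch. This is an assumption for illustration only, not the project's actual pipeline: it uses NumPy to produce a few common variants (flip, rotation, brightness jitter) of an image array. Note that mirroring a sign image can change handedness, so whether a flip is safe depends on the sign vocabulary.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
    """Return simple augmented variants of an H x W x C image array.

    Hypothetical transforms for illustration; a real pipeline would
    tune these to the dataset (and be careful with mirroring signs).
    """
    variants = [
        # Horizontal flip -- swaps left/right hands, so use with care for ISL.
        np.fliplr(image),
        # 90-degree rotation as a crude stand-in for viewpoint variation.
        np.rot90(image, k=1, axes=(0, 1)),
        # Random brightness jitter, clipped back to the valid pixel range.
        np.clip(image * rng.uniform(0.7, 1.3), 0, 255).astype(image.dtype),
    ]
    return variants

# Example: three variants from one synthetic 64x64 RGB image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = augment(img, rng)
print(len(augmented))  # one source image yields 3 extra training samples
```

In practice each captured image would pass through several such transforms, multiplying the ~23k annotated images into a much larger, more varied training set.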

Our Journey
We're excited to share that this website is an evolution of our project. Our team initially built the Ishaara platform using React, and that version is still available at our legacy website. It served as the foundation for our current Next.js implementation, reflecting our commitment to continuous improvement and the adoption of modern web technologies.
Special Thanks

Nicholas Renotte
AI Educator and Developer
We extend our heartfelt thanks to Nicholas Renotte for his invaluable contributions to the field of artificial intelligence. His insightful YouTube videos have been an integral part of our project journey, providing clear explanations and practical demonstrations. His guidance through complex AI concepts has been superb, making difficult topics accessible and understandable.
The Faces Behind Ishaara
The Mentor Behind Ishaara

Fatima Anees Ansari
Professor



