This guest post was written by Madhav Lavakare. Madhav is a young entrepreneur from New Delhi, India. He has been passionate about tinkering, creating, and innovating from a young age and has a keen eye for problem solving. With TranscribeGlass he wants to solve the biggest problem he has tackled so far: bridging the communication gap between the D/Deaf and hard of hearing community and mainstream society.
Hi! I’m Madhav Lavakare, a 19-year-old from New Delhi, India. I am an avid hobbyist and doer and like to consider myself an imagineer (noun: resourceful engineer). I’ve been building things since I was six years old: a solar-powered oven for baking chocolate chip cookies (because my protective parents didn’t allow me to use the real oven), a burglar alarm so no one could mess up my room, and a home automation system that turns off all the lights and electrical appliances in my room (because my sensible parents wouldn’t stop telling me to save electricity – see a pattern here?). The point is, I love solving problems and developing solutions!
Three years ago, when I was in my junior year of high school, one of my friends suddenly dropped out of school. When I asked why he stopped, he told me that he had severe hearing loss and could not understand what was being taught, nor could he follow the conversations around him. I asked him about hearing aids or cochlear implants, but he said hearing aids were far too expensive, let alone cochlear implants, which are also invasive. I started looking into subtitles and speech-to-text transcription apps. The Live Transcribe app hadn’t been created at the time, but there were other apps, like Otter. So I asked my friend why he couldn’t use a speech-to-text app. He said it was virtually impossible for him to understand what was being taught when he constantly had to look back and forth between a phone screen and the speaker.
From further reading and research, I understood the following. Hearing aids and cochlear implants are expensive and unaffordable for many deaf people, especially in developing countries like India. Smart glasses are also expensive and can be heavy and inconvenient – they are often designed for entirely different use cases. Reliance on captions, whether generated by automatic speech recognition (ASR) or by human captioners, has increased. However, captions usually appear on a phone or screen that you have to glance back and forth at to read, which means missing out on many important aspects of communication. These include key visual cues such as lip reading, hand gestures, and facial expressions; engagement with the speaker through eye contact; the ability to see all of the materials being presented; and environmental awareness, which helps you identify speakers and localize sounds. You also lose the ability to keep your hands free for other tasks, and social interactions become less engaging and meaningful – constantly looking down at a phone isn’t very conducive to an interactive conversation, is it?
And so I started thinking about building affordable “real-time smart captioning glasses” for people with hearing loss.
And that’s the genesis of TranscribeGlass. In my next blog post, I’ll describe TranscribeGlass and talk about the progress we’re making toward bringing it to market!
Check out the gallery below to see how people are using TranscribeGlass.