Researchers at MIT are known for taking technology to a whole new level with their inventions. True to form, students at the MIT Media Lab have developed a prototype device called “AlterEgo” that transcribes and responds to the conversations users have with themselves in their mind. In simple words, it’s like having Siri or Google Assistant listen to your internal commands and then respond to them.

Arnav Kapur, an Indian-origin master’s student at the MIT Media Lab (a division of the Massachusetts Institute of Technology that focuses on the intersection of people and technology) and author of the study, has stressed that the device doesn’t read thoughts, nor the random stray words that just happen to pass through your mind. “You’re completely silent, but talking to yourself,” he said. “It’s neither thinking nor speaking. It’s a sweet spot in between, which is voluntary but also private.”

Watch the magic:

The prototype device looks like a headset and includes bone-conduction headphones that transmit vibrations through the bones of the face to the inner ear. Because the headphones don’t obstruct the ear canal, the system can pass information to the user without interrupting a conversation or interfering with the user’s aural experience. It uses electrodes to pick up neuromuscular signals in the user’s jaw and face that are triggered by internal verbalizations.

According to Kapur, “The motivation for this was to build an IA device, an intelligence-augmentation device. Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” If the user were to silently ask, “What is the time?”, the AlterEgo headset would register this and feed the answer back through bone conduction. This means the user doesn’t have to look at a screen or type in words to find the answer to their question, or to control a device.
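To make that interaction loop concrete, here is a rough, purely illustrative Python sketch of the pipeline the article describes: pick up signals from the face electrodes, translate them into a phrase, answer the query, and deliver the answer over bone conduction. Every function name and the placeholder data below are assumptions for illustration, not part of the actual AlterEgo system.

```python
# Hypothetical sketch of the AlterEgo-style interaction loop.
# The electrode sampling, classifier, and bone-conduction output are stand-ins.

from datetime import datetime
from typing import List


def read_electrode_window() -> List[float]:
    """Stand-in for sampling the jaw/face electrodes over a short window."""
    return [0.0] * 256  # placeholder signal window


def classify_silent_speech(window: List[float]) -> str:
    """Stand-in for the model that maps neuromuscular signals to a phrase."""
    return "what is the time"  # pretend the classifier recognized this query


def answer_query(query: str) -> str:
    """Tiny assistant: handles the one example query from the article."""
    if "time" in query:
        return datetime.now().strftime("It is %H:%M")
    return "Sorry, I did not catch that."


def speak_via_bone_conduction(text: str) -> None:
    """Stand-in for the bone-conduction output; here we just print."""
    print(f"[bone conduction] {text}")


if __name__ == "__main__":
    window = read_electrode_window()           # 1. pick up neuromuscular signals
    query = classify_silent_speech(window)     # 2. translate them into words
    speak_via_bone_conduction(answer_query(query))  # 3. feed the answer back silently
```

The point of the sketch is simply that the user never looks at a screen or speaks aloud: input and output both stay private to the wearer.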


The goal is to combine humans and computers, said Kapur: the more closely we interact with computers, the more we can benefit from their strengths. The technology is still in its early stages, and AlterEgo’s success depends on how accurately it translates the signals it picks up into words, which at present is about 92 percent, according to Kapur and his team at MIT. While that is slightly below the accuracy of Google’s voice transcription, Kapur says the system will improve with use as it gets exposed to more types of signals and words. The team at MIT is now working to make the device “more invisible”.

For the latest from the world of magical innovations and inventions, stay tuned to Dopewope. Do give us a thumbs up on Facebook if you loved this story and would like to see more.

Fond of words and always looking for the right ones to string together to tell a story. Always in search of something new to write. When not writing, you will find him watching movies, reading on the internet, or listening to music. A huge Manchester United and Eminem fan.
