A new gadget gives new meaning to the phrase "your face betrays your thoughts." Researchers at the Massachusetts Institute of Technology have developed a computer interface that puts words to a wearer's internal thoughts.
The computer system consists of a wearable device that uses electrodes to pick up the wearer's neuromuscular signals along the jawline and face. Saying words 'in your head' is all it takes for the device to register what the user wants to say, even though those signals are undetectable to other people. The headpiece also includes bone-conduction headphones, which transmit sound as vibrations through the bones of the face to the inner ear. Because they aren't traditional headphones that cover the ears, they leave the wearer free to hear normal conversation.
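To make the overall flow concrete, here is a minimal sketch of the interaction loop such a device implies: electrode signals come in, a decoder turns them into a word, and a response is played back over bone-conduction audio. This is an illustration only; every function name and data structure below is a placeholder, not the MIT system's actual software.

```python
# Hypothetical pipeline: facial electrode signals -> decoded word -> spoken-back answer.
# All names and values are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SignalWindow:
    samples: list[list[float]]  # one list of samples per facial electrode


def decode_word(window: SignalWindow) -> str:
    """Stand-in for the trained model that maps neuromuscular signals to a word."""
    return "hello"  # placeholder prediction


def answer(word: str) -> str:
    """Stand-in for whatever service responds to the silent query."""
    return f"You silently said: {word}"


def play_bone_conduction(text: str) -> None:
    """Stand-in for speech synthesis routed through bone-conduction headphones."""
    print(f"[bone conduction] {text}")


# Seven electrode channels, 250 samples each (assumed numbers).
window = SignalWindow(samples=[[0.0] * 250 for _ in range(7)])
play_bone_conduction(answer(decode_word(window)))
```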
According to the research team, the goal was to test how well a computer system could respond to internal signals rather than external input.
“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
Professor of media arts and sciences Pattie Maes served as Kapur's thesis advisor.
“We basically can’t live without our cellphones, our digital devices,” Maes said. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.
"So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”
While this technology is relatively new, the idea that internal verbalizations are linked to the physical act of speaking dates back to the 19th century. In the early 1950s, researchers took a stab at studying the theory more rigorously. In fact, the speed-reading movement of the 1960s grew out of research on eliminating internal vocalization.
However, pairing this concept with a computer interface hadn't been heavily explored or documented until now.
The MIT team first had to write code to analyze the signals coming from electrodes placed around the face. The researchers identified seven electrode locations along the mouth and jawline, then tested how the system responded to a limited vocabulary. With that data gathered, they used a neural network to find correlations between the neuromuscular signals and the words in that limited vocabulary. The result was a neural network trained to identify these subvocalized words, one that can even be customized to a particular user.
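The article doesn't detail the team's model, but the general approach it describes, mapping windows of multi-channel electrode signals to words in a small vocabulary, can be sketched with a small neural network. The channel count matches the seven electrodes mentioned above, while the window length, vocabulary, and architecture below are assumptions for illustration, not the MIT implementation.

```python
# Hypothetical sketch: classifying subvocalized words from 7-channel
# neuromuscular signals with a small neural network (PyTorch).
# Window length, vocabulary, and architecture are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CHANNELS = 7       # seven electrode sites along the mouth and jawline
WINDOW_SAMPLES = 250   # samples per analysis window (assumed)
VOCAB = ["call", "reply", "add", "subtract", "select"]  # limited vocabulary (assumed)


class SubvocalClassifier(nn.Module):
    """Maps a window of electrode signals to a word in the limited vocabulary."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                   # (batch, 7, 250) -> (batch, 1750)
            nn.Linear(NUM_CHANNELS * WINDOW_SAMPLES, 128),
            nn.ReLU(),
            nn.Linear(128, len(VOCAB)),                     # one score per vocabulary word
        )

    def forward(self, x):
        return self.net(x)


model = SubvocalClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic data standing in for recorded electrode windows and their word labels.
signals = torch.randn(64, NUM_CHANNELS, WINDOW_SAMPLES)
labels = torch.randint(0, len(VOCAB), (64,))

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(signals)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# Prediction: pick the vocabulary word with the highest score for one window.
predicted = VOCAB[model(signals[:1]).argmax(dim=1).item()]
print(predicted)
```

Customizing the system to a particular user, as the article mentions, would amount to fine-tuning a network like this on that user's own recorded signals.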
During testing, the computer interface had an average accuracy of 92 percent, and Kapur noted that the system will improve over time.
"We’re in the middle of collecting data, and the results look nice," Kapur said. "I think we’ll achieve full conversation some day."
While this new tech certainly adds to our understanding of internal vocalization, the applications could be bigger than even the researchers expect.
“I think that they’re a little underselling what I think is a real potential for the work,” said Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”
“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of the time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”
Source: Interesting Engineering