
AI can detect what you’re typing just by hearing the sound of the keyboard! Threat to digital privacy?
Cutting-edge AI has unveiled a startling new threat: the acoustic side-channel attack. By analysing keyboard sounds, hackers can steal sensitive data, posing a hidden danger to everyday digital interactions.
Highlights
- AI reveals hackers can use typing sounds to steal data, highlighting an unexpected digital vulnerability
- Ordinary keystrokes become avenues for cyberattacks, urging stronger online safeguards
- The discovery underscores evolving cyber risks, pushing for smarter security approaches against emerging threats
Advancements in artificial intelligence have ushered in a new era of innovation, but with it comes the potential for novel cybersecurity threats. A team of UK-based researchers, Joshua Harrison, Ehsan Toreini, and Maryam Mehrnezhad, has recently published a study on arXiv that explores training AI to decipher keyboard input solely from audio cues.
This emerging threat underscores the need for robust countermeasures to safeguard sensitive information in an increasingly interconnected world.
"A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards
correctly identifies 93% of keystrokes via sounds over zoom calls
(reminder: scaling laws apply to ~every modality)
arxiv: https://t.co/NFzNMdD337 pic.twitter.com/B4gMTcDTJ3"
— near (@nearcyan) August 5, 2023
Recording & training for accuracy
In their paper, the researchers detail the methodology they used to train AI to predict typed text by capturing and analysing keyboard keystrokes. The team recorded keystrokes on a MacBook Pro, pressing each of 36 keys 25 times, and these recordings formed the foundation of the AI model's training data.
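As an illustration of what such a pipeline might look like (a minimal sketch, not the authors' published code), the snippet below isolates individual key presses from a recording by simple energy thresholding and converts each one into a mel-spectrogram; the file name, window length, and thresholds are assumptions.

```python
# Minimal sketch (not the authors' published pipeline): isolate individual
# keystrokes from a recording by energy thresholding and turn each one into
# a mel-spectrogram "fingerprint". File name, window length, and thresholds
# are assumptions.
import numpy as np
import librosa
from scipy.signal import find_peaks

SR = 44100                       # assumed recording sample rate
CLIP = int(0.33 * SR)            # ~0.33 s window around each keystroke

def extract_keystroke_spectrograms(wav_path: str):
    y, _ = librosa.load(wav_path, sr=SR, mono=True)
    # Treat amplitude peaks at least 0.1 s apart as individual key presses.
    peaks, _ = find_peaks(np.abs(y), height=0.1 * np.abs(y).max(),
                          distance=int(0.1 * SR))
    specs = []
    for p in peaks:
        start = max(0, p - CLIP // 2)
        clip = y[start:start + CLIP]
        if len(clip) < CLIP:                         # pad the final clip if short
            clip = np.pad(clip, (0, CLIP - len(clip)))
        mel = librosa.feature.melspectrogram(y=clip, sr=SR, n_mels=64)
        specs.append(librosa.power_to_db(mel))       # log-scaled mel-spectrogram
    return specs                                      # one 2-D array per keystroke

# e.g. 25 presses of a single key captured in one file:
# a_specs = extract_keystroke_spectrograms("key_a_25_presses.wav")
```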

These auditory patterns, even those produced by non-mechanical membrane keyboards, were found to contain subtle yet discernible differences. The AI model proved remarkably successful at associating each unique sound profile with the corresponding character, achieving an accuracy rate of up to 95 percent.
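The study itself feeds these spectrograms into a deep image-classification model; the sketch below is a deliberately simpler stand-in, a small convolutional network in PyTorch, intended only to show how spectrogram "fingerprints" can be mapped onto the 36 key classes. The architecture and hyperparameters are illustrative assumptions, not the paper's model.

```python
# Simplified stand-in for the study's deep classifier: a small CNN in PyTorch
# that maps one keystroke's mel-spectrogram to one of 36 key classes.
# Architecture and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

NUM_KEYS = 36  # the 36 keys used in the study

class KeystrokeCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, time_frames), e.g. log-mel spectrograms
        return self.classifier(self.features(x).flatten(1))

# Standard supervised training over (spectrogram, key_index) pairs:
# model = KeystrokeCNN()
# loss_fn = nn.CrossEntropyLoss()
# optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
# logits = model(batch_specs)
# loss = loss_fn(logits, batch_labels); loss.backward(); optimiser.step()
```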
Remote & local training scenarios
The researchers explored the versatility of their approach by examining both local and remote attack scenarios. In the local setting, a nearby microphone recorded the keystrokes and the AI system was trained to decode keyboard input from that audio. Strikingly, accuracy remained impressively high even when the keystrokes were instead captured over a Zoom call, registering only a marginal drop to 93 percent. This underscores the potential for malicious actors to exploit everyday software tools for cyber espionage.
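To build intuition for why the Zoom result is plausible, one can roughly mimic what a conferencing channel does to the audio (band-limiting and mild compression) and re-run the feature extraction on the degraded signal. The sketch below is only an approximation under those assumptions; it does not reproduce Zoom's actual codec.

```python
# Rough approximation of a VoIP channel (not Zoom's real codec): band-limit
# the audio to 16 kHz and apply gentle soft clipping before re-extracting
# keystroke features. Purely an assumption-driven illustration.
import numpy as np
import librosa

def simulate_voip_channel(y: np.ndarray, sr: int = 44100) -> np.ndarray:
    narrow = librosa.resample(y, orig_sr=sr, target_sr=16000)  # band-limit
    narrow = np.tanh(3.0 * narrow) / 3.0                       # mild soft clipping
    return librosa.resample(narrow, orig_sr=16000, target_sr=sr)

# degraded = simulate_voip_channel(y)
# ...then run the same segmentation/spectrogram step on `degraded` and compare
# the classifier's accuracy against the clean recording.
```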
Mitigating the threat
While the prospect of an AI-powered keyboard audio attack is disconcerting, the research also highlights potential strategies to counteract such threats. Altering one's typing style, for instance by switching to touch typing, emerged as an effective countermeasure, causing a substantial reduction in the AI's recognition accuracy, from 64 percent down to 40 percent.
Additionally, software-based defences, such as playing white noise or generating extra, fake keystroke sounds, can confuse the AI's decoding process and diminish its efficacy.
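As a rough illustration of how such a software defence could work (a sketch of the general idea, not a tool described in the paper), the snippet below builds an audio track of low-level white noise overlaid with randomly timed synthetic "decoy" clicks that can be played through the speakers while sensitive text is typed; the sounddevice library, amplitudes, and timings are all assumptions.

```python
# Rough sketch of a software-side countermeasure (not from the paper):
# low-level white noise mixed with randomly timed synthetic "decoy" clicks.
# Assumes the sounddevice library; amplitudes and timings are arbitrary.
import numpy as np
import sounddevice as sd

SR = 44100  # playback sample rate

def synth_click(length_s: float = 0.05) -> np.ndarray:
    """A short decaying noise burst standing in for a recorded key click."""
    n = int(length_s * SR)
    envelope = np.exp(-np.linspace(0.0, 8.0, n)).astype(np.float32)
    return 0.1 * np.random.randn(n).astype(np.float32) * envelope

def build_masking_track(duration_s: float = 10.0) -> np.ndarray:
    n = int(duration_s * SR)
    track = 0.02 * np.random.randn(n).astype(np.float32)   # constant white-noise bed
    click = synth_click()
    rng = np.random.default_rng()
    t = 0
    while True:
        t += int(rng.uniform(0.2, 0.8) * SR)                # random gap between decoys
        if t + len(click) > n:
            break
        track[t:t + len(click)] += click                    # overlay a decoy click
    return np.clip(track, -1.0, 1.0)

# Play the masking track while typing sensitive text:
# sd.play(build_masking_track(10.0), SR); sd.wait()
```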

In the evolving landscape of cybersecurity challenges, the research team's work serves as a poignant reminder of the innovative methods that can be harnessed by malicious actors. As AI continues to revolutionise various domains, including cyber threats, it is imperative that individuals and organisations alike remain vigilant, embracing countermeasures to safeguard sensitive information from the clutches of novel, audio-driven attacks.
To delve deeper into the study's intricacies, the comprehensive research paper provides invaluable insights for understanding and addressing this emerging concern.