
AI can detect what you’re typing just by hearing the sound of the keyboard! Threat to digital privacy?

Cutting-edge AI has unveiled a startling new threat: the Acoustic Attack. By analysing keyboard sounds, hackers can steal sensitive data, posing a hidden danger to digital interactions.

New Delhi, UPDATED: Aug 7, 2023 18:40 IST

Highlights

  • AI reveals hackers can use typing sounds to steal data, highlighting an unexpected digital vulnerability
  • Ordinary keystrokes become avenues for cyberattacks, urging stronger online safeguards
  • The discovery underscores evolving cyber risks, pushing for smarter security approaches against emerging threats

Advancements in artificial intelligence have ushered in a new era of innovation, but with it comes the potential for novel cybersecurity threats. A team of researchers (Joshua Harrison, Ehsan Toreini, and Maryam Mehrnezhad) recently published a study on arXiv, the preprint repository hosted by Cornell University, that delves into training AI to decipher keyboard input solely from audio cues.


This emerging threat underscores the need for robust countermeasures to safeguard sensitive information in an increasingly interconnected world.

Recording & training for accuracy

In the paper, the researchers detail their methodology for training AI to predict typed text from the sound of keystrokes. The team recorded keystrokes on a MacBook Pro, pressing each of 36 keys 25 times, and these recordings formed the foundation for the AI model's learning process.
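To give a concrete sense of the first step, here is a minimal sketch of how individual keystrokes might be isolated from a longer recording using simple energy thresholding. It assumes the librosa library for audio loading; the frame sizes, threshold ratio, and clip length are illustrative guesses, not the paper's exact parameters.

```python
# Sketch: isolate individual keystrokes from a recording via energy
# thresholding. All numeric parameters below are illustrative assumptions.
import numpy as np
import librosa

def segment_keystrokes(wav_path, frame_len=1024, hop=256,
                       threshold_ratio=0.25, clip_sec=0.33):
    """Return a list of audio clips, one per detected keystroke."""
    audio, sr = librosa.load(wav_path, sr=None, mono=True)

    # Short-time energy of each analysis frame.
    energy = np.array([
        np.sum(audio[i:i + frame_len] ** 2)
        for i in range(0, len(audio) - frame_len, hop)
    ])
    threshold = threshold_ratio * energy.max()

    clips, last_end = [], -1
    clip_len = int(clip_sec * sr)
    for idx, e in enumerate(energy):
        start = idx * hop
        if e > threshold and start > last_end:
            clips.append(audio[start:start + clip_len])
            last_end = start + clip_len  # skip frames inside this keystroke
    return clips, sr
```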

(Image credit: Cornell, Joshua Harrison, Ehsan Toreini, Maryam Mehrnezhad)

These auditory patterns, even those produced by non-mechanical membrane keyboards, were observed to contain subtle yet discernible differences. The AI model proved remarkably successful at associating each key's unique sound profile with the corresponding character, achieving an accuracy rate of up to 95 percent.
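As a rough illustration of the training stage, the sketch below converts each keystroke clip into a mel-spectrogram and feeds it to a small convolutional classifier. The researchers used mel-spectrogram features with a far more capable deep-learning model (CoAtNet); this simplified network, and all of its hyperparameters, are assumptions meant only to show the shape of the pipeline.

```python
# Sketch: mel-spectrogram features plus a tiny CNN classifier. This is a
# simplified stand-in for the study's model, not a reproduction of it.
import librosa
import numpy as np
import torch
import torch.nn as nn

NUM_KEYS = 36  # one class per key recorded in the study

def to_melspec(clip, sr, n_mels=64):
    """Mel-spectrogram in decibels, shaped (1, n_mels, time) for the CNN."""
    spec = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=n_mels)
    spec_db = librosa.power_to_db(spec, ref=np.max)
    return torch.tensor(spec_db, dtype=torch.float32).unsqueeze(0)

class KeystrokeCNN(nn.Module):
    def __init__(self, num_classes=NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),  # tolerate variable clip lengths
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

Training would then proceed as ordinary supervised classification: label each clip with the key that produced it and minimise cross-entropy loss over the recorded examples.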

Remote & local training scenarios

The researchers explored the versatility of their approach by examining both local and remote recording scenarios. In the local case, keystrokes were captured by a nearby smartphone microphone; in the remote case, they were captured through the audio of a video call. Strikingly, the AI model's accuracy remained high even when keystrokes were recorded remotely via applications such as Zoom, registering only a marginal drop to 93 percent. This underscores the potential for malicious actors to exploit everyday software tools for cyber espionage.

Mitigating the threat

While the prospect of an AI-powered keyboard audio attack is disconcerting, the research also highlights strategies to counteract such threats. Altering one's typing style, such as switching to touch typing, emerged as an effective countermeasure, cutting the AI's recognition accuracy from 64 percent to 40 percent.

Additionally, software-based defences, such as playing white noise or injecting extra, fake keystroke sounds, can confuse the AI's decoding process and diminish its efficacy.
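As a simple sketch of the white-noise idea, the function below mixes random noise into an audio signal at a chosen signal-to-noise ratio, degrading the acoustic fingerprint of each keystroke. The SNR parameter and its default value are illustrative assumptions, not figures from the study.

```python
# Sketch: mask keystroke audio with white noise at a target SNR.
import numpy as np

def add_white_noise(audio: np.ndarray, snr_db: float = 0.0) -> np.ndarray:
    """Mix white noise into `audio` so the result has the given SNR in dB."""
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise
```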

(Image credit: Cornell, Joshua Harrison, Ehsan Toreini, Maryam Mehrnezhad)

In the evolving landscape of cybersecurity challenges, the research team's work serves as a stark reminder of the innovative methods that malicious actors can harness. As AI continues to revolutionise various domains, including cyber threats, it is imperative that individuals and organisations alike remain vigilant, embracing countermeasures to safeguard sensitive information from novel, audio-driven attacks.


For a deeper look at the study's intricacies, the full research paper provides invaluable insights into understanding and addressing this emerging concern.

Published on: Aug 7, 2023 17:29 IST
Posted by: Minaal
