So you think you’ve got this whole digital-security thing covered. You don’t reuse passwords; you don’t click suspicious links; you might even use burner email addresses. You’re invincible. Except, whoops, what’s that? You still got hacked? Wait, you haven’t been typing lately, have you? Rookie mistake.
As reported by Bleeping Computer, researchers have successfully trained an AI model to identify specific keystrokes on keyboards using the built-in microphone in either the computer or a hacked smartphone. The worst part? The model those researchers created can guess which key was pressed with an accuracy of 95%. Don’t worry, though: When they used Zoom to train the model, the accuracy dropped to 93%. We’re saved.
In all seriousness, it’s not hard to see why “acoustic attacks” are bad news: An AI model like this could be deployed to spy on people’s typing habits and pick up everything from sensitive information to passwords.
Imagine opening Slack, typing out a privileged message to your boss, then launching your bank’s website and typing your username and password to check your account.
This AI system could capture up to 95% of that, which, over time, means it’s aggregating the vast majority of what you type.
How does this (hypothetical) acoustic attack work?
To start, an attacker would record you as you type on your keyboard, picking up the audio through your computer or another miked device, like your smartphone. Another method is to target a member of a Zoom call and match the sounds of their typing to the corresponding messages that appear in the chat.
And how did researchers train their model to identify these specific keyboard sounds? Why, they used computers from the company most likely to brag about privacy and security: Apple. Researchers pressed 36 individual keys on new MacBook Pros a total of 25 times each, then ran the recordings through software to identify tiny differences between each key. It took some trial and error to achieve the final result, but after enough testing, researchers could identify keystrokes with 95% accuracy when recording from a nearby iPhone, and 93% using the Zoom method.
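The researchers’ actual pipeline is far more sophisticated (deep learning on spectrogram images of each key press), but the core idea — every key has a slightly different acoustic fingerprint that you can record many times, average, and later match against — can be sketched with a toy classifier. Everything below is illustrative and assumed, not the paper’s method: the per-key “frequencies” are fake stand-ins for real key sounds, and a nearest-centroid matcher stands in for the neural network.

```python
import cmath
import math
import random

def dft_mag(samples, n_bins=32):
    """Magnitudes of the first n_bins DFT coefficients: a crude spectral fingerprint."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
        for k in range(n_bins)
    ]

def synth_keystroke(freq, n=256, noise=0.05, rng=random):
    """Toy stand-in for a recorded key press: each key 'rings' at its own frequency."""
    return [math.sin(2 * math.pi * freq * t / n) + rng.uniform(-noise, noise)
            for t in range(n)]

# Hypothetical per-key acoustic signatures (three keys instead of the paper's 36).
KEY_FREQS = {"a": 5, "b": 11, "c": 19}

def train(presses_per_key=25):
    """Mirror the study's setup: record each key many times, average the spectra."""
    model = {}
    for key, freq in KEY_FREQS.items():
        spectra = [dft_mag(synth_keystroke(freq)) for _ in range(presses_per_key)]
        model[key] = [sum(col) / len(col) for col in zip(*spectra)]
    return model

def classify(model, samples):
    """Guess which key was pressed: pick the stored spectrum closest to this one."""
    spec = dft_mag(samples)
    return min(model, key=lambda k: sum((a - b) ** 2 for a, b in zip(model[k], spec)))
```

With cleanly separated fake signatures like these, the toy matcher is essentially always right — `classify(train(), synth_keystroke(KEY_FREQS["b"]))` returns `"b"` — which is exactly why the real-world result (95% on messy, overlapping key sounds) is so unnerving.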
How to protect yourself from (again, hypothetical) acoustic attacks
The good news is this particular AI model was created purely for research purposes, so you don’t need to worry about running into it in the wild. That said, if researchers could figure it out, attackers might not be far behind.
Knowing that, you can protect yourself by being mindful of the process: This attack only works if a microphone is recording your keystrokes, which means your computer or phone needs to have been hacked ahead of time, or you need to be on a Zoom call with an attacker. With that in mind, keep tabs on your device’s microphone permissions, and revoke access for any app that doesn’t seem to need it. If you see your microphone is active when it shouldn’t be, that’s a red flag as well.
You should also mute yourself whenever you aren’t actively speaking on a Zoom call: It’s good practice anyway, but it’s especially useful if there’s an attacker on the call. If you’re muted while typing your messages in the chat, they can’t use the sound of your keystrokes against you.
To avoid being hacked in the first place, make sure you’re following the usual security tips as well: Don’t click on strange links, don’t open messages from strange senders, and don’t download and open files you aren’t sure about.
Password managers are your friend
That being said, let’s say you are hacked without knowing it, and your phone is listening to your keystrokes. It’s good practice to rely on password managers whenever possible, especially those that use auto-fill: If you can log into your accounts with a face scan or fingerprint scan, there won’t be any passwords typed to worry about. You could also run white noise near your devices, so any sound recordings would be useless.
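If you want to experiment with the white-noise idea, here’s a minimal sketch that writes a masking track as a WAV file using only Python’s standard library. The duration, sample rate, amplitude, and filename are all arbitrary choices, and to be clear, playing this near your keyboard is a hedge, not a guarantee that recordings become useless.

```python
import random
import struct
import wave

def write_white_noise(path, seconds=2.0, rate=44100, amplitude=0.3, rng=random):
    """Write a mono, 16-bit WAV file of uniform white noise to use as a masking track."""
    n_frames = int(seconds * rate)
    # Each frame is one signed 16-bit sample; scale random values into that range.
    frames = b"".join(
        struct.pack("<h", int(amplitude * 32767 * rng.uniform(-1.0, 1.0)))
        for _ in range(n_frames)
    )
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(frames)
    return path
```

Call `write_white_noise("noise.wav")` and loop the result in any audio player sitting near your keyboard.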