Growth of AI voice-cloning scams alarms US; here’s how hackers are using the technology
In the wake of rising complaints about voice-cloning scams carried out with AI (artificial intelligence), US authorities are sharing real-life examples of victims and advice on how to handle such calls.

Highlights
- Scammers mimic the voice of a victim’s relative to dupe them into handing over money
- About 70% of people surveyed were not confident they could distinguish a cloned voice from the real thing
Amid the growing distress caused by AI tools that clone people’s voices, the Federal Trade Commission (FTC) in the US issued a warning against such scams and highlighted the need to address them.
Victims are duped into believing they are speaking to a relative who needs money to cover expenses after a car accident or to pay a ransom in a kidnapping. Experts familiar with the matter said that such scams are as effective as they are disturbing.
AI voice scams that feel real
"This is, without hesitation, the scariest thing I have ever seen," said Scott Hermann, the founder of financial and identity protection company, IdentityIQ.
Hermann, who condemned the scams, added that criminals can create an uncannily accurate clone of someone's voice from just a 20-second audio clip of them speaking, frequently taken from social media.
"Help me, mom, please help me," Jennifer DeStefano, an Arizona-based mother, heard a voice saying on the other end of the line, DeStefano told a local television station in April. On hearing of such a plight, DeStefano believed the voice and got convinced that it was her 15 year old daughter in deep distress while away on a skiing trip.
It is not just Jennifer DeStefano; many other parents and relatives have reported similar scams. Wasim Khaled, chief executive of Blackbird.AI, stressed how convincing such deepfakes have become: "Scammers can employ different accents and genders, or even mimic the speech patterns of loved ones. [The technology] allows for the creation of convincing deep fakes."
What are voice-related AI scams?
Voice-related scams rely on artificial intelligence: fraudsters first harvest audio and personal details of a person’s family members from social media or other online platforms. With that material, they use deep-learning voice models to generate an AI-based synthetic voice. Combined with text-to-speech techniques, the cloned voice can then be made to say virtually anything.
Additionally, the generative adversarial networks (GANs) used to generate artificial voices have advanced: newer deep-learning models can be fine-tuned to produce realistic synthetic voices with minimal audio data and computing power.
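To illustrate how little raw material such tools now require, here is a minimal sketch of few-shot voice cloning, assuming the open-source Coqui TTS Python package and its XTTS v2 multilingual model; the file names and spoken text below are purely illustrative and not taken from any reported case.

```python
# Minimal sketch of few-shot voice cloning, assuming the open-source
# Coqui TTS package (pip install TTS) and its XTTS v2 model.
# File names and text are illustrative placeholders.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short reference clip of the target speaker -- roughly the 20 seconds of
# audio Hermann says is enough, often lifted from a public social media post.
reference_clip = "short_public_clip_of_relative.wav"

# Text-to-speech in the cloned voice: whatever is typed here is synthesized
# so that it sounds like the reference speaker, then written to a WAV file.
tts.tts_to_file(
    text="Any sentence typed here is spoken in the cloned voice.",
    speaker_wav=reference_clip,
    language="en",
    file_path="cloned_voice.wav",
)
```

The specific library matters less than the pattern the FTC is warning about: a pretrained model plus a brief public audio clip is essentially the entire pipeline.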
The capacity of artificial intelligence to blur the lines between reality and fiction, according to experts, poses the greatest threat since it will provide cybercriminals with a simple and efficient tool for spreading misinformation.
According to a survey published last month by US-based McAfee Labs, almost seventy percent of respondents said they were not confident they could differentiate between a cloned voice and the real thing.
Moreover, one in four respondents to a global study of 7,000 people from nine nations, including the United States, indicated they had fallen victim to an AI voice cloning fraud themselves or knew someone who had.
Many Indians lost more than 50,000 rupees to AI voice scams
The McAfee report also highlighted that a startling 83 percent of Indians who fell victim to AI voice scams reported financial losses, with nearly half (48 percent) losing more than 50,000 rupees.
Indians are especially exposed to voice-cloning scams, with 86 percent of them sharing their voice data online (on social media, in voice notes, etc.) at least once a week.
Indeed, people are now more sceptical about the validity of internet content as a result of the prevalence of deepfakes and fake news. The study added that 27 percent of Indian adults no longer trust social media sites, and 43 percent are concerned about the spread of false or misleading information.