Microsoft AI Can Impersonate Your Voice With Just 3 Seconds Of Audio

According to Microsoft, a new AI named VALL-E can impersonate your voice with just 3 seconds of audio and can also match the speaker’s “emotional range” and tempo, making it a highly accurate type of mimicry.


Thanks to a disturbing new AI named VALL-E, your voice might be digitally cloned and used to impersonate you.

The artificial intelligence system can replicate any human voice from just three seconds of audio.

The cloned voice can then be used to turn any written text into speech, allowing someone to use the tool to talk for you.

It’s also intended to match the speaker’s “emotional range” and tempo, making it a highly accurate type of mimicry.

Microsoft trained the AI model on 60,000 hours of English-language speech

Thank goodness, the public still cannot access the AI tool. According to Microsoft, the “neural codec language model” (pdf below) was trained on 60,000 hours of English-language speech.
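For readers curious what a "neural codec language model" means in practice, below is a minimal, illustrative sketch of the pipeline the paper describes: a neural audio codec turns a short speech prompt into discrete tokens, a language model continues those tokens conditioned on the target text, and the codec decodes the result back into audio. This is not Microsoft's code; the class names (DummyCodec, AcousticLM), dimensions, and token rates are all hypothetical stand-ins used only to show the shape of the idea.

```python
# Illustrative sketch only: a toy codec and a toy transformer stand in for the
# real neural codec and language model described in the VALL-E paper.
import torch
import torch.nn as nn

class DummyCodec(nn.Module):
    """Stand-in for a neural audio codec: waveform -> discrete tokens -> waveform."""
    def __init__(self, vocab_size=1024, frame_rate=75):
        super().__init__()
        self.vocab_size = vocab_size
        self.frame_rate = frame_rate  # hypothetical tokens per second of audio

    def encode(self, waveform, sample_rate=16000):
        # Hypothetical quantisation: one random token per codec frame.
        n_frames = int(waveform.shape[-1] / sample_rate * self.frame_rate)
        return torch.randint(0, self.vocab_size, (1, n_frames))

    def decode(self, tokens):
        # Hypothetical reconstruction of a waveform from tokens.
        return torch.randn(1, tokens.shape[-1] * 213)

class AcousticLM(nn.Module):
    """Toy decoder-style model over phoneme tokens plus acoustic tokens."""
    def __init__(self, text_vocab=100, audio_vocab=1024, dim=256):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, dim)
        self.audio_emb = nn.Embedding(audio_vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, audio_vocab)

    def generate(self, phonemes, prompt_tokens, steps=50):
        # Autoregressively extend the acoustic-token sequence, conditioned on
        # the phonemes of the target text and the 3-second speaker prompt.
        audio = prompt_tokens
        for _ in range(steps):
            x = torch.cat([self.text_emb(phonemes), self.audio_emb(audio)], dim=1)
            logits = self.head(self.backbone(x))[:, -1]
            nxt = torch.distributions.Categorical(logits=logits).sample()
            audio = torch.cat([audio, nxt.unsqueeze(0)], dim=1)
        return audio[:, prompt_tokens.shape[-1]:]

# Usage: a 3-second enrolment clip conditions generation of new speech.
codec, lm = DummyCodec(), AcousticLM()
three_second_clip = torch.randn(1, 3 * 16000)       # the "3 seconds of audio"
prompt_tokens = codec.encode(three_second_clip)     # speaker prompt as tokens
phonemes = torch.randint(0, 100, (1, 30))           # phonemised target text
new_tokens = lm.generate(phonemes, prompt_tokens)   # continue in that voice
cloned_speech = codec.decode(new_tokens)            # back to a waveform
```

In the real system the codec tokens carry the speaker's timbre and recording conditions, which is why a short prompt is enough for the model to continue speaking "as" that person.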

Del, a videogame artist at Naughty Dog, the studio behind “The Last of Us,” claimed that “[VALL-E] can synthesise super-high-quality text-to-speech from the same voice using a 3-second sample of real speech. Even the sample data’s emotional range and aural surroundings can be replicated.”

Del said that it might have an impact on audiobooks in the future. “At the moment, VALL-E can only read, not necessarily PERFORM with the emotional, tonal and pacing range of a voice actor. However, much of the audiobook industry relies on a lot of junior voice actor talent that will undoubtedly feel the brunt of this first.”

VALL-E has undoubtedly raised some eyebrows online. “This is terrifying thinking about scam callers getting their hands on this,” tweeted Kevin Nash.

Christina Kraus, another user, wrote: “What use does this even have except for scam and impersonation purposes? Why don’t we focus on AI where it actually helps humanity? Why are we getting AI image generators and voice imitation? That’s literally the last thing we need.”

However, the tool could be genuinely useful in a variety of situations. People who lose the ability to speak, like the late Stephen Hawking, who was left unable to speak by Motor Neurone Disease, could use the AI system to create replicas of their own voices and keep communicating with the outside world.

Read the report given below:

Neural-Codec-Language-Models
