Computer scientists at the University of Waterloo have developed voice deepfake software that can fool voice authentication systems 99% of the time. All this machine-learning-based voice cloning software needs is five minutes of recorded speech from the target's voice.
Image source: uk.pcmag.com
The study's lead author, Andre Kassis, a Ph.D. candidate in Computer Security and Privacy at the University of Waterloo, explained how voice authentication works: "When registering via voice authentication, you will be asked to repeat a specific phrase in your voice. The system then extracts a unique voice signature (voiceprint) from the provided phrase and stores it on the server <..> When you attempt to authenticate in the future, you will be prompted to say a different phrase, and the features extracted from it will be compared to the voiceprint stored in the system to determine whether access should be granted."
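The enroll-then-verify flow Kassis describes can be sketched in a few lines. This is an illustrative toy, not the actual Waterloo system: real deployments extract voiceprints with trained speaker-embedding models, whereas here `extract_voiceprint` is a placeholder and the features, names, and threshold are invented for the example.

```python
import math

def extract_voiceprint(audio_features):
    # Placeholder: real systems derive an embedding from the audio
    # with a neural speaker model; here we pass features through as-is.
    return list(audio_features)

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def enroll(db, user, audio_features):
    # Registration: extract a voiceprint from the enrollment phrase
    # and store it on the server.
    db[user] = extract_voiceprint(audio_features)

def authenticate(db, user, audio_features, threshold=0.85):
    # Verification: extract features from the new phrase and compare
    # them against the stored voiceprint.
    probe = extract_voiceprint(audio_features)
    return cosine_similarity(db[user], probe) >= threshold

db = {}
enroll(db, "alice", [0.9, 0.1, 0.4])
print(authenticate(db, "alice", [0.88, 0.12, 0.41]))  # similar voice -> True
print(authenticate(db, "alice", [0.1, 0.9, 0.2]))     # different voice -> False
```

The key point for the attack described below is that access hinges entirely on this similarity comparison: any audio whose extracted features land close enough to the stored voiceprint is accepted.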
Reportedly, even the anti-spoofing measures used by voice authentication systems fail to detect the forgery, because the program built by the University of Waterloo researchers removes the markers in deepfake audio that reveal it was computer-generated. Given six attempts to bypass an authentication system, the researchers succeeded 99% of the time.
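To see why stripping those markers is enough, it helps to note that anti-spoofing is typically a separate gate in front of the speaker match. The sketch below is a simplified assumption about that architecture, not the Waterloo team's code: the spoof detector, the `artifact_score` field, and the threshold are all hypothetical stand-ins.

```python
def looks_synthetic(audio_features):
    # Placeholder spoof detector: flags audio whose synthesis-artifact
    # score is too high. Real detectors are trained classifiers.
    return audio_features.get("artifact_score", 0.0) > 0.5

def matches_voiceprint(audio_features, stored_print):
    # Placeholder speaker match against the enrolled voiceprint.
    return audio_features.get("speaker_id") == stored_print

def authenticate_with_spoof_check(audio_features, stored_print):
    if looks_synthetic(audio_features):
        return False  # rejected as computer-generated
    return matches_voiceprint(audio_features, stored_print)

# A raw deepfake carries telltale artifacts and is rejected:
print(authenticate_with_spoof_check(
    {"artifact_score": 0.9, "speaker_id": "alice"}, "alice"))  # False
# Once the attack removes those artifacts, the clone passes both stages:
print(authenticate_with_spoof_check(
    {"artifact_score": 0.1, "speaker_id": "alice"}, "alice"))  # True
```

In this layout the spoof detector is the only defense against cloned audio, so defeating it leaves nothing between a convincing deepfake and a granted login.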
Creating a fake voice to trick a voice authentication system is nothing new, but the software developed by the computer scientists proved so effective that Urs Hengartner, a professor of computer science at the University of Waterloo, expressed hope that companies relying on voice authentication as their only authentication factor "will consider using additional or stronger authentication measures."