Deepfake(3): Wav2lip Video Deepfaking

The “Wav2lip” program processes an audio file and a video together so that the person in the video lip-syncs to the audio.

Just clone the repository in Google Colab and run the program. Executing “inference.py” in the project directory as shown below combines the “DeepfakeTest01.mp4” and “DeepfakeTest01.wav” files (the two files must have the same duration) and produces “result_voice.mp4”.

Instead of using the notebook's file-upload function, which I removed, I connected Google Drive, read the input files from it, and downloaded the resulting file from there.

!cd /content/gdrive/MyDrive/Wav2lip && python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face "/content/gdrive/MyDrive/Wav2lip/DeepfakeTest01.mp4" --audio "/content/gdrive/MyDrive/Wav2lip/DeepfakeTest01.wav"
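Since the two inputs must have the same duration, it can help to verify the audio length before launching inference. A minimal sketch using only Python's standard `wave` module (the file name simply reuses the example above; checking the mp4 side would need an external tool such as ffprobe):

```python
import wave

def wav_duration(path):
    """Return the duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        # duration = number of frames / frames per second
        return w.getnframes() / w.getframerate()

# Example (hypothetical path from this post):
# wav_duration("/content/gdrive/MyDrive/Wav2lip/DeepfakeTest01.wav")
```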

It doesn’t work if the file is longer than 5 minutes, so I wrote a program that cuts the input into 5-minute segments and processes them in turn. Two methods are possible. The first method is to attach the audio to be dubbed to the original mp4 file (removing the mp4's original audio) and work from that uploaded file. The second method is to first cut the lecture video to the length of the audio file to be dubbed, then cut both the audio and the video at 5-minute intervals, and continue as in the first method.
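The cutting step above can be sketched as follows: compute the segment boundaries for the clip's total length, then pass each (start, end) pair to a cutting tool such as ffmpeg. The function name and the 300-second chunk size are my own illustration, not the post's actual code:

```python
def segment_ranges(total_seconds, chunk=300):
    """Split a duration into consecutive (start, end) ranges of at most
    `chunk` seconds each (300 s = the 5-minute limit mentioned above)."""
    ranges = []
    start = 0
    while start < total_seconds:
        ranges.append((start, min(start + chunk, total_seconds)))
        start += chunk
    return ranges

# A 12.5-minute (750 s) clip becomes three segments:
print(segment_ranges(750))  # [(0, 300), (300, 600), (600, 750)]
```

Each pair could then drive a command like `ffmpeg -ss <start> -to <end>` on both the audio and the video, producing chunks short enough for Wav2lip to handle.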

The video below explains it in detail.
