Lens Studio is a free virtual-character creation program from Snapchat. It lets you build virtual characters and animation clips with a range of tools. By publishing your creation as a Snapchat Lens, you can chat as a real-time virtual character on Snapchat, or join a Zoom video conference with it through a virtual camera. The video below shows several results made with these features.
Continue reading “Creating Virtual Avatars Using Lens Studio”
The most frequently recommended fake-voice technology is Coqui-AI (Coqui TTS). I downloaded it and tried running it.
First, I installed it to check its TTS and voice-synthesis functions. Instead of using Colab, I created a virtual environment for Coqui and ran it locally on a Mac, cloning the code with git.
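As a rough sketch, the local setup described above might look like the following. The model name is only an example (any model listed by tts --list_models can be substituted), and paths are assumptions, not the post's exact commands:

```shell
# Create and activate a virtual environment for Coqui TTS
python3 -m venv coqui-env
source coqui-env/bin/activate

# Install Coqui TTS from PyPI
# (alternatively: git clone https://github.com/coqui-ai/TTS and pip install -e .)
pip install TTS

# Synthesize speech with a pretrained English model (model name is an example)
tts --text "Hello from Coqui TTS." \
    --model_name "tts_models/en/ljspeech/tacotron2-DDC" \
    --out_path hello.wav
```

Running the tts command the first time downloads the chosen model before synthesizing.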
Continue reading “Deepfake(4): TTS Voice faking”
The "Wav2Lip" program processes an audio file and a video together so that the person in the video lip-syncs to the audio (for example, a music track).
Just clone the repository in Google Colab and run the program. Executing "inference.py" in the project directory combines the "DeepfakeTest01.mp4" and "DeepfakeTest01.wav" files (the two files should have the same duration) and produces "result_voice.mp4".
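A hedged sketch of that invocation, following the flags in the Wav2Lip README (the checkpoint path and file names are assumptions based on this post):

```shell
# Inside the cloned Wav2Lip directory; in Colab, prefix commands with "!"
cd Wav2Lip

# --face is the input video, --audio is the speech/music track to lip-sync to
python inference.py \
    --checkpoint_path checkpoints/wav2lip_gan.pth \
    --face DeepfakeTest01.mp4 \
    --audio DeepfakeTest01.wav

# The lip-synced output is written to results/result_voice.mp4
```

The pretrained wav2lip_gan.pth checkpoint must be downloaded separately and placed in the checkpoints directory before running inference.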
Continue reading “Deepfake(3): Wav2lip Video Deepfaking”
For Face-Morphing, clone the repository from its GitHub location.
Use the command git clone https://github.com/Azmarie/Face-Morphing.
The required packages are:
Continue reading “Deepfake(2): MakeItTalk-Python deepfake image talk”
Well, you can make a still image lip-sync to a given WAV file. Here are the steps.
https://github.com/yzhou359/MakeItTalk/blob/main/README.md is the place to start. There are several ways to get the code running: cloning directly from GitHub, using Anaconda, or using Google Colab. I used Colab. Continue reading “MakeItTalk-Python deepfake image talk (1)”
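In a Colab notebook, the clone-and-setup step described above might be sketched like this; the requirements file name is an assumption based on common repository layout, so check the MakeItTalk README for the exact steps:

```shell
# Clone MakeItTalk and install its Python dependencies
# (in a Colab cell, prefix each command with "!")
git clone https://github.com/yzhou359/MakeItTalk
cd MakeItTalk
pip install -r requirements.txt
```

After installation, the repository's demo notebooks and scripts can be run against your own image and WAV file.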