Before attempting any of this, please make sure you have read, understood and completed the installation instructions. If you are experiencing issues, please raise them in the faceswap-playground repository instead of the main repo.
So, you want to swap faces in pictures and videos? Well, hold up, because first you need to understand what this collection of scripts will do, how it does it, and what it currently can't do.
The basic operation of this script is simple. It trains a machine learning model to recognize and transform between two faces based on pictures. The machine learning model is our little "bot" that we're teaching to do the actual swapping, and the pictures are the "training data" we feed it. Note that the bot is built to process faces; other objects probably won't work.
So here's our plan. We want to create a reality where Donald Trump lost the presidency to Nic Cage; we have his inauguration video; let's replace Trump with Cage.
In order to accomplish this, the bot needs to learn to recognize both face A (Trump) and face B (Nic Cage). Out of the box, it has no idea what Trump or Nic Cage looks like, so we need to show it lots of pictures of each and let it learn to tell them apart. That means the first step is gathering pictures of both faces.
A possible source is Google, DuckDuckGo or Bing image search. There are scripts to download large amounts of images. Alternatively, if you have a lot of videos of the person you're looking for (like interviews, public speeches, movies), you can convert a video to still images/frames and use those.
Feel free to list your image sets in the faceswap-playground, or add more methods to this file.
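If you already have a plain text file of image URLs (the trump-urls.txt name below is just illustrative; how you collect the URLs is up to you), one simple way to fetch them all into a folder is wget:

```bash
# Download every URL listed in trump-urls.txt into the trump photo folder
wget -i trump-urls.txt -P ~/faceswap/photo/trump
```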
So now we have a folder full of pictures of Trump and a separate folder of Nic Cage. Let's save them inside the directory where we put the faceswap project, for example ~/faceswap/photo/trump and ~/faceswap/photo/cage.
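To set those folders up (assuming the project was cloned to ~/faceswap, as in the paths above):

```bash
# Create the photo folders for both subjects
mkdir -p ~/faceswap/photo/trump ~/faceswap/photo/cage
```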
So here's a problem. We have a ton of pictures of both our subjects, but they're just pictures of them doing stuff or standing around with other people. Their bodies are in the shot, other people are in the shot... it's a mess. We can only train our bot if the data is consistent and focuses on the subject we want to swap. This is where faceswap first comes in.
```bash
# To extract trump's faces:
python faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump
# To extract cage's faces:
python faceswap.py extract -i ~/faceswap/photo/cage -o ~/faceswap/data/cage
```
We specify our photo input directory and the output folder where our training data will be saved. The script will then try its best to detect face landmarks, crop the image around each face it finds, and save the crop to the output folder. Note: this script makes gathering training data much easier, but it is not perfect. It will sometimes detect multiple faces in a photo (including the wrong ones), and it does not check whether a detected face belongs to the person we want to swap. Therefore: always check your training data before you start training. The quality of the training data directly influences how good your model will be at swapping.
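As a quick sanity check (a minimal sketch; the folder names are the ones used above), you can count how many face crops were produced and then look through the folders by eye, deleting any crops that are badly framed or show the wrong person:

```bash
ls ~/faceswap/data/trump | wc -l   # number of extracted trump face crops
ls ~/faceswap/data/cage | wc -l    # number of extracted cage face crops
```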
The training process takes the longest, especially on CPU. We specify the folders where the two sets of faces are, and where to save our trained model. Once you run the command, it will start hammering away at the training data. I personally like to watch the preview and quit the process once I'm happy with the results.
```bash
python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/
# or add -p to show a preview
python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/ -p
```
If you use the preview feature, select the preview window and press Q to save your model and quit gracefully. Without the preview enabled, you might have to forcefully quit by hitting Ctrl+C to cancel the command. Note that the model is only saved after roughly every 100 iterations, which can take quite a while, so make sure it has saved before stopping the process.
Now that we're happy with our trained model, we can convert our video. How does it work? Similarly to the extraction script, actually! The conversion script basically detects a face in a picture using the same algorithm, quickly crops the image to the right size, runs our bot on this cropped image of the face it has found, and then (crudely) pastes the processed face back into the picture.
Remember those initial pictures we had of Trump? Let's try swapping a face there. We will use that directory as our input directory, create a new folder where the output will be saved, and tell the script which model to use.
```bash
python faceswap.py convert -i ~/faceswap/photo/trump/ -o ~/faceswap/output/ -m ~/faceswap/models/
```
It should now start swapping the faces in all of those pictures.
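Once it has finished, list the output folder and open a few of the converted pictures to judge the quality before going any further:

```bash
ls ~/faceswap/output/ | head   # list the first few converted pictures
```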
A video is just a series of pictures (frames). You can export a video to still frames using ffmpeg, for example. Below is an example command that splits a video into numbered frames (%04d zero-pads the frame numbers so they sort correctly):

```bash
ffmpeg -i /path/to/my/video.mp4 /path/to/output/video-frame-%04d.png
```
If you then point faceswap at the resulting directory of frames, it will automatically go through all of them. And here's a command to stitch the png frames back into a single video (the frame-number pattern must match the one used when exporting):

```bash
ffmpeg -i video-frame-%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
```
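Putting it all together for the inauguration video, a rough end-to-end run might look like the sketch below. The paths, the video filename and the 25 fps value are illustrative, and it assumes the converted frames keep their original filenames so the stitching pattern still matches:

```bash
# 1. Split the source video into numbered frames
mkdir -p ~/faceswap/video-frames ~/faceswap/video-output
ffmpeg -i ~/faceswap/inauguration.mp4 ~/faceswap/video-frames/video-frame-%04d.png

# 2. Swap the faces on every frame using the trained model
python faceswap.py convert -i ~/faceswap/video-frames/ -o ~/faceswap/video-output/ -m ~/faceswap/models/

# 3. Stitch the converted frames back into a video
ffmpeg -i ~/faceswap/video-output/video-frame-%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" ~/faceswap/cage-inauguration.mp4
```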
This guide is far from complete. Functionality may change over time, and dependencies are added and removed as the project evolves.
If you are experiencing issues, please raise them in the faceswap-playground repository instead of the main repo.