A command-line tool and Python library to automatically generate new textures similar to a source image or photograph. It's useful in the context of computer graphics if you want to make variations on a theme or expand the size of an existing texture.
This software is powered by deep learning technology, using a combination of
convolutional networks and example-based optimization to synthesize images. We're
building texturize as the highest-quality open source library available!
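To give an intuition for what "example-based optimization" means, the sketch below shows the general idea in plain PyTorch: an output image is optimized until feature statistics (Gram matrices) computed by a small convolutional network match those of the example texture. This is a conceptual illustration only, not texturize's actual implementation; the single-layer extractor, image sizes, and hyper-parameters are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gram_matrix(feats):
    # Channel-by-channel correlations of (channels, height, width) activations.
    c, h, w = feats.shape
    flat = feats.reshape(c, h * w)
    return flat @ flat.t() / (c * h * w)

# Stand-in feature extractor; a real implementation uses a pre-trained network.
extract = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
for p in extract.parameters():
    p.requires_grad_(False)

example = torch.rand(1, 3, 128, 128)                     # the source texture
output = torch.rand(1, 3, 192, 192, requires_grad=True)  # any output shape works
target = gram_matrix(extract(example)[0])

optimizer = torch.optim.Adam([output], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    loss = F.mse_loss(gram_matrix(extract(output)[0]), target)
    loss.backward()
    optimizer.step()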
The examples are available as notebooks, and you can run them directly in-browser thanks to Jupyter and Google Colab:
- Gravel: online demo and source notebook.
- Grass: online demo and source notebook.
These demo materials are released under the Creative Commons BY-NC-SA license, including the text, images and code.
Generate variations of any shape from a single texture.
Usage:
texturize remix SOURCE...
Examples:
texturize remix samples/grass.webp --size=720x360
texturize remix samples/gravel.png --size=512x512
from texturize import api, commands, io
# The input could be any PIL Image in RGB mode.
image = io.load_image_from_file("examples/dirt1.webp")
# Coarse-to-fine synthesis runs one octave at a time.
remix = commands.Remix(image)
for result in api.process_octaves(remix, size=(512,512), octaves=5):
    pass
# The output can be saved in any PIL-supported format.
result.images[0].save("output.png")
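If you want to inspect the coarse-to-fine progress, each iteration of the loop above yields the result for one octave, so the intermediate images can be saved too. This assumes each intermediate result exposes the same images attribute as the final one; the filenames are just examples.

for octave, result in enumerate(api.process_octaves(remix, size=(512,512), octaves=5)):
    result.images[0].save(f"remix_octave{octave}.png")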
Reproduce an original texture in the style of another.
Usage:
texturize remake TARGET [like] SOURCE
Examples:
texturize remake samples/grass1.webp like samples/grass2.webp
texturize remake samples/gravel1.png like samples/gravel2.png --weight 0.5
from texturize import api, commands, io
# The input could be any PIL Image in RGB mode.
target = io.load_image_from_file("examples/dirt1.webp")
source = io.load_image_from_file("examples/dirt2.webp")
# Only process one octave to retain photo-realistic output.
remake = commands.Remake(target, source)
for result in api.process_octaves(remake, size=(512,512), octaves=1):
    pass
# The output can be saved in any PIL-supported format.
result.images[0].save("output.png")
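Because the commands are plain Python objects, they are easy to script; for example, remaking several photos in the style of a single reference. The extra file name below is hypothetical, and the calls are the same ones shown above.

source = io.load_image_from_file("examples/dirt2.webp")
for filename in ["examples/dirt1.webp", "examples/dirt3.webp"]:  # dirt3 is hypothetical
    target = io.load_image_from_file(filename)
    remake = commands.Remake(target, source)
    for result in api.process_octaves(remake, size=(512,512), octaves=1):
        pass
    result.images[0].save(filename.rsplit("/", 1)[-1].replace(".webp", "_remake.png"))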
Combine multiple textures together into one output.
Usage:
texturize mashup SOURCE...
Examples:
texturize mashup samples/grass1.webp samples/grass2.webp
texturize mashup samples/gravel1.png samples/gravel2.png
from texturize import api, commands, io
# The input could be any PIL Image in RGB mode.
sources = [
    io.load_image_from_file("examples/dirt1.webp"),
    io.load_image_from_file("examples/dirt2.webp"),
]
# Coarse-to-fine synthesis blends the sources one octave at a time.
mashup = commands.Mashup(sources)
for result in api.process_octaves(mashup, size=(512,512), octaves=5):
    pass
# The output can be saved in any PIL-supported format.
result.images[0].save("output.png")
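Since Mashup takes a list, you can blend more than two textures by passing additional sources; the third file below is hypothetical.

sources = [
    io.load_image_from_file("examples/dirt1.webp"),
    io.load_image_from_file("examples/dirt2.webp"),
    io.load_image_from_file("examples/dirt3.webp"),  # hypothetical extra source
]
mashup = commands.Mashup(sources)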
Increase the resolution or quality of a texture using another as an example.
Usage:
texturize enhance TARGET [with] SOURCE --zoom=ZOOM
Examples:
texturize enhance samples/grass1.webp with samples/grass2.webp --zoom=2
texturize enhance samples/gravel1.png with samples/gravel2.png --zoom=4
from texturize import api, commands, io
# The input could be any PIL Image in RGB mode.
target = io.load_image_from_file("examples/dirt1.webp")
source = io.load_image_from_file("examples/dirt2.webp")
# Process only a couple of octaves to retain photo-realistic output.
enhance = commands.Enhance(target, source, zoom=2)
for result in api.process_octaves(enhance, size=(512,512), octaves=2):
    pass
# The output can be saved in any PIL-supported format.
result.images[0].save("output.png")
For details about the command-line usage of the tool, see the tool itself:
texturize --help
Here are the command-line options currently available, which apply to most of the commands above:
Options:
  SOURCE                  Path to source image to use as texture.
  -s WxH, --size=WxH      Output resolution as WIDTHxHEIGHT. [default: 640x480]
  -o FILE, --output=FILE  Filename for saving the result, includes format variables.
                          [default: {command}_{source}{variation}.png]

  --weights=WEIGHTS       Comma-separated list of blend weights. [default: 1.0]
  --zoom=ZOOM             Integer zoom factor for enhancing. [default: 2]

  --variations=V          Number of images to generate at same time. [default: 1]
  --seed=SEED             Configure the random number generation.
  --mode=MODE             Either "patch" or "gram" to manually specify critics.
  --octaves=O             Number of octaves to process. Defaults to 5 for 512x512, or
                          4 for 256x256 equivalent pixel count.
  --quality=Q             Quality for optimization, higher is better. [default: 5]
  --device=DEVICE         Hardware to use, either "cpu" or "cuda".
  --precision=PRECISION   Floating-point format to use, "float16" or "float32".
  --quiet                 Suppress any messages going to stdout.
  --verbose               Display more information on stdout.
  -h, --help              Show this message.
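For example, several of these options can be combined in a single invocation; the values below are purely illustrative:

texturize remix samples/gravel.png --size=512x512 --variations=2 --seed=42 --quality=8 --device=cuda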
We suggest using Miniconda 3.x to manage your Python environments. Once the conda
command-line tool is installed on your machine, there are setup scripts you can
download directly from the repository:
# a) Use this if you have an *Nvidia GPU only*.
curl -s https://raw.githubusercontent.com/texturedesign/texturize/master/tasks/setup-cuda.yml -o setup.yml
# b) Fallback if you just want to run on CPU.
curl -s https://raw.githubusercontent.com/texturedesign/texturize/master/tasks/setup-cpu.yml -o setup.yml
Now you can create a fresh Conda environment for texture synthesis:
conda env create -n myenv -f setup.yml
conda activate myenv
NOTE: Any version of CUDA is suitable to run texturize as long as PyTorch is
working. See the official PyTorch installation guide for alternative ways to
install the PyTorch library.
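If you created the CUDA environment, a quick way to confirm that PyTorch can actually see your GPU is the standard check below (plain PyTorch, not part of texturize):

import torch
print(torch.cuda.is_available())  # True means the GPU setup is working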
Then, you can fetch the latest version of the library from the Python Package Index (PyPI) using the following command:
pip install texturize
Finally, you can check if everything worked by calling the command-line script:
texturize --help
You can use conda env remove -n myenv
to delete the virtual environment once you
are done.
If you're a developer and want to install the library locally, start by cloning the repository to your local disk:
git clone https://github.com/texturedesign/texturize.git
We also recommend using Miniconda 3.x
for development. You can set up a new virtual environment called myenv
by running
the following commands, depending on whether you want to run on CPU or GPU (via CUDA).
For advanced setups like specifying which CUDA version to use, see the official
PyTorch installation guide.
cd texturize
# a) Use this if you have an *Nvidia GPU only*.
conda env create -n myenv -f tasks/setup-cuda.yml
# b) Fallback if you just want to run on CPU.
conda env create -n myenv -f tasks/setup-cpu.yml
Once the virtual environment is created, you can activate it and finish the setup of
texturize
with these commands:
conda activate myenv
poetry install
Finally, you can check if everything worked by calling the script:
texturize --help
Use conda env remove -n myenv
to remove the virtual environment once you are done.