Many game console emulators allow users to extract and replace textures, while decompiled games also provide opportunities to upscale and clean up their visuals. However, the legal landscape surrounding these texture modifications is complex. Players are generally free to tinker with textures they've extracted from games they own. But sharing those enhanced assets publicly would likely infringe on the original copyright holders' intellectual property rights.
This is where the TexturePatch tool comes into play. It enables artists/enhancers/modders to create publicly shareable "texture patches." These patches can then be applied by players to their own extracted game textures, allowing them to enjoy improved visuals without the risk of copyright infringement.
Warning
While TexturePatch has progressed beyond a proof-of-concept tool on the technical side, its methods still require critical review by others. Currently, we cannot guarantee that patches are a safe format to upload, nor that uploading them is legal. We can only say that it is very likely safe and probably legal, unlike simply uploading modified versions of the original textures as they are.
Note
Whether your testing is successful or not, feel free to open an issue and let me know!
The idea of this tool is to share the updating values, never the original or final values of an image. Let's say a gray-scale image looks like this, where each value represents a pixel's luminance:
And someone increased its saturation as follows:
What this tool will do (heavily simplified) is calculate its difference, and store it in a new image.
If we'd like to recreate our own modified image, we can simply calculate it as follows:
This way the modified image never has to be published (only the patch), reducing the risk at copyright infringement. We've made a few simplifications in this example though.
- Image dimensions may vary (e.g., $25\times25$ for the original and $300\times300$ for the modified one). Different dimensions are supported by the tool. In order to create a difference (patch), the eventual sizes must be the same, so the original image is resized to the modified image's dimensions using a standard method (cubic interpolation).
- The difference of two 8-bit images can only be faithfully stored using 9 bits (from $0-255=-255$ up to $255-0=255$), whereas the patch itself will be an 8-bit image. The tool therefore adds sign descriptors at the bottom of the image and stores the absolute difference. (A minimal sketch of this simplified scheme follows below.)
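To make the simplified concept concrete, here is a minimal NumPy/OpenCV sketch of the idea. It is not the tool's actual implementation: the noise and filters discussed below are ignored, and the signs are kept as a separate mask instead of being stored as extra rows at the bottom of the patch.

```python
import cv2
import numpy as np

def create_patch(original, modified):
    """Resize the original to the modified dimensions and store |difference| plus signs."""
    h, w = modified.shape[:2]
    resized = cv2.resize(original, (w, h), interpolation=cv2.INTER_CUBIC)
    diff = modified.astype(np.int16) - resized.astype(np.int16)  # 9-bit range [-255, 255]
    return np.abs(diff).astype(np.uint8), diff < 0                # 8-bit patch + sign mask

def apply_patch(original, patch, signs):
    """Recreate the modified image from the original and the patch."""
    h, w = patch.shape[:2]
    resized = cv2.resize(original, (w, h), interpolation=cv2.INTER_CUBIC)
    diff = patch.astype(np.int16)
    diff[signs] *= -1
    return np.clip(resized + diff, 0, 255).astype(np.uint8)
```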
Given that the formula in the concept is only a difference, one could simply reverse-calculate the original without ever requiring legal access to it.
To prevent this, noise derived from the original image (the seed) is added to the patch when it is created. Similarly, when the patch is applied, the same noise is derived from the original and subtracted again.
To correctly reverse the image, it is now necessary to have the original image (or just the seed), which defeats the purpose of reversing it. The noise has a sufficiently high variance (currently 96 luminance levels) that the reversed luminances are too hard to correct, leaving the result unusable as a texture. Moreover, the size information of the original texture is not added to the patch. This approach has the advantage that no key has to be passed around to share patches: the seeds are extracted from the images individually.
Again, simplifications are made here. The range is no longer 9-bit (which would maximally represent $(-256, 255)$), since the added noise can overflow it.
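As a rough illustration of this protection step (not the tool's exact algorithm; the seed extraction below is hypothetical), noise derived from the original image is added when the patch is created and subtracted again when it is applied, so that reversing without the original only yields noise:

```python
import numpy as np

def seeded_noise(original, shape, levels=96):
    """Deterministic noise derived from the original image (illustrative seed extraction)."""
    seed = int(original.sum()) % (2**32)   # hypothetical: any value derived from the original
    rng = np.random.default_rng(seed)
    return rng.integers(0, levels, size=shape, dtype=np.int16)

# create:  stored = diff + seeded_noise(original, diff.shape)
# apply:   diff   = stored - seeded_noise(original, stored.shape)
# reverse (without the original): assume 0 for every noise value -> a noisy, unusable texture
```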
Conceptually speaking, the difference between two similar images should be small, and thus the file size of the patch should also be small. However, when an image consists of random pixel values (i.e., "noise"), each of these pixels must be described individually, and thus the maximum achievable compression ratio is lower (a basic concept of image compression). Since this is somewhat the case for a protected patch, its file size will be larger (even if the sign-descriptor rows were discarded).
More importantly, the differences between the original and modified image are largest where there is a lot of local correction. These are often the sharp edges of a logo or of drawn objects. Such sharp edges appear in texture packs more often than you'd expect. With Photoshop-like tools, these edges can probably be extracted and used to repaint a new texture. Hence the need arises to further obscure the visual space. On the one hand, it is nice to recognize the images, especially for debugging; on the other hand, it shouldn't be too easy to photoshop them.
Simple pattern swaps (e.g., rotating pixels on the black tiles of a chess board) that don't require a seed will already make it hard for most people to do anything practical with the patch or a reversed image, but these patterns are also easy to revert with scripting knowledge, since the algorithm can be inspected. (It definitely doesn't help that we provide filtering and filter-reversing tools ourselves!) Patterns can also incorporate noise to determine swap and/or shift positions, which is what we will end up doing.
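As a minimal, hypothetical example of such a pattern, a roll-style filter can shift every row by an offset derived from the seed; with the seed it is trivial to undo, and without it the edges are smeared across the image:

```python
import numpy as np

def roll_rows(image, seed, invert=False):
    """Shift each row horizontally by a seed-derived offset (illustrative roll-h-like filter)."""
    rng = np.random.default_rng(seed)
    shifts = rng.integers(0, image.shape[1], size=image.shape[0])
    out = image.copy()
    for y, shift in enumerate(shifts):
        out[y] = np.roll(image[y], -int(shift) if invert else int(shift), axis=0)
    return out
```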
The script can only be executed with Python installed. It requires both OpenCV and NumPy on top of a default installation.
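Assuming the standard PyPI package names, the dependencies can typically be installed with:

pip install opencv-python numpy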
Help is available for each command by running `main.py --help`, `main.py create --help`, and so on. Below, we give example commands for crate-brown-wood.jpg. (Be careful: the paths in the examples mix png and jpg extensions!)
Running the following command will create a patch texture crate-brown-wood-patch.png given the paths to the original and modified textures. This method currently also works recursively on directories.
python main.py create ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patch.png
Filters can be passed to further obfuscate the patch's shape. Some of them require the original image as a seed, which will automatically be provided. For patches filtered using the seed, the original images are also required to deobfuscate them before applying them. For now, seeded filters are omitted from the demo on reversing.
python main.py create ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patch.png --filters roll-h roll-v roll-v
Running the following command will apply the patch to the original texture and create crate-brown-wood-patched.png. This method currently also works recursively on directories.
python main.py apply ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-patch.png ./demo/crate-brown-wood-patched.png
And just like before, you can pass filters to deobfuscate the patches. The filters are removed in the reverse of the order passed on the command line, and each filter is applied inverted.
python main.py apply ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-patch.png ./demo/crate-brown-wood-patched.png --filters roll-h roll-v roll-v
A patch creator can ensure their patches will apply well -- match exactly -- by running the following command, which supports directories. For two images, it will print the difference values (min, max); the specific command below will print (0, 0), since a pixel-wise comparison between two identical images is always 0.
python main.py diff ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patched.png
Adding a third path will always generate a difference image, in which completely white represents no change, blue represents a luminance decrease, and red represents a luminance increase. Since this is a one-dimensional view of 3 channels, all channels (R, G, B) are summed up for the comparison. The following command will additionally create a difference image crate-brown-wood-patch-{firstname}-{secondname}.png. This auto-naming behavior will change in the future.
python main.py diff ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patched.png ./demo/crate-brown-wood-difference-modified-patched.png
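For reference, here is a minimal sketch of how such a white/blue/red visualization could be produced (the tool's exact mapping may differ; the inputs are assumed to be same-sized BGR arrays as loaded by OpenCV):

```python
import numpy as np

def visualize_diff(a, b):
    """White = no change, red = summed-luminance increase, blue = decrease."""
    delta = b.astype(np.int32)[..., :3].sum(axis=2) - a.astype(np.int32)[..., :3].sum(axis=2)
    out = np.full((*delta.shape, 3), 255, dtype=np.uint8)   # start fully white (BGR)
    inc = np.clip(delta, 0, 255).astype(np.uint8)            # where luminance increased
    dec = np.clip(-delta, 0, 255).astype(np.uint8)           # where luminance decreased
    out[..., 0] -= inc   # fade blue and green where it increased -> red
    out[..., 1] -= inc
    out[..., 1] -= dec   # fade green and red where it decreased -> blue
    out[..., 2] -= dec
    return out
```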
Finally, to see whether the noise is large enough, the following command will create a reversed image crate-brown-wood-reversed.png using 0 for each noise value -- since the original is presumed not to be accessible and thus unknown.
python main.py reverse ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patch.png ./demo/crate-brown-wood-reversed.png
You can pass this image to `diff` to compare it with the "original" modified image.
To run all these commands for just two images, the following command will create all these textures at the location of the modified texture.
python main.py test ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-modified.png
It is equivalent to running the following (although some names will get a version number for now).
python main.py create ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patch.png
python main.py apply ./demo/crate-brown-wood.jpg ./demo/crate-brown-wood-patch.png ./demo/crate-brown-wood-patched.png
python main.py diff ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patched.png ./demo/crate-brown-wood-difference-modified-patched.png
python main.py reverse ./demo/crate-brown-wood-modified.png ./demo/crate-brown-wood-patch.png ./demo/crate-brown-wood-reversed.png
python main.py diff ./demo/crate-brown-wood-reversed.png ./demo/crate-brown-wood-patched.png ./demo/crate-brown-wood-difference-reversed-patched.png
To preview the effectiveness of filters, one can apply them to a certain image. Some filters require a seed (image) to invert them, in which case `--seed` must be provided both when applying and when removing a filter. Providing no seed will use the first image as the seed.
python main.py test-filter ./demo/logo-patch.png ./demo/logo-patch-filtered.png filter1 filter2 filter3
To remove the filters, use the same list and pass `--inverted`. Another way to achieve the same result is to prepend the names with `i` and reverse the order of the filters. Don't forget to pass the same `--seed ./demo/logo-patch.png`, omitted below!
python main.py test-filter ./demo/logo-patch-filtered.png ./demo/logo-patch-inverted.png --inverted filter1 filter2 filter3 # automatically converted to the second command below
python main.py test-filter ./demo/logo-patch-filtered.png ./demo/logo-patch-inverted.png ifilter3 ifilter2 ifilter1 # exactly the same
Finally, one can execute an arbitrary command-line program on the images as an optional pre/post-processing step. This may be useful to automatically upscale/enhance/compress/... each image in a certain directory, given input ([:original:]) and output ([:processed:]) placeholders. As an example, the command below will pass the images in ./textures one by one as [:original:] to cp, and the resulting images will be stored in ./textures-copied one by one as [:processed:].
python main.py process "cp [:original:] [:processed:]" ./textures ./textures-copied # on a directory
# if ./textures contains only two images (foo.png and bar.png), the following is the same
cp ./textures/foo.png ./textures-copied/foo.png
cp ./textures/bar.png ./textures-copied/bar.png
An individual texture can be processed as well, but there is little point, as the command can then be written directly in the terminal without placeholders.
python main.py process "cp [:original:] [:processed:]" ./textures/foo.png ./textures/foo-copy.png # on a file
cp ./textures/foo.png ./textures/foo-copy.png # the same
The default placeholders can be overridden, if necessary, using --input-placeholder and --output-placeholder. Make sure the custom placeholders do not occur elsewhere in the template, because all occurrences will be filled in!
# example placeholders
python main.py process "cp iiii ((o))" ./textures/foo.png ./textures/foo-copy.png --input-placeholder iiii --output-placeholder "((o))"
cp ./textures/foo.png ./textures/foo-copy.png # the same
# BAD placeholder(s)
python main.py process "cp o p" ./textures/foo.png ./textures/foo-copy.png --input-placeholder "o" --output-placeholder "c"
c./textures/foo-copy.png ./textures/foo.png ./textures/foo-copy.png # the same
This is also useful for end users who wish to further compress the images without data loss, given that OpenCV appears to increase the size of the image. Below, we demonstrate that an image loaded and written back unmodified gets a larger file size.
>>> import cv2
>>> image = cv2.imread("./demo/logo.png", cv2.IMREAD_UNCHANGED) # 1.16 MB
>>> cv2.imwrite("./demo/logo-written.png", image) # suddenly 1.31 MB
True
Several tools exist for lossless (or lossy) compression; some even have Python-wrapped libraries. Some tools achieve a higher compression ratio at the expense of time, others prioritize speed. However, some players might not even want to spend this additional time on files that will look the same anyway. For this reason, we leave it up to the player to decide what additional tools should be run on the images. Below are a few tools, in no particular order, some of which are lossy.
We provide an example for pngcrush, but generally, to know where to put the placeholders, just execute your-compressor; the first lines will usually tell you where the "input" (or "input image") and the "output" are expected. For pngcrush, the basic command looks like pngcrush original.png processed.png. We can execute this using our tool, or we can inspect the --help output / look online for the options that best suit your needs, which may be prioritizing the smallest size.
# default options for pngcrush
python main.py process "pngcrush [:original:] [:processed:]" ./demo/logo.png ./demo/logo.png-compressed-default
# options for pngcrush to run all 114 algorithms and pick best result (much slower!!)
python main.py process "pngcrush -rem allb -brute -reduce [:original:] [:processed:]" ./demo/logo.png ./demo/logo.png-compressed-brute`
With the default options, our logo was compressed in 17s to a size of 1.13MB. With the brute-force method, it took 2m54s to reduce it by only 1KB more (which is not worth the time and CPU wear). This doesn't mean, of course, that other tools are only competing over a few bytes: Zopfli should give an additional reduction of about 6% over gzip (which pngcrush uses), although no executable was readily available to test it.
For those wondering: as of now, you can't run the tool's `create` or `apply` through `process`, as `process` works on a predefined number (2) of paths, and those commands require three paths.
To demonstrate the results, we'll show two images that are patched and lastly demonstrate filters for further obfuscation.
We took a publicly available texture and a modified version of it.
$Original$ | $Modified$ | $Patch$ | $Patched$ | $Reversed$ | $Diff(M,P)$ | $Diff(M,R)$ |
---|---|---|---|---|---|---|
Notice that the original is a jpg, but the modified is a png. This is why the modified texture can still be faithfully recreated with the patch, unlike the next texture, which has a modified jpg texture.
We took a publicly available texture whose modified version is a jpg; because jpg is lossy, the diff prints (-19, 19) for the difference between patched and modified.
$Original$ | $Modified$ | $Patch$ | $Patched$ | $Reversed$ | $Diff(M,P)$ | $Diff(M,R)$ |
---|---|---|---|---|---|---|
For this test, we took the logo from the Jak and Daxter wiki and downscaled it using cv2.resize -- it looked too good already! (In all seriousness, old games probably have small textures, whereas nowadays bigger is better™, so we should test large upscales starting from small images.) Then we created a patch.
Now, we apply filters to this patch separately. One can exactly reverse this image. Notice how only the outline is visible: this is simply because the pixels in between have the same alpha channel, so they cancel out when subtracted from each other. If they were not similar, information would be visible. For demo purposes, we divided the alpha channel by two and added roll-h roll-v. Since the filters can be reversed given the seed, they of course leave 0 differences (white images).
patched | alpha-shifted | filtered (roll) | inversed | difference |
---|---|---|---|---|
We did not bother to additionally demonstrate a filter with --seed here; if it is reversed with a different seed than the one previously used, the result will also look off.
In its current state, the following packs have been created and applied successfully to textures on Windows, of which the last was cross-platform tested with Linux. (None of these tests included filters.)
- Mysterious Dash HD Textures
- Jak1 HD UI textures
- Jak1 ESRGAN Edition v1.0.1
- Snowover_Release_v0.1.4
- Mountain_Ash_Release_v0.1.4
- Meaty_Swamp_Release_v0.1.4
- JAK2_hd_hud
- Jak2-HD-Textures-For-OpenGOAL-main
So far, the created patches don't seem to have issues when crossing platforms. For the last (large) pack, one patch was created using Windows and successfully applied using Linux; another patch was created using Linux and successfully applied using Windows. There is still an error on large white images that causes a buffer overflow on older versions of numpy, which has yet to be addressed.
The tool has a minimal CLI that allows recursively creating patches for all PNGs and recursively applying patches. Given that the concept has been implemented for the most part and is practically useful, there isn't much left planned. Here are a few things for the near future.
- Fix an outdated assertion that fails when the header row is bigger than the image size.
- Fix buffer overflow bug on bright images.
- Test whether issues arise when applying patches created on one platform (Windows) on images on another platform (Linux)
- Support directories for `diff`
- Support directories for `reverse`, `test`, `test-filter`
- Support `--filters` in `create`, `apply`
- Check if paths exist instead of crashing
- Prevent duplicate path arguments where it's probably unintended
- Add an `--overwrite` option and do not overwrite by default
- Allow for the definition of a generic `process` command, so people can decide themselves what to do additionally (e.g., run a certain compression tool).
- Also provide an example `process` command for one of the compression tools.
- Explanation, summary (#files, list of directories) and a confirm-continue prompt, `-y`/`--yes`.
- Set up a process to automatically produce an exe that can run on Windows, Linux, Mac.
Other things I'd like to include, but won't actively plan on and may not do.
- Have the concept/tool get assessed/reviewed by others with any level of expertise (legal, image processing, image compression, decompiling, ...)
- Read default options from a json settings file in the current directory if it exists, or pass `--options/default path`, which can be overridden by CLI arguments.
- Investigate global keys to plug into image seed extraction, by default 0.
- Investigate patch-to-patch updates.
  - Update only the textures (or the patch) that were updated.
  - This should also require noise, depending on what the update is.
  - It doesn't make sense if you'd have to chain multiple `apply`s instead of doing it once.
- Investigate whether some automatic level of noise detection is needed per image, to reduce unnecessary noise size for patches with fewer edges.
- Investigate global keys, which can be suggested upon creation with some random number generator.
- Try out more packs created for other games than those in the Jak and Daxter series or supported by the OpenGOAL project.
- Can't someone publish these keys and a modified reversing algorithm that extracts the original images from just the patch and the modified textures? Sure, but in any case, they could just as well upload the original or modified textures, which is much less of a hassle. What can be done to further protect the images is to have someone define a few numbers globally that can be plugged into the key extraction process; this way, authors can still remove their patch (pack) when they become aware of keys being shared, then recreate and reupload the pack with different global keys. In any case, just like people uploading original/modified textures, doing so is the intent of bad actors, and they cannot be stopped.
- Are 16-bit png images supported? Yes, but no. The patching works, but it is not yet meant to be used. The original image has to be scaled (which is currently not the case!), otherwise there will be very little visible difference ($0-255$ compared to $0-65535$), and the patch could simply be downsampled, yielding a very good approximation of the original.
- Are jpg images supported? JPG is a lossy format (that I personally wouldn't expect for games), so creating a patch for them that results in the exact same jpg is unlikely. The tool creates a png patch, but upon storing the eventual patched image as a jpg, some of the data gets lost. This would/should also be the case when simply reading and directly re-writing the image -- yet to be tried (a quick check is sketched after this list). These differences are hardly visible to the eye.
- What are these iccp warnings? I'm not sure exactly (some corruption of the modified image), but the goal is to exactly recreate the image. If the modified image is corrupted before it is patched, then the eventual patched image will have the corruption too.
- Is this method safe? For now, I'd say yes. However, I strongly suspect this hashing is not quantum-computing safe, like most contemporary encryption algorithms. Since "only" $2^{32}-1$ keys exist, one could, for each image, just try them all with a quantum computer and determine which matches the modified image best using some algorithm. But then again, that takes quite a lot of effort. This should only be a concern for the game company that 1) may want to publish patches (which is unlikely, because why not provide an update instead), or 2) allows artists to publish patches of publicly unavailable textures (which in itself is already extremely unlikely). All the other people whose goal is to find the original textures will, sadly, waste far less time by just searching for them online.
- Can't the shapes of patches be used to recreate a texture? I don't know, but let's say no. I have almost no Photoshop skills. The craziest thing I could think of is people aligning screenshots of gameplay with shapes in patches; that way they would have a name and eventually a texture they can modify or use as-is, but as usual, what would be the point? In case the textures hadn't been published before, they would presumably have to spend a lot of time figuring out which texture it is. I welcome anyone to attempt these extreme scenarios.
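Regarding the "yet to be tried" remark on jpgs above, a quick way to check how much a plain read/write round-trip changes a jpg could look like this (illustrative only; the file names are placeholders):

```python
import cv2
import numpy as np

image = cv2.imread("./demo/some-texture.jpg", cv2.IMREAD_UNCHANGED)
cv2.imwrite("./demo/some-texture-rewritten.jpg", image)
rewritten = cv2.imread("./demo/some-texture-rewritten.jpg", cv2.IMREAD_UNCHANGED)

delta = rewritten.astype(np.int16) - image.astype(np.int16)
print(delta.min(), delta.max())  # non-zero values indicate lossy re-encoding
```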