This is a repository of my various experiments in using generative machine learning with Unity3D.
Many thanks to Keijiro Takahashi for his open-source Pix2Pix implementation for Unity. My long-term goal is to find a way to share GPU memory between Unity and Python, enabling zero-copy use of machine learning algorithms without having to either reimplement them as compute shaders (slower than cuDNN) or copy data through the CPU (which needs 4 CPU<->GPU syncs). I have a prototype TensorRT integration for Unity: https://github.com/aman-tiwari/Unity-TensorRT. Hopefully once it is ready it can speed up these experiments.
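The four CPU<->GPU syncs mentioned above can be sketched as follows. All names here (`FakeGPUBuffer`, `run_frame_via_cpu`) are hypothetical stand-ins, not Unity or CUDA APIs; on the Unity side a real readback would go through something like `Texture2D.ReadPixels` or `AsyncGPUReadback`. The sketch just models each PCIe copy so the cost of routing a frame through main memory is explicit:

```python
# Hypothetical sketch of the CPU round-trip data path that zero-copy GPU
# memory sharing would eliminate. FakeGPUBuffer is a stand-in that simply
# counts GPU<->CPU transfers; it is not a real Unity or CUDA object.

class FakeGPUBuffer:
    """Models a chunk of GPU memory; upload/download model the PCIe copies."""
    transfers = 0  # class-wide tally of all GPU<->CPU copies

    def __init__(self, data=b""):
        self._data = bytes(data)

    def download(self):  # GPU -> CPU sync
        FakeGPUBuffer.transfers += 1
        return self._data

    def upload(self, data):  # CPU -> GPU sync
        FakeGPUBuffer.transfers += 1
        self._data = bytes(data)


def run_frame_via_cpu(render_target, model_input, model_output, display):
    """One inference frame routed through main memory: four syncs total."""
    frame = render_target.download()    # 1. read the rendered frame off the GPU
    model_input.upload(frame)           # 2. copy it into the ML framework's GPU memory
    stylized = model_output.download()  # 3. read the network's output back to the CPU
    display.upload(stylized)            # 4. copy the result into Unity's texture
    return FakeGPUBuffer.transfers


frame_gpu = FakeGPUBuffer(b"\x00" * 16)  # pretend 16-byte rendered frame
syncs = run_frame_via_cpu(frame_gpu, FakeGPUBuffer(),
                          FakeGPUBuffer(b"\x01" * 16), FakeGPUBuffer())
print(syncs)  # 4 transfers per frame
```

With shared GPU memory (e.g. CUDA graphics interop, as a TensorRT integration could use), all four steps would stay device-side and this tally would be zero.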
- Style-transfer using ZMQ (tweet)
- Pix2Pix run on raw camera output (tweet)
- Pix2Pix run on raw camera output, clipped to object boundaries (tweet)
- Triplanar shading with textures generated by pix2pix (tweet)
- Hexplanar shading & better tools for "authoring" the pix2pix textures
- Style-transfer without the ZMQ copy, combined with pix2pix shading.
- Lucid 3D style-transfer / 3D deep-dream research