
[Feature] Create a stash-cuda container with 26.0 HW Transcoding #4914

Open
dLCarbonX opened this issue Jun 3, 2024 · 8 comments

Comments

@dLCarbonX

For now, CUDA users seemingly need to clone the repo, run the make commands, and use a locally built Docker container. That is a reasonable expectation of a sane sysadmin in a normal production environment.

However, I propose a second Docker image/tag that is CUDA-specific, so it can be pulled directly by the lazy/Watchtower users.

Something like stash-cuda:development, stash:v0.26.0-cuda, or stash:development-cuda. Accordingly, I would also propose a docker-compose.yml for the CUDA build. Something like:

  stash:
    image: stash:v0.26.0-cuda
    container_name: stash
    hostname: stash
    restart: unless-stopped
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
      - STASH_STASH=/data/
      - STASH_GENERATED=/generated/
      - STASH_METADATA=/metadata/
      - STASH_CACHE=/cache/
      - STASH_PORT=9999
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./stash/config:/root/.stash
      - ./videos:/data
      - ./stash/metadata:/metadata
      - ./stash/cache:/cache
      - ./stash/generated:/generated

It may also be worth linking to the NVIDIA Container Toolkit.
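For reference, the host-side setup the compose file above depends on is roughly this (a sketch of the steps from NVIDIA's Container Toolkit docs, assuming a Debian/Ubuntu host with the NVIDIA apt repository already configured; adjust for your distro, and note this requires an NVIDIA GPU and driver on the host):

```
# Install the toolkit and wire it into Docker's runtime config
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: should print the same nvidia-smi table you see on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```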

PS: If I am just missing something that already provides this, I would love to be pointed that way. I am also not sure how much of the NVIDIA configuration is actually needed in the compose file.

@WithoutPants
Collaborator

The docker compose stuff can probably be added into the existing docker-compose file; commented out so that the user can adjust as needed.

I agree with the necessity of additional images. I'm not super confident with the docker side of things, so it's probably not something I can do myself, but I can assist with integration with the builds.

@DogmaDragon
Collaborator

Related #4300

@dLCarbonX
Author

dLCarbonX commented Jun 3, 2024

I'm not sure how many users on consumer-grade cards would need the patch for extra concurrent transcodes pointed out to them, but that would be here if we wanted to add it as part of the Dockerfile (if it isn't already).

The Dockerfile-Cuda worked a charm for me with the latest release. I would assume all people really need is a Docker tag with CUDA support, in addition to the docker-compose.yml. I assume this is already mostly done, being part of the CI folders.

Note: the top-level version key in compose files has been deprecated as well. I made this if you all want to use it:

# APPNICENAME=Stash
# APPDESCRIPTION=An organizer for your porn, written in Go
services:
  stash:
    image: stashapp/stash:latest
    ## For the cuda container, use the below image
    # image: stashapp/stash:latest-cuda
    container_name: stash
    restart: unless-stopped
    ## the container's port must match the STASH_PORT in the environment section
    ports:
      - "9999:9999"
    ## Uncomment the below to enable the nvidia runtime. Requires the Nvidia Container Runtime
    ## https://github.com/NVIDIA/nvidia-container-toolkit

    # runtime: nvidia
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - capabilities: [gpu]
    
    ## If you intend to use stash's DLNA functionality uncomment the below network mode and comment out the above ports section
    # network_mode: host
    logging:
      driver: "json-file"
      options:
        max-file: "10"
        max-size: "2m"
    environment:
      - STASH_STASH=/data/
      - STASH_GENERATED=/generated/
      - STASH_METADATA=/metadata/
      - STASH_CACHE=/cache/
      ## Adjust below to change default port (9999)
      - STASH_PORT=9999
      ## Uncomment to enable GPU support by exposing features to the container
      # - NVIDIA_VISIBLE_DEVICES=all
      # - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    volumes:
      - /etc/localtime:/etc/localtime:ro
      ## Adjust below paths (the left part) to your liking.
      ## E.g. you can change ./config:/root/.stash to ./stash:/root/.stash
      
      ## Keep configs, scrapers, and plugins here.
      - ./config:/root/.stash
      ## Point this at your collection.
      - ./data:/data
      ## This is where your stash's metadata lives
      - ./metadata:/metadata
      ## Any other cache content.
      - ./cache:/cache
      ## Where to store binary blob data (scene covers, images)
      - ./blobs:/blobs
      ## Where to store generated content (screenshots,previews,transcodes,sprites)
      - ./generated:/generated

@ipedrazas
Contributor

Usually, the way we handle GPUs in compose is by using profiles (https://docs.docker.com/compose/profiles/). Let me see if I can get my hands on the hardware needed to test this.
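For anyone unfamiliar with profiles, a profile-gated GPU service might look something like this (a minimal sketch, not tested; the `gpu` profile name and the `stashapp/stash:latest-cuda` tag are hypothetical, as proposed earlier in this issue):

```yaml
services:
  stash:
    image: stashapp/stash:latest

  stash-cuda:
    image: stashapp/stash:latest-cuda   # hypothetical CUDA tag from this issue
    profiles: [gpu]                     # only started when the profile is requested
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
```

Then `docker compose --profile gpu up` starts the CUDA variant, while a plain `docker compose up` skips it entirely.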

@dLCarbonX
Author

Interesting way to go about it; I'd never thought of that. I basically just copied the same config I'd use for Torch/Plex.

@feederbox826
Contributor

I'm working towards "universal" hwaccel images in #4300, as mentioned previously. The CUDA image is a whopping 10 GB last I checked, and we don't need the CUDA runtime and development libraries just for hwaccel.

@zoidbergwoop

Ahoy there,

The great thing about a single binary is that it's super easy to run, so it'll be easy to just use a container.

I'm using the nvidia/cuda:12.4.1-base-ubuntu22.04 container, which is 244 MB.

I like this approach: it's the "official" NVIDIA base image, so people can go argue with NVIDIA. Plus it's just running a binary 🤷, and maintaining a custom container is tedious and a time vampire, IMHO.

Here's what I'm doing right now to run this in Docker with hwaccel (I was hoping to expand it a little to get acceleration working elsewhere too, like generate, but I digress).

This works for NVIDIA, so I've removed the Intel and other vendor-specific pieces, but it's easy to adapt since it's borrowed directly from the project's Dockerfile.

FROM nvidia/cuda:12.4.1-base-ubuntu22.04
RUN apt update && apt upgrade -y && apt install -y ca-certificates libvips-tools ffmpeg wget \
    && rm -rf /var/lib/apt/lists/*
COPY ./stash-linux /usr/bin/stash

RUN mkdir -p /usr/local/bin /patched-lib
RUN wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh -O /usr/local/bin/patch.sh
RUN wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/docker-entrypoint.sh -O /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/patch.sh /usr/local/bin/docker-entrypoint.sh /usr/bin/stash

ENV LANG=C.UTF-8
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=video,utility
ENV STASH_CONFIG_FILE=/root/.stash/config.yml
EXPOSE 9999
ENTRYPOINT ["docker-entrypoint.sh", "stash"]

And here's my docker-compose.yml; for the sake of brevity I've cut out all the other keys. It's based off the docker-compose.yml from the repo:

services:
  stash:
    build:
      context: .
      dockerfile: Dockerfile
    # this tells Docker you have a GPU; without this, it doesn't work.
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]

Also, a tip for the jankiness of getting NVIDIA and Docker to cooperate: you can verify that Docker is working correctly with NVIDIA by using this docker-compose.yml:

services:
  nvidiatest:
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

Then when you run docker-compose up, it'll show the output of nvidia-smi, which is super handy given what a mess this can be to get working.

$ docker-compose up
[+] Running 1/0
 ✔ Container nvidia-test-1  Created                                                                                                                                                                              0.0s
Attaching to test-1
test-1  | Mon Jun  3 10:54:46 2024
test-1  | +-----------------------------------------------------------------------------------------+
test-1  | | NVIDIA-SMI 555.42.02              Driver Version: 555.42.02      CUDA Version: 12.5     |
test-1  | |-----------------------------------------+------------------------+----------------------+

@NodudeWasTaken
Contributor

I did attempt this once; anyone willing can try again :P
#4091
