
Unraid / Docker #43

Closed
Norrox opened this issue Feb 4, 2019 · 94 comments

@Norrox

Norrox commented Feb 4, 2019

Hello!
Is this something we can use with UnRaid / Docker ?

@Norrox Norrox added the bug label Feb 4, 2019
@Snawoot Snawoot added question and removed bug labels Feb 4, 2019
@Snawoot Snawoot self-assigned this Feb 4, 2019
@Snawoot
Collaborator

Snawoot commented Feb 5, 2019

It's not tested, but this patch can most likely be used on any Linux system where the Nvidia driver can be installed. As for Docker, you will probably have to patch the driver on the host system.

So it's worth trying.

@niXta1
Contributor

niXta1 commented Feb 10, 2019

Can this help?
https://github.com/linuxserver/Unraid-Nvidia-Plugin

@pducharme

OK, I have the Plex Inc docker working on Unraid 6.7.0-rc3 with the Unraid-Nvidia plugin mentioned above. I also created a script to enable NVDEC in Plex since it's not yet supported (found on the Plex forum). Now I would like to patch, but I got this error:

./patch.sh
Detected nvidia driver version: 410.78
d244655474572t252462463462 /opt/nvidia/libnvidia-encode-backup/libnvidia-encode.so.410.78
./patch.sh: line 120: /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.410.78: Read-only file system

@Snawoot
Collaborator

Snawoot commented Feb 10, 2019

@pducharme It looks like patch.sh successfully made a backup before editing the library, but failed to apply the changes because the /usr filesystem is read-only. I'm not familiar with Docker, but I think if /usr actually belongs to the host system (on top of which Docker is run), you should run the patch on the host. If /usr belongs to an immutable Docker image, you have to edit the image or create an overlay.

In other words, you have to make /usr writable.

But there is always another option:

  1. Modify the script a little so it writes the patched library to some new location where creating files is possible.
  2. Use mount --bind inside the container to mount it over the corresponding file in /usr. There is probably a Docker way to make such a mount persistent, or you can use /etc/fstab.

I know this response is pretty vague, but I have no Unraid/Docker instance and can only guess.
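The two-step workaround above can be sketched roughly like this (all paths and the driver version here are illustrative assumptions, untested on Unraid):

```shell
# Hypothetical sketch of the bind-mount workaround; names are assumptions.
ver="410.78"
obj="libnvidia-encode.so.${ver}"
src="/opt/nvidia/patched/${obj}"         # writable location for the patched lib
dst="/usr/lib/x86_64-linux-gnu/${obj}"   # read-only original

# Step 1: have patch.sh (its final sed command) write to "$src" instead of "$dst".
# Step 2: overlay the patched file without touching the read-only filesystem:
echo mount --bind "$src" "$dst"   # drop 'echo' to actually perform the mount
```

To make the mount persistent across reboots, the same mount --bind line could go into /etc/fstab as a bind entry, or into a startup script.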

@niXta1
Contributor

niXta1 commented Feb 10, 2019

OK, I have the Plex Inc docker working on Unraid 6.7.0-rc3 with the Unraid-Nvidia plugin mentioned above. I also created a script to enable NVDEC in Plex since it's not yet supported (found on the Plex forum). Now I would like to patch, but I got this error ...

@pducharme did you run this command on a container or on the host?

Did you try to do it inside the nvidia-plugin container? I think it should work there.

@Norrox
Author

Norrox commented Feb 11, 2019

Just found this news! https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/

@Norrox
Author

Norrox commented Feb 11, 2019

They don't want us to ask questions on the forum that could provoke Nvidia into shutting down their driver plugin, so I guess we have to continue the discussion here :-)

@pducharme

It worked when I ran it on the host. Confirmed with 5 concurrent transcodes on my Plex server 😀

@niXta1
Contributor

niXta1 commented Feb 11, 2019

Just found this news! https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/

That's the exact same plugin, right? :)

@Snawoot Snawoot pinned this issue Feb 11, 2019
@Snawoot Snawoot unpinned this issue Feb 11, 2019
@Snawoot Snawoot pinned this issue Feb 11, 2019
@Norrox
Author

Norrox commented Feb 12, 2019

@niXta1 sorry, got a little carried away :)

@rix1337

rix1337 commented Feb 12, 2019

It worked when I ran it on the host. Confirmed with 5 concurrent transcodes on my Plex server 😀

Nice! Have you set this up in the user scripts plugin?

@pducharme

@rix1337 No, I just ran patch.sh over SSH on my Unraid host. Should I use it with the User Scripts plugin? I think it survives reboots, no?

@jimlei

jimlei commented Feb 13, 2019

Anyone know of any overviews of how many sessions are actually supported by each card? Wondering if I should use a 980 or a 1060; my box is colocated, so I don't have too much time to fiddle around with it.

@Snawoot
Collaborator

Snawoot commented Feb 13, 2019

Hi! See this link (also referenced on the top page of this repo). The 1060 has a newer NVENC chip and supports more codec formats.

@jimlei

jimlei commented Feb 13, 2019

@Snawoot I was thinking of a list of unlocked cards to see how many streams they would handle ^^ But yeah, a 1060 would probably be a good choice based on the wider support alone.

@niXta1
Contributor

niXta1 commented Feb 13, 2019

@Snawoot I was thinking of a list of unlocked cards to see how many streams they would handle ^^ But yeah, a 1060 would probably be a good choice based on the wider support alone.

@jimlei it’s hard to say unless you use the exact same video files. They handle somewhere between roughly 5 and 100 streams, I'd guess.
For H.264 1080p at 15000 kbps, around 15-30 concurrent streams depending on the card.
You can compare against a P2000: anywhere from maybe half of that to a few more. It all depends on codecs and bitrates.

@cjackson234

It worked when I ran it on the host. Confirmed with 5 concurrent transcodes on my Plex server 😀

How did you get it to install? I get

The kernel header file 'user/src/linux-4.19.20-unraid/include/linux/kernel.h' does not exist.

@Snawoot Snawoot unpinned this issue Jul 1, 2019
@usafle

usafle commented Jul 10, 2019

So there is a new version of Unraid/Nvidia out, and the patch now fails, stating that the driver is not supported. Anyone else seeing this issue?

I can't freaking remember where I dropped patch.sh on my Unraid system initially. Anyone know where the patch.sh file goes for the Nvidia drivers? I'm currently searching my entire system for "patch.sh" via the Krusader docker, but it's taking forever and a day....

@3urningChrome

@usafle Not sure where you put it, but I have it in 'Settings -> User Scripts' to run on array start.

@usafle

usafle commented Jul 10, 2019

Thanks, but I'm looking for the actual location on the Unraid OS to put the 'patch.sh' file so I can run the command:

bash ./patch.sh

I forgot where the file goes.... sorry if I wasn't clear enough earlier... it was late, I was annoyed LoL

@niXta1
Contributor

niXta1 commented Jul 10, 2019

Go to settings -> user scripts, create a new script, paste the content, and manage it from there.

@usafle

usafle commented Jul 10, 2019

I already have the user script... I need to place the patch.sh file in the same directory as the Nvidia drivers in order to patch them... that's what I'm asking: where are the Nvidia drivers located on an Unraid OS?

Or am I really not doing this correctly at all?

@niXta1
Contributor

niXta1 commented Jul 10, 2019

You can run it from anywhere.

@usafle

usafle commented Jul 10, 2019

As soon as I tried to run it from 'anywhere' on the OS, I got this:

./patch.sh: line 7: syntax error near unexpected token `newline'
./patch.sh: line 7: `'

When I did it the first time (and I wasn't paying attention to HOW I was doing it) I placed it next to the Nvidia drivers in Unraid. It worked.

@jjslegacy

You need to copy the raw file and that text only. It sounds like you have bad characters in the content.

@cannonf0dder

cannonf0dder commented Jul 10, 2019 via email

@Rooster237

Rooster237 commented Jul 10, 2019

I have 6.7.2 and all is working fine. I use two scripts: the first one unlocks the driver, and the second one is supposed to make the Plex docker use Transcoder2 so it will use the Nvidia card for encode and decode, at least that's how I understand it.

#!/bin/bash
# halt on any error for safety and proper pipe handling
set -euo pipefail ; # <- this semicolon and comment make options apply
# even when script is corrupt by CRLF line terminators (issue #75)
# empty line must follow this comment for immediate fail with CRLF newlines

backup_path="/opt/nvidia/libnvidia-encode-backup"
silent_flag=''
rollback_flag=''

print_usage() { printf '
SYNOPSIS
       patch.sh [OPTION]...
DESCRIPTION
       The patch for Nvidia drivers to increase encoder sessions
       -s    Silent mode (No output)
       -r    Rollback to original (Restore lib from backup)
       -h    Print this help message
'
}

while getopts 'rsh' flag; do
  case "${flag}" in
    r) rollback_flag='true' ;;
    s) silent_flag='true' ;;
    *) print_usage
       exit 1 ;;
  esac
done

if [[ $silent_flag ]]; then
    exec 1> /dev/null
fi

declare -A patch_list=(
    ["375.39"]='s/\x85\xC0\x89\xC5\x75\x18/\x29\xC0\x89\xC5\x90\x90/g'
    ["390.77"]='s/\x85\xC0\x89\xC5\x75\x18/\x29\xC0\x89\xC5\x90\x90/g'
    ["390.87"]='s/\x85\xC0\x89\xC5\x75\x18/\x29\xC0\x89\xC5\x90\x90/g'
    ["396.24"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["396.26"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["396.37"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g' #added info from https://github.com/keylase/nvidia-patch/issues/6#issuecomment-406895356
    # break nvenc.c:236,layout asm,step-mode,step,break *0x00007fff89f9ba45
    # libnvidia-encode.so @ 0x15a45; test->sub, jne->nop-nop-nop-nop-nop-nop
    ["396.54"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.48"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.57"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.73"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.78"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.79"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.93"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["410.104"]='s/\x85\xC0\x89\xC5\x0F\x85\x96\x00\x00\x00/\x29\xC0\x89\xC5\x90\x90\x90\x90\x90\x90/g'
    ["415.18"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x40\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["415.25"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x40\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["415.27"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x40\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["418.30"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x40\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["418.43"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x40\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["418.56"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x40\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["418.74"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x0f\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["430.09"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x0f\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
    ["430.14"]='s/\x00\x00\x00\x84\xc0\x0f\x84\x0f\xfd\xff\xff/\x00\x00\x00\x84\xc0\x90\x90\x90\x90\x90\x90/g'
)

declare -A object_list=(
    ["375.39"]='libnvidia-encode.so'
    ["390.77"]='libnvidia-encode.so'
    ["390.87"]='libnvidia-encode.so'
    ["396.24"]='libnvidia-encode.so'
    ["396.26"]='libnvidia-encode.so'
    ["396.37"]='libnvidia-encode.so'
    ["396.54"]='libnvidia-encode.so'
    ["410.48"]='libnvidia-encode.so'
    ["410.57"]='libnvidia-encode.so'
    ["410.73"]='libnvidia-encode.so'
    ["410.78"]='libnvidia-encode.so'
    ["410.79"]='libnvidia-encode.so'
    ["410.93"]='libnvidia-encode.so'
    ["410.104"]='libnvidia-encode.so'
    ["415.18"]='libnvcuvid.so'
    ["415.25"]='libnvcuvid.so'
    ["415.27"]='libnvcuvid.so'
    ["418.30"]='libnvcuvid.so'
    ["418.43"]='libnvcuvid.so'
    ["418.56"]='libnvcuvid.so'
    ["418.74"]='libnvcuvid.so'
    ["430.09"]='libnvcuvid.so'
    ["430.14"]='libnvcuvid.so'
)

NVIDIA_SMI="$(which nvidia-smi)"

if ! driver_version=$("$NVIDIA_SMI" --query-gpu=driver_version --format=csv,noheader,nounits | head -n 1) ; then
    echo 'Something went wrong. Check nvidia driver'
    exit 1;
fi

echo "Detected nvidia driver version: $driver_version"

if [[ ! "${patch_list[$driver_version]+isset}" || ! "${object_list[$driver_version]+isset}" ]]; then
    echo "Patch for this ($driver_version) nvidia driver not found." 1>&2
    echo "Available patches for: " 1>&2
    for drv in "${!patch_list[@]}"; do
        echo "$drv" 1>&2
    done
    exit 1;
fi

patch="${patch_list[$driver_version]}"
object="${object_list[$driver_version]}"

declare -a driver_locations=(
    '/usr/lib/x86_64-linux-gnu'
    '/usr/lib/x86_64-linux-gnu/nvidia/current/'
    '/usr/lib64'
    "/usr/lib/nvidia-${driver_version%%.*}"
)

dir_found=''
for driver_dir in "${driver_locations[@]}" ; do
    if [[ -e "$driver_dir/$object.$driver_version" ]]; then
        dir_found='true'
        break
    fi
done

[[ "$dir_found" ]] || { echo "ERROR: cannot detect driver directory"; exit 1; }

if [[ $rollback_flag ]]; then
    if [[ -f "$backup_path/$object.$driver_version" ]]; then
        cp -p "$backup_path/$object.$driver_version" \
           "$driver_dir/$object.$driver_version"
        echo "Restore from backup $object.$driver_version"
    else
        echo "Backup not found. Try to patch first."
        exit 1;
    fi
else
    if [[ ! -f "$backup_path/$object.$driver_version" ]]; then
        echo "Attention! Backup not found. Copy current $object to backup."
        mkdir -p "$backup_path"
        cp -p "$driver_dir/$object.$driver_version" \
           "$backup_path/$object.$driver_version"
    fi
    sha1sum "$backup_path/$object.$driver_version"
    sed "$patch" "$backup_path/$object.$driver_version" > \
      "$driver_dir/$object.$driver_version"
    sha1sum "$driver_dir/$object.$driver_version"
    ldconfig
    echo "Patched!"
fi

and

#!/bin/bash

############################### DISCLAIMER ################################
# This script now uses someone elses work!                                #
# Please visit https://github.com/revr3nd/plex-nvdec/                     #
# for the author of the new transcode wrapper, and show them your support!#
# Any issues using this script should be reported at:                     #
# https://gist.github.com/Xaero252/9f81593e4a5e6825c045686d685e2428       #
###########################################################################

# This is the download location for the raw script off github. If the location changes, change it here
plex_nvdec_url="https://raw.githubusercontent.com/revr3nd/plex-nvdec/master/plex-nvdec-patch.sh"
patch_container_path="/usr/lib/plexmediaserver/plex-nvdec-patch.sh"

# This should always return the name of the docker container running plex - assuming a single plex docker on the system.
con="$(docker ps --format "{{.Names}}" | grep -i plex)"

# Uncomment and change the variable below if you wish to edit which codecs are decoded:
#CODECS=("h264" "hevc" "mpeg2video" "mpeg4" "vc1" "vp8" "vp9")

# Turn the CODECS array into a string of arguments for the wrapper script:
if [ "$CODECS" ]; then
	codec_arguments=""
	for format in "${CODECS[@]}"; do
		codec_arguments+=" -c ${format}"
	done
fi

echo -n "<b>Applying hardware decode patch... </b><br/>"
	
# Grab the latest version of the plex-nvdec-patch.sh from github:
echo 'Downloading patch script...'
wget -qO- --show-progress --progress=bar:force:noscroll "${plex_nvdec_url}" | docker exec -i "$con"  /bin/sh -c "cat > ${patch_container_path}" 

# Make the patch script executable.
docker exec -i "$con" chmod +x "${patch_container_path}"

# Run the script, with arguments for codecs, if present.

if [ "$codec_arguments" ]; then
	docker exec -i "$con" /bin/sh -c "${patch_container_path}${codec_arguments}"
else
	docker exec -i "$con" /bin/sh -c "${patch_container_path}"
fi
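As an aside, for anyone curious what the sed expressions in the patch_list of the first script actually do: each one replaces a few bytes of machine code in the driver library so that the session-limit branch is never taken (the in-script comments note "test->sub, jne->nop"). A toy demonstration on a made-up file, using the byte pattern from the 390.x entries:

```shell
# Create a fake "library" containing the byte pattern the patch targets.
printf 'AB\x85\xC0\x89\xC5\x75\x18CD' > /tmp/fake-lib.bin

# Same substitution as the 390.x entries in patch_list:
# 85 c0 (test eax,eax) -> 29 c0 (sub eax,eax); 75 18 (jne) -> 90 90 (nop nop)
sed -i 's/\x85\xC0\x89\xC5\x75\x18/\x29\xC0\x89\xC5\x90\x90/g' /tmp/fake-lib.bin

# Inspect the result; the patched bytes should now read 29 c0 89 c5 90 90.
od -An -tx1 /tmp/fake-lib.bin
```

This relies on GNU sed, which accepts \xNN escapes; the real script does the same thing, reading from the copy in the backup directory and writing the patched output over the installed library.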

@Rooster237

Sorry, that didn't copy in the way I thought it would.

@Snawoot
Collaborator

Snawoot commented Jul 10, 2019

@Rooster237 edited

@Rooster237

Rooster237 commented Jul 10, 2019

Thanks, that looks better. There are two scripts there; the first one ends at the "and".

@Snawoot
Collaborator

Snawoot commented Jul 10, 2019

If wget is present in your Docker container, you can try to download the latest patch.sh directly with the following command:

wget -O patch.sh "https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh"

Otherwise, if curl is present in your Docker container, you can download patch.sh like this:

curl -o patch.sh -s "https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh"
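The two alternatives can be folded into one snippet that picks whichever downloader is available (a convenience sketch, not something from the repo; the actual download and patch invocation are left commented out):

```shell
# Pick whichever downloader exists; the real download/patch step is commented out.
url="https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh"
if command -v wget >/dev/null 2>&1; then
    get() { wget -qO "$2" "$1"; }
elif command -v curl >/dev/null 2>&1; then
    get() { curl -fsSo "$2" "$1"; }
else
    echo "Neither wget nor curl is available" >&2
    exit 1
fi
echo "downloader selected"
# get "$url" patch.sh && bash ./patch.sh
```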

@usafle

usafle commented Jul 10, 2019

I'm truly sorry all. I was trying to place the actual PATCH.SH file into my UnRaid OS and then from a terminal window run the command
bash ./patch.sh

I didn't realize that simply deleting the old user script I had set up, and then copying and pasting the "new" patch.sh contents into a new user script, would accomplish the same thing.

Hopefully I'm making sense as to what I was attempting to do incorrectly so someone else reading this will understand that you DO NOT have to do that.

I've copied/pasted the contents of the patch.sh into a new user script and all is working...

No idea where I got the notion that I had to put the actual patch.sh file "next to" the Nvidia driver on the Unraid OS....

I should really go back and delete all my posts so I don't look that stupid. LoL

@marshalleq

Has the Plex decoding side of this changed recently? I ask because I installed the patch on my Unraid as above (it works, first time, thank you). But I did not install the second patch, which is said above to be for decoding. Nevertheless, when I run nvidia-smi -s u (which I'm told displays encode and decode activity), it DOES list decode activity, which would seem to mean the GPU is handling the decoding too. Any thoughts?

e.g.

# gpu    sm   mem   enc   dec
# Idx     %     %     %     %
0     0     0     0     0
0     4     1     0     0
0     3     2     9    14
0     7     7    28    34
0     9    10    39    47
0     8     9    38    43
0     8     9    35    41
0     8     9    35    43
0     8     9    37    43
0     8     9    38    44
0    10    11    47    53
0    10    12    49    55
0    10    12    48    53
0     9    10    42    48
0     9     9    38    44

@niXta1
Contributor

niXta1 commented Dec 17, 2019

Has the Plex decoding side of this changed recently? I ask because I installed the patch on my Unraid as above (it works, first time, thank you). But I did not install the second patch, which is said above to be for decoding. Nevertheless, when I run nvidia-smi -s u (which I'm told displays encode and decode activity), it DOES list decode activity, which would seem to mean the GPU is handling the decoding too. Any thoughts?

Plex has supported NVDEC natively since recently.

@marshalleq

Great. So, for completeness for the next reader: the second script above is no longer needed.

@froman753

froman753 commented Feb 1, 2020

I created a user script with the following content that runs at start-up of the array.
wget -O patch.sh "https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
bash ./patch.sh

This automatically grabs the latest patch script, which has the newest driver versions included, so you don't have to manually copy and paste the latest script.

@usafle

usafle commented Feb 1, 2020

That's great. So just copy/paste the above into a new user script and that's it?

Edit: Guess not, because I did that, ran the script, and got these errors (which I have no idea what they mean because I'm a NOOB!):


Will not apply HSTS. The HSTS database must be a regular and non-world-writable file.
ERROR: could not open HSTS store at '//.wget-hsts'. HSTS will be disabled.
--2020-02-01 15:48:53-- https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10693 (10K) [text/plain]
Saving to: 'patch.sh'

0K .......... 100% 14.0M=0.001s

2020-02-01 15:48:53 (14.0 MB/s) - 'patch.sh' saved [10693/10693]

--2020-02-01 15:48:53-- http://bash/
Resolving bash (bash)... failed: Name or service not known.
wget: unable to resolve host address 'bash'
--2020-02-01 15:48:54-- http://./patch.sh
Resolving . (.)... failed: Name or service not known.
wget: unable to resolve host address '.'
FINISHED --2020-02-01 15:48:54--
Total wall clock time: 1.1s
Downloaded: 1 files, 10K in 0.001s (14.0 MB/s)

@pducharme

I had created a user script with this following that runs at Start Up of Array.
wget -O patch.sh "https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh" bash ./patch.sh

This automatically grabs the latest patch script that has the newest drivers editions included so you don't have to manually copy and paste the latest script.

Nice! Good idea, instead of having to re-update the user script for each new version!

@froman753

froman753 commented Feb 1, 2020

That's great. So just copy/paste the above into a new user script and that's it?

Edit: Guess not because I did that, ran the script and got these errors (that I have no idea what they mean because I'm a NOOB!)


Will not apply HSTS. The HSTS database must be a regular and non-world-writable file.
ERROR: could not open HSTS store at '//.wget-hsts'. HSTS will be disabled.
--2020-02-01 15:48:53-- https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10693 (10K) [text/plain]
Saving to: 'patch.sh'

0K .......... 100% 14.0M=0.001s

2020-02-01 15:48:53 (14.0 MB/s) - 'patch.sh' saved [10693/10693]

--2020-02-01 15:48:53-- http://bash/
Resolving bash (bash)... failed: Name or service not known.
wget: unable to resolve host address 'bash'
--2020-02-01 15:48:54-- http://./patch.sh
Resolving . (.)... failed: Name or service not known.
wget: unable to resolve host address '.'
FINISHED --2020-02-01 15:48:54--
Total wall clock time: 1.1s
Downloaded: 1 files, 10K in 0.001s (14.0 MB/s)

Yup. That should handle it all automatically, so you shouldn't have to worry about it when updating to a new Unraid Nvidia build. It appears you may have an issue executing the bash command. You could try this script instead:

wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
chmod +x patch.sh
./patch.sh

EDIT: I see my issue was with my original comment having the bash command on the same line. The bash ./patch.sh should be on a new line.

@usafle

usafle commented Feb 1, 2020

EDIT: I see my issue was with my original comment having the bash command on the same line. The bash ./patch.sh should be on a new line.

Tried that, and got this:

Script location: /tmp/user.scripts/tmpScripts/Newest Nvidia Patch/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/Newest Nvidia Patch/script: line 2: unexpected EOF while looking for matching `"'
/tmp/user.scripts/tmpScripts/Newest Nvidia Patch/script: line 4: syntax error: unexpected end of file

It appears you may have an issue executing the bash command. You could try this script instead:

wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
chmod +x patch.sh
./patch.sh

Tried that and got this:


Script location: /tmp/user.scripts/tmpScripts/Newest Nvidia Patch/script
Note that closing this window will abort the execution of this script
Will not apply HSTS. The HSTS database must be a regular and non-world-writable file.
ERROR: could not open HSTS store at '//.wget-hsts'. HSTS will be disabled.
--2020-02-01 16:31:34-- https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10693 (10K) [text/plain]
Saving to: 'patch.sh.2'

0K .......... 100% 3.61M=0.003s

2020-02-01 16:31:34 (3.61 MB/s) - 'patch.sh.2' saved [10693/10693]

Detected nvidia driver version: 440.44
Backup exists and driver file differ from backup. Skipping patch.

Not sure what the initial errors at the start of the script are... but the end result looks like it works?

@Morphyous

Morphyous commented Feb 17, 2020

I had created a user script with this following that runs at Start Up of Array.
wget -O patch.sh "https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh" bash ./patch.sh
This automatically grabs the latest patch script that has the newest drivers editions included so you don't have to manually copy and paste the latest script.

Nice! Good idea, instead of having to re-update the user script for each new version!

To simplify (and soothe OCD), add --no-hsts:

wget --no-hsts -O patch.sh "https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh"
bash ./patch.sh

Re: the unneeded errors in the logs; they just waste I/O and storage. Stop the NOISE.

Repository owner locked as resolved and limited conversation to collaborators Feb 17, 2020