
[BUG] Nvidia_drm.modeset=1 is not properly set, based on nvidia module names in /lib/modules/[your-kernel-version]/updates/dkms #144

Closed
Klusio19 opened this issue Nov 6, 2023 · 4 comments
Labels
bug Something isn't working

Comments


Klusio19 commented Nov 6, 2023

Describe the bug
If the Nvidia modules in /lib/modules/[your-kernel-version]/updates/dkms are named differently than nvidia, the nvidia_drm.modeset=1 option will not be applied, so you won't be able to log in to a Wayland session, etc. Plenty of people will have such names: on my system the modules are nvidia-current-drm.ko, nvidia-current-modeset.ko, nvidia-current-peermem.ko, nvidia-current-uvm.ko and nvidia-current.ko, i.e. the prefix is nvidia-current instead of nvidia.
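
For reference, this is roughly what the DKMS module files look like on such a system (the path simply assumes the currently running kernel):

    ls /lib/modules/$(uname -r)/updates/dkms/
    # nvidia-current.ko  nvidia-current-drm.ko  nvidia-current-modeset.ko
    # nvidia-current-peermem.ko  nvidia-current-uvm.ko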

To check whether nvidia_drm.modeset is enabled, I run sudo cat /sys/module/nvidia_drm/parameters/modeset. If the output is 'Y' it is enabled; if it is 'N', it is not.
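
For example:

    sudo cat /sys/module/nvidia_drm/parameters/modeset
    Y    # 'Y' means modeset is enabled, 'N' means it is not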

To Reproduce
Steps to reproduce the behavior:

  1. Run sudo envycontrol -s nvidia (or sudo envycontrol -s hybrid)
  2. nvidia_drm is not properly loaded (modeset=1 is not applied) if your Nvidia modules have different names, as described earlier.

Expected behavior
Properly load nvidia_drm.

System Information:

  • Model: Lenovo Legion Y540 17"
  • Distro: Debian GNU/Linux trixie/sid x86_64
  • Kernel: 6.5.10-x64v3-xanmod1
  • DE/WM and Display Manager (if applicable): KDE Plasma
  • EnvyControl version: 3.3.0
  • Nvidia driver version: 525.125.06-2
  • lspci output:
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 07)
00:04.0 Signal processing controller: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem (rev 07)
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
00:14.3 Network controller: Intel Corporation Cannon Lake PCH CNVi WiFi (rev 10)
00:15.0 Serial bus controller: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 (rev 10)
00:15.1 Serial bus controller: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #1 (rev 10)
00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
00:17.0 SATA controller: Intel Corporation Cannon Lake Mobile PCH SATA AHCI Controller (rev 10)
00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
00:1d.5 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #14 (rev f0)
00:1e.0 Communication controller: Intel Corporation Cannon Lake PCH Serial IO UART Host Controller (rev 10)
00:1f.0 ISA bridge: Intel Corporation HM470 Chipset LPC/eSPI Controller (rev 10)
00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)
00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
00:1f.5 Serial bus controller: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
01:00.0 VGA compatible controller: NVIDIA Corporation TU116M [GeForce GTX 1660 Ti Mobile] (rev a1)
06:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03)
07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

Additional context
To properly load nvidia_drm (if your Nvidia modules have different names, as described earlier; in my case nvidia-current), change the line generated by EnvyControl in /etc/modprobe.d/nvidia.conf
from options nvidia-drm modeset=1
to options nvidia-current-drm modeset=1.
After that, regenerate your initramfs image by running sudo update-initramfs -u -k all and reboot. Note that every time you switch modes via EnvyControl, it overwrites that file, so the edit has to be repeated each time.
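
The same workaround condensed into commands (the sed invocation below is just one way to perform the edit described above; adjust the module prefix if yours differs from nvidia-current):

    # replace the module name in the config generated by EnvyControl
    sudo sed -i 's/^options nvidia-drm /options nvidia-current-drm /' /etc/modprobe.d/nvidia.conf

    # rebuild the initramfs for all installed kernels, then reboot
    sudo update-initramfs -u -k all
    sudo reboot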

Klusio19 added the bug label Nov 6, 2023
bayasdev (Owner) commented Nov 7, 2023

@Klusio19 just checked my Ubuntu 23.10 install with Nvidia 535.129.03 and the kernel module is named nvidia-drm (not sure if this is a Debian-specific caveat).

I'm not sure how we can reliably detect the appropriate name in the following situation:

Integrated mode --> Hybrid/Nvidia mode

since the Nvidia modules are not loaded while in integrated mode.

However, I can implement an opt-in CLI flag like --use-nvidia-current.

Let me know what you think.
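
A sketch of what such a flag could amount to in practice: the options line EnvyControl writes to /etc/modprobe.d/nvidia.conf with and without it (illustrative only; the flag name and behaviour are the proposal above, not a confirmed implementation):

    # without the flag (current behaviour)
    options nvidia-drm modeset=1

    # with --use-nvidia-current (proposed)
    options nvidia-current-drm modeset=1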

Klusio19 (Author) commented Nov 7, 2023

@bayasdev I found out about the Nvidia module naming scheme in this forum thread: https://forums.debian.net/viewtopic.php?t=154344, where the poster says it's because of DKMS. So I don't think it is Debian-specific, although it doesn't really matter here.

You are correct that when switching from integrated to hybrid/Nvidia mode there is no way to determine what the Nvidia modules are named. At the moment I can't think of anything better than the solution you proposed; I think a simple CLI argument would be enough.

Also, if you decide to implement that, a small update to the README about this module naming situation would be helpful. But I'm sure you are aware of that 🙂

bayasdev added a commit that referenced this issue Nov 7, 2023
bayasdev (Owner) commented Nov 7, 2023

@Klusio19 please test the new 3.3.1 release

Klusio19 (Author) commented Nov 7, 2023

@Klusio19 please test the new 3.3.1 release

Upgraded; using the flag works flawlessly, the nvidia_drm module is now properly loaded!
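
For reference, the switch command with the new flag presumably looks like this (flag name as proposed earlier in the thread; check envycontrol --help on the 3.3.1 release to confirm):

    sudo envycontrol -s nvidia --use-nvidia-current
    # or
    sudo envycontrol -s hybrid --use-nvidia-current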

bayasdev closed this as completed Nov 7, 2023