
[SELinux] K3s provisioning #1361

Closed
Tracked by #251
fgiudici opened this issue Apr 16, 2024 · 1 comment · Fixed by #1395
Labels: area/selinux, kind/enhancement (New feature or request)
Milestone: Micro6.1

fgiudici (Member) commented Apr 16, 2024

Check the requirements for successful K3s provisioning and operation in enforcing mode. Check whether the k3s-selinux module for the targeted policy is enough.
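One way to verify that part of the requirement is to check that the policy module is actually installed on the node. This is a hedged sketch: `semodule -l` lists installed modules, and the module name `k3s` is assumed from the k3s-selinux package.

```shell
# has_k3s_policy: read `semodule -l` output on stdin and succeed if a
# module whose name starts with "k3s" is installed.
# (Sketch; assumes the k3s-selinux package installs a module named "k3s".)
has_k3s_policy() {
  grep -q '^k3s'
}

# Usage (on the node):
#   semodule -l | has_k3s_policy && echo "k3s-selinux module loaded"
```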

@fgiudici fgiudici mentioned this issue Apr 16, 2024
8 tasks
@fgiudici fgiudici moved this to 🗳️ To Do in Elemental Apr 16, 2024
@kkaempf kkaempf added the kind/enhancement New feature or request label Apr 23, 2024
@kkaempf kkaempf added this to the Micro6.1 milestone Apr 23, 2024
anmazzotti (Contributor) commented May 1, 2024

k3s seems to be working just fine when enforcing SELinux.

Cluster config:

kind: Cluster
apiVersion: provisioning.cattle.io/v1
metadata:
  name: volcano
  namespace: fleet-default
spec:
  rkeConfig:
    machineGlobalConfig:
      selinux: true
      debug: true
  kubernetesVersion: v1.27.12+rke2r1

sestatus:

test-loopdev-c00616c7-e1e9-422c-a786-9bcd826cd8fd:~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33
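For scripting the same verification, here is a small sketch that parses the `Current mode` line from `sestatus` output; it assumes the standard column layout shown above (the helper name is made up for illustration).

```shell
# check_enforcing: read `sestatus` output on stdin; succeed only if the
# "Current mode" line reports "enforcing". Fails if the line is absent.
check_enforcing() {
  awk -F':[[:space:]]+' '
    /^Current mode/ { found = 1; if ($2 == "enforcing") exit 0; exit 1 }
    END { if (!found) exit 1 }
  '
}

# Usage (on the node):
#   sestatus | check_enforcing && echo "SELinux is enforcing"
```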

k3s systems nominal:

test-loopdev-c00616c7-e1e9-422c-a786-9bcd826cd8fd:~ # crictl ps --all
CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
9ea51ecb85e3c       a1862edffc506       5 minutes ago       Exited              upgrade                     0                   118b65b676460       apply-system-agent-upgrader-on-test-loopdev-c00616c7-e1e9-jwdrr
79653b1c2b524       abc5338582c4f       6 minutes ago       Running             system-upgrade-controller   0                   a733e42fa74df       system-upgrade-controller-78cfb99bb7-df7s4
2df55178b01bd       cf9285647d681       6 minutes ago       Running             fleet-agent                 0                   1472284d2c49b       fleet-agent-7f9ccfb8b-c2696
db81e9200355f       b49684953b3b9       6 minutes ago       Running             rancher-webhook             0                   3e976a7bc303b       rancher-webhook-7dc6679459-jcmv7
c63cae14fc3fc       e9ca1dba2cccd       6 minutes ago       Exited              proxy                       0                   b644eccbafb72       helm-operation-cmks8
60252d88df5f4       e9ca1dba2cccd       6 minutes ago       Exited              helm                        0                   b644eccbafb72       helm-operation-cmks8
3584bc778e95b       6a8d5ae6dd415       7 minutes ago       Running             cluster-register            0                   45eb50ec62881       cattle-cluster-agent-957455d66-kxhz8
72bba4d2d5c37       cc365cbb0397b       7 minutes ago       Running             traefik                     0                   a9df8c4c174bd       traefik-768bdcdcdd-wlhx9
5ccd4b73b4120       af74bd845c4a8       7 minutes ago       Running             lb-tcp-443                  0                   a837431866d81       svclb-traefik-8fb9930c-qrgj6
3e3f99488311e       af74bd845c4a8       7 minutes ago       Running             lb-tcp-80                   0                   a837431866d81       svclb-traefik-8fb9930c-qrgj6
c80fc51c78a46       5f89cb8137ccb       7 minutes ago       Exited              helm                        1                   3644b616dfb96       helm-install-traefik-nh6dg
296e49fe60fbf       817bbe3f2e517       7 minutes ago       Running             metrics-server              0                   4de83d24121e9       metrics-server-648b5df564-pvxm9
b8c0667d23bbe       5f89cb8137ccb       7 minutes ago       Exited              helm                        0                   08a39a9e144c2       helm-install-traefik-crd-r4jdg
bf15a6087dab4       b29384aeb4b13       7 minutes ago       Running             local-path-provisioner      0                   e40d547f3a9fe       local-path-provisioner-957fdf8bc-kzf85
0fb57bc9072ab       ead0a4a53df89       7 minutes ago       Running             coredns                     0                   3baeb5bc623d9       coredns-77ccd57875-gfwm6
test-loopdev-c00616c7-e1e9-422c-a786-9bcd826cd8fd:~ # kubectl get nodes -o wide
NAME                                                STATUS   ROLES                              AGE     VERSION        INTERNAL-IP       EXTERNAL-IP   OS-IMAGE              KERNEL-VERSION    CONTAINER-RUNTIME
test-loopdev-c00616c7-e1e9-422c-a786-9bcd826cd8fd   Ready    control-plane,etcd,master,worker   8m11s   v1.27.8+k3s2   192.168.122.134   <none>        openSUSE Tumbleweed   6.8.7-1-default   containerd://1.7.7-k3s1.27

containerd also shows the enableSelinux flag turned on (full file: k3s-containerd-info.json):

test-loopdev-c00616c7-e1e9-422c-a786-9bcd826cd8fd:~ # crictl info
...
    "enableSelinux": true,
...
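The same flag can be checked from a script. A hedged sketch: it greps the `crictl info` JSON for the key rather than assuming its exact path inside the containerd config, which can vary between containerd/crictl versions.

```shell
# cri_selinux_enabled: read `crictl info` JSON on stdin; succeed if an
# enableSelinux flag set to true appears anywhere in the config dump.
cri_selinux_enabled() {
  grep -Eq '"enableSelinux":[[:space:]]*true'
}

# Usage (on the node):
#   crictl info | cri_selinux_enabled && echo "SELinux enabled in CRI"
```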

@davidcassany davidcassany moved this from 🗳️ To Do to 🏃🏼‍♂️ In Progress in Elemental May 6, 2024
@davidcassany davidcassany self-assigned this May 6, 2024
@github-project-automation github-project-automation bot moved this from 🏃🏼‍♂️ In Progress to ✅ Done in Elemental May 14, 2024