
Kerberos Authentication on AL 2023 doesn't work. - K8S 1.30 #823

Open
Sevenlive opened this issue Aug 13, 2024 · 3 comments
@Sevenlive

What happened: csi-driver-smb doesn't work on AL 2023 nodes and doesn't emit any useful error. I installed a new Kubernetes cluster on AWS with version 1.30. The standard image for 1.30 is AL 2023, since the old AL2 image is deprecated. On AL 2023 the container doesn't mount the SMB path properly: it shows the contents, but when you try to cd into a directory on the SMB share, it fails with either "Required key not available" or "sh: cd: can't cd to XXXXXX/: No error information", depending on the container you are currently using.

When running AL2 nodes this doesn't happen. I assume it's some kind of SELinux or other container-isolation issue, but I am not sure how to debug it.

What you expected to happen:
Being able to read files from the server and write files to it via Kerberos, on both AL 2023 and AL2.

How to reproduce it:

Spawn a cluster with two nodes, one with AL2 and one with AL2023. Create a secret with a token, then create a PVC and PV (a PV sketch follows after this list). Use the following mount options for the PV:

- dir_mode=0777
- file_mode=0777
- vers=3.0
- cruid=0
- sec=krb5
- user=XXXXXX (Windows User)
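
For concreteness, a minimal PV using these mount options might look like the sketch below. This is an assumption-laden sketch, not the reporter's actual manifest: the PV name, share path, and secret reference are hypothetical placeholders, and the redacted user stays redacted.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-pv                             # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:                            # options from this report
    - dir_mode=0777
    - file_mode=0777
    - vers=3.0
    - cruid=0
    - sec=krb5
    - user=XXXXXX                          # redacted Windows user, as above
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-pv-handle            # hypothetical; must be cluster-unique
    volumeAttributes:
      source: //server.example.com/share   # hypothetical share
    nodeStageSecretRef:
      name: smb-kerberos                   # hypothetical secret, sketched further below
      namespace: default
```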

Spawn two pods (for example the example nginx pod from this repo) with a nodeSelector, one for the AL2 node and one for the AL2023 node; see the pod sketch below. cd into the mounted root directory. On AL2023 that should work and you should see the folders, but if you cd into a subfolder, you should get an error.

On AL2 you can do everything you want.
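
A sketch of one of the two pods follows; the second differs only in the nodeSelector value. The label key ami-family is a hypothetical, self-applied node label, since the built-in labels for distinguishing AL2 from AL2023 node groups vary by EKS setup.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-al2023            # hypothetical name
spec:
  nodeSelector:
    ami-family: al2023          # hypothetical label; apply it to your nodes first
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: smb
          mountPath: /mnt/smb
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: smb-pvc      # hypothetical PVC bound to the PV above
```

Exec into each pod and try to cd within /mnt/smb to compare the behavior.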

Anything else we need to know?:

We are using Kerberos and think the problem is Kerberos related. The directory is usable both on the node and in the smb container of csi-driver-smb, because both have the right ticket mounted in /var/lib/kubelet/kerberos. The pods using the mount provided by csi-driver-smb don't have the ticket, but that is true on both AL2 and AL2023, and it still works on AL2.
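
For reference, the ticket secret might look like the sketch below. The key name krb5cc_0 is an assumption chosen to match the cruid=0 mount option; check the csi-driver-smb Kerberos docs for the exact key contract your driver version expects.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smb-kerberos    # hypothetical name, matching the nodeStageSecretRef above
  namespace: default
type: Opaque
data:
  krb5cc_0: <base64-encoded Kerberos credential cache>   # assumed key matching cruid=0
```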

Environment:
image: registry.k8s.io/sig-storage/smbplugin:v1.15.0

  • CSI Driver version: 1.15.0
  • Kubernetes version (use kubectl version): v1.30.2-eks-db838b0
  • OS (e.g. from /etc/os-release):
    NAME="Amazon Linux"
    VERSION="2023"
    ID="amzn"
    ID_LIKE="fedora"
    VERSION_ID="2023"
    PLATFORM_ID="platform:al2023"
    PRETTY_NAME="Amazon Linux 2023.5.20240701"
  • Kernel (e.g. uname -a): Linux ip-10-32XXXXXXXXXXXX 6.1.94-99.176.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 18 14:57:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: krb5-workstation cifs-utils cronie
  • Others: klist output
    Ticket cache: FILE:/var/lib/kubelet/kerberos/krb5cc_0
    Default principal: windows_XXXX@XXXXXXXX

    Valid starting       Expires              Service principal
    08/13/24 09:52:54    08/13/24 19:52:54    krbtgt/[email protected]
        renew until 08/20/24 09:52:54
    08/13/24 09:52:55    08/13/24 19:52:54    cifs/[email protected]
        renew until 08/20/24 09:52:54
    08/13/24 10:00:24    08/13/24 19:52:54    cifs/sdfs0XX.XXX.XX@
        renew until 08/20/24 09:52:54

Ticket server: cifs/[email protected]

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 11, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 11, 2024
@Sevenlive
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 15, 2024