Stop using system-level certificates directories (i.e. /etc/pki, /etc/ssl/certs, and /etc/ca-certificates directories) #1692
It sounds good. I don't know enough about this topic to really get my head around the suggestion though. At the moment you can have a single root CA, even managed outside the cluster, with the root CA published to the system CA certificate store on machines that care about such things. It sounds like this change still fits in OK with that single root CA approach. I want to check this is currently feasible and will stay feasible:
The model I have is that if Kubernetes is using an internal CA, and this proposal happens, it's almost the same layout as with an offline root and intermediate CAs, only now the Kubernetes-specific root CA lives in (e.g.)
This use case would be impacted by the proposed change. With the change, any necessary external CA root certificates would need to be placed in the kubeadm-managed directory (e.g. /etc/kubernetes/ca-certificates/). Using the system-level directory isn't feasible unless kubeadm intends to get into the "distro mess". This is the problem being discussed in #1665.
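To make the impact concrete, here is a minimal Go sketch (not kubeadm code) of how a control-plane client, such as the controller manager's cloud-provider client, could build its trust store from the proposed kubeadm-managed directory instead of the distro store. The /etc/kubernetes/ca-certificates/ path is only the example location suggested in this issue, not something kubeadm creates today.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

// caDir is the kubeadm-managed location proposed in this issue
// (illustrative only; kubeadm does not manage this directory today).
const caDir = "/etc/kubernetes/ca-certificates"

func main() {
	pool := x509.NewCertPool()

	entries, err := os.ReadDir(caDir)
	if err != nil {
		log.Fatalf("reading %s: %v", caDir, err)
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		pemBytes, err := os.ReadFile(filepath.Join(caDir, e.Name()))
		if err != nil {
			log.Printf("skipping %s: %v", e.Name(), err)
			continue
		}
		// Files that do not contain PEM certificates are silently ignored.
		pool.AppendCertsFromPEM(pemBytes)
	}

	// A client (e.g. one talking to a cloud provider API) that trusts only
	// the CAs placed in the kubeadm-managed directory, not the distro store.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	_ = client
}
```

Under this model, anything an organisation would previously have pushed into the distro store (for example a proxy's root CA) would instead be dropped into that directory.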
If we do this, we'll start being in the business of managing root certificates for the controller-manager cloud-provider client code. Is that something kubeadm or the Kubernetes project wants to be responsible for? In addition, organisations that use proxies may know how to run the distro-appropriate commands to include their organisation root CA, but this would break again for Kubernetes. Let's disambiguate a few things:
#1665 was only about the root CAs for clients of external services. Right now, that means the controller manager talking to cloud providers; the out-of-tree cloud providers will eventually need this as well. In addition, a single CA root for the entire control plane is not desirable. kubeadm does the right thing by producing independent CAs for etcd, the API server, the front proxy, etc.
Distro
I think we shouldn't, as this falls outside of the "minimal viable cluster" case. kubeadm already manages quite a large number of certificates and we should avoid including more.
This can be a problem, yes.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen
I revisited these two issues: currently we are mounting "some" folders into the control-plane component containers so they can use them. https://golang.org/src/crypto/x509/root_linux.go shows where Go looks, but Go currently does not expose a way to return the folders / files it actually found (no public methods for that).
So the question here is how to find the paths that will be used for the construction of the root CA bundle and move them to a common location, while avoiding the distro mess and without forking golang. Instead of moving the contents of these paths to a common kubeadm-maintained location, we might as well just mount them directly.
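As a rough illustration of that "find the paths" step, the sketch below probes the default locations that Go's crypto/x509 root_linux.go is known to search (the exact lists vary by Go version, and Go does not export them, so this is a hand-maintained approximation). Whatever exists on the host is essentially what kubeadm would have to mount, or copy, into the control-plane static pods.

```go
package main

import (
	"fmt"
	"os"
)

// Candidate locations approximating the defaults hard-coded in Go's
// crypto/x509 (root_linux.go); the exact lists differ between Go versions.
var certFiles = []string{
	"/etc/ssl/certs/ca-certificates.crt",                // Debian/Ubuntu
	"/etc/pki/tls/certs/ca-bundle.crt",                  // Fedora/RHEL
	"/etc/ssl/ca-bundle.pem",                            // OpenSUSE
	"/etc/pki/tls/cacert.pem",                           // OpenELEC
	"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7
	"/etc/ssl/cert.pem",                                 // Alpine
}

var certDirs = []string{
	"/etc/ssl/certs",
	"/etc/pki/tls/certs",
}

func main() {
	// Report which candidates exist on this host, i.e. what would need to be
	// mounted (or copied) so the containers see the same roots as the host.
	for _, p := range append(certFiles, certDirs...) {
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found:", p)
		}
	}
}
```

Mounting whichever of these paths actually exist, as suggested above, avoids keeping such a list in lockstep with Go releases inside kubeadm.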
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Closing as per the discussion above, and given that nothing actionable was done in the past 5 years.
I posted the following suggestion in Slack; @neolit123 suggested we log an issue to track/discuss it here instead:
Issues #279 , #1367, and #1665 all share a common theme regarding the troublesome /etc/pki, /etc/ssl/certs, and /etc/ca-certificates directories.
I suggested in the sig-cluster-lifecycle Slack moving the CA certificates directory which kubeadm uses to something which kubeadm controls (e.g. /etc/kubernetes/ca-certificates/ or similar) instead of using the system-controlled directories. Doing this would immediately allow all of these issues to have straightforward resolutions.
/assign @randomvariable @neolit123
/area security
/kind design