Support data persistence: integrate with StorageClass #28
This isn't a good first issue. It needs some experience with K8s and PV/PVC/StorageClass.
etcd-operator uses Kind for CI, and Kind includes a local-path provisioner, so I think just using the local-path provisioner from Kind is enough:

```console
$ k get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  15m
```
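For the CI case, a PersistentVolumeClaim could simply reference Kind's default `standard` StorageClass shown above. A minimal sketch (the claim name and requested size are illustrative, not anything the operator currently creates):

```yaml
# Hypothetical PVC for a Kind-based test environment.
# "standard" is Kind's default local-path StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: etcd-data          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi         # illustrative size
```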
/assign @gdasson
@ahrtr: GitHub didn't allow me to assign the following users: gdasson.

Note that only etcd-io members with read permissions, repo collaborators, and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I think it would be better to support both file-system and block volumes. We leave it to users to decide what kind of storage to use, but we need to ensure the configuration is flexible. If users do not configure any storage, then we just use the container's default temporary storage; that is useful for quick evaluation or experiments in a test environment.

Specifically, users should be able to configure:

If users configure a volume, then it's shared by all pods. In that case, we need to create a sub-directory to avoid conflicts. See example below. Usually this is the file-system storage use case, e.g. NFS. The administrator needs to provision and manage the PVC/PV themselves.

If users configure a

For now, we can define a struct

and add a field
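The struct and field names in the comment above were elided; as a sketch of what such configuration might look like from the user's side (the `storage` field and everything under it are hypothetical names, not a committed API):

```yaml
# Hypothetical EtcdCluster spec fragment. "storage" and its
# sub-fields are illustrative only.
apiVersion: operator.etcd.io/v1alpha1
kind: EtcdCluster
metadata:
  name: example
spec:
  size: 3
  storage:
    # A shared volume (e.g. NFS); the operator would create a
    # sub-directory per member to avoid conflicts.
    volumeSource:
      nfs:
        server: nfs.example.com   # illustrative server
        path: /exports/etcd       # illustrative path
```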
Please assign
/assign @gdasson
The goal is to ensure that etcd's data will not get lost once the VM or pod is rebooted. So we need to support setting a VolumeClaimTemplate for the StatefulSet, something like below.

Note that it isn't etcd-operator's responsibility to provision / manage the CSI driver in a production environment.
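The example referred to above was elided; a minimal sketch of what a `volumeClaimTemplates` section on the StatefulSet might look like (the claim name, StorageClass, and size are assumptions, not the operator's actual defaults):

```yaml
# Hypothetical StatefulSet fragment; only the storage-related
# field is shown. Names and sizes are illustrative.
apiVersion: apps/v1
kind: StatefulSet
spec:
  volumeClaimTemplates:
    - metadata:
        name: etcd-data            # illustrative claim name
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard # illustrative StorageClass
        resources:
          requests:
            storage: 1Gi           # illustrative size
```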
But we need to provision / manage a CSI driver for our test environment. One possible solution I can think of is openebs localpv, or csi-driver-host-path for a single-node test environment, or using hostPath directly, something like below.
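The hostPath example referred to above was elided; a sketch of that variant for a single-node test environment (the volume name and path are illustrative):

```yaml
# Hypothetical pod-template fragment using hostPath directly.
# Suitable only for single-node test environments.
volumes:
  - name: etcd-data
    hostPath:
      path: /tmp/etcd-data        # illustrative path
      type: DirectoryOrCreate
```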
I am open to other alternatives.