google_container_cluster: node_pool_defaults.node_config_defaults.insecure_kubelet_readonly_port_enabled not sent on create even when explicitly set by user #19520
Hi @MaulinS-Pangea,

> `cpu_manager_policy` is a required field ... I've set it to default

Hi @MaulinS-Pangea, you may check the ...
Hi @ggtisc @rd-michel

I.e. I expect that the following Terraform configuration

```hcl
resource "google_container_cluster" "cluster" {
  ...
  node_pool_defaults {
    node_config_defaults {
      insecure_kubelet_readonly_port_enabled = "FALSE"
    }
  }
}
```

will set the corresponding API field, the same way that

```hcl
resource "google_container_cluster" "cluster" {
  ...
  node_pool_defaults {
    node_config_defaults {
      gcfs_config {
        enabled = true
      }
    }
  }
}
```

would.
OK, looks like I narrowed down the issue :) to the wrong handling of default values.

Existing cluster state:

```shell
gcloud container clusters describe cluster \
  --project=project \
  --location=location \
  --flatten=nodePoolDefaults
```

```yaml
---
nodeConfigDefaults:
  gcfsConfig:
    enabled: true
```

Then I try to disable the insecure kubelet read-only port with the following Terraform configuration:

```hcl
resource "google_container_cluster" "cluster" {
  ...
  node_pool_defaults {
    node_config_defaults {
      insecure_kubelet_readonly_port_enabled = "FALSE"
    }
  }
}
```

and terraform shows no changes:

```shell
terraform apply
```

```
No changes. Your infrastructure matches the configuration.
```

But if I explicitly enable the insecure kubelet read-only port first

```hcl
resource "google_container_cluster" "cluster" {
  ...
  node_pool_defaults {
    node_config_defaults {
      insecure_kubelet_readonly_port_enabled = "TRUE"
    }
  }
}
```

terraform detects the change and adds it:

```shell
gcloud container clusters describe cluster \
  --project=project \
  --location=location \
  --flatten=nodePoolDefaults
```

```yaml
---
gcfsConfig:
  enabled: true
nodeKubeletConfig:
  insecureKubeletReadonlyPortEnabled: true
```

And now I am able to actually disable the insecure kubelet read-only port for new (not existing) node pools with

```hcl
resource "google_container_cluster" "cluster" {
  ...
  node_pool_defaults {
    node_config_defaults {
      insecure_kubelet_readonly_port_enabled = "FALSE"
    }
  }
}
```

terraform detects the change and updates it:

```shell
gcloud container clusters describe cluster \
  --project=project \
  --location=location \
  --flatten=nodePoolDefaults
```

```yaml
---
gcfsConfig:
  enabled: true
nodeKubeletConfig:
  insecureKubeletReadonlyPortEnabled: false
```
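One way to read the "No changes" result above (a sketch of the observed behavior under assumed semantics, not the provider's real implementation — `suppressKubeletReadonlyPortDiff` is a hypothetical helper): if a missing API value is treated as already matching the requested default, an explicitly configured "FALSE" never produces a planned change on a cluster where the field was never set.

```go
package main

import "fmt"

// suppressKubeletReadonlyPortDiff is a hypothetical diff-suppress check
// modeling the observed plan behavior: when the API returned nothing for
// the field (old == ""), a configured "FALSE" is treated as "already at
// the default", so no change is planned.
func suppressKubeletReadonlyPortDiff(old, new string) bool {
	if old == "" && new == "FALSE" {
		return true // unset on the server, default requested: suppress
	}
	return old == new
}

func main() {
	fmt.Println(suppressKubeletReadonlyPortDiff("", "FALSE"))     // true: "No changes"
	fmt.Println(suppressKubeletReadonlyPortDiff("", "TRUE"))      // false: change detected
	fmt.Println(suppressKubeletReadonlyPortDiff("TRUE", "FALSE")) // false: update works
}
```

Under these assumed semantics, setting the field to "TRUE" first (so the server stores an explicit value) and only then flipping it to "FALSE" is exactly the workaround sequence described above.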
And just in case, I confirmed my understanding of the inheritance behavior:

```shell
# I disabled readOnlyPort at cluster level via terraform with
# node_pool_defaults.node_config_defaults.insecure_kubelet_readonly_port_enabled: false
gcloud container clusters describe cluster \
  --project=project \
  --location=location \
  --flatten=nodePoolDefaults
---
nodeConfigDefaults:
  gcfsConfig:
    enabled: true
  nodeKubeletConfig:
    insecureKubeletReadonlyPortEnabled: false

# this did not affect the existing node pool
gcloud container node-pools describe existing-pool \
  --project=project \
  --cluster=cluster \
  --location=location \
  --flatten=config \
  --format="value(kubeletConfig)"
# nothing

# kubelet-config was not changed on the existing nodes either
gcloud compute instances describe existing-pool-gke-vm --zone zone --project project | grep readOnlyPort
readOnlyPort: 10255

# I forced the existing nodes to restart via a node pool upgrade
gcloud container clusters upgrade cluster \
  --node-pool=existing-pool \
  --project=project \
  --location=location
All nodes in node pool [existing-pool] of cluster [cluster] will be upgraded from version [1.29.6-gke.1326000] to version [1.29.8-gke.1031000].

# readOnlyPort config is still missing in the node pool config after the upgrade
gcloud container node-pools describe existing-pool \
  --project=project \
  --cluster=cluster \
  --location=location \
  --flatten=config \
  --format="value(kubeletConfig)"
# nothing

# kubelet-config was not changed on the nodes either
gcloud compute instances describe existing-pool-gke-vm --zone zone --project project | grep readOnlyPort
readOnlyPort: 10255

# I added a new test pool without explicitly disabling readOnlyPort
gcloud container node-pools create kubelet-test \
  --project=project \
  --cluster=cluster \
  --location=location \
  --num-nodes=1 \
  --service-account '[email protected]'

# and the newly added node pool inherited the cluster
# node_pool_defaults.node_config_defaults settings as expected
gcloud container node-pools describe kubelet-test \
  --project=project \
  --cluster=cluster \
  --location=location \
  --flatten=config \
  --format="value(kubeletConfig)"
insecureKubeletReadonlyPortEnabled=False

# readOnlyPort is disabled on the new node
gcloud compute instances describe kubelet-test-pool-gke-vm --project=project --zone=zone | grep readOnlyPort
readOnlyPort: 0

# I enabled readOnlyPort via terraform with
# node_pool_defaults.node_config_defaults.insecure_kubelet_readonly_port_enabled: true
gcloud container clusters describe cluster \
  --project=project \
  --location=location \
  --flatten=nodePoolDefaults
---
nodeConfigDefaults:
  gcfsConfig:
    enabled: true
  nodeKubeletConfig:
    insecureKubeletReadonlyPortEnabled: true

# this did not affect the existing node pool, so I deleted it
gcloud container node-pools delete kubelet-test \
  --project=project \
  --cluster=cluster \
  --location=location

# and added it again
gcloud container node-pools create kubelet-test \
  --project=project \
  --cluster=cluster \
  --location=location \
  --num-nodes=1 \
  --service-account '[email protected]'

# and the newly added node pool inherited the cluster
# node_pool_defaults.node_config_defaults settings as expected, again
gcloud container node-pools describe kubelet-test \
  --project=project \
  --cluster=cluster \
  --location=location \
  --flatten=config \
  --format="value(kubeletConfig)"
insecureKubeletReadonlyPortEnabled=True

# readOnlyPort is enabled on the new node
gcloud compute instances describe new-kubelet-test-pool-gke-vm --project=project --zone=zone | grep readOnlyPort
readOnlyPort: 10255
```
Documentation inconsistency: it is not clear at all, and it causes confusion among users, which is the correct way to set ...
Curious whether ... It's difficult to do perfectly because the API doesn't send a value if the value is suppressed. Either way, you may want to see whether updating to 6.4.0 changes or fixes the behavior for you.
Yes, in my understanding, it affects the default behavior of newly created node pools, depending on whether or not they have the setting set. There's also the nested ...

As @ggtisc says, the docs could probably be a little clearer in various spots. But it is also possible that there's an actual corner case somewhere with the behavior of ...

Reading the top-level Google docs on the various places this value can be set is probably a good way to double-check. For the most part, the attribute naming in the provider tracks with the Google APIs.

The good thing is that the default will be changing soon for newly created clusters, and at that point, hopefully people won't need to set this setting.
Side note: it's not now as of 6.4.0 (#19464)
Here's a base test config:

```hcl
resource "google_container_cluster" "with_insecure_kubelet_readonly_port_enabled_node_pool_update" {
  name               = "tf-test-awelkralskddrlkjawer"
  location           = "us-central1-f"
  initial_node_count = 1

  node_pool_defaults {
    # ...
  }
}
```
Previously we enabled "force sending" this field to ensure that it was sent even if false — but that caused #19428 and had to be rolled back, which has in turn caused this (slightly less bad) bug.

A large part of the reason we implemented this field as a string (that gets converted to a boolean internally) is that it should allow us to actually distinguish between "unset" and "set to FALSE" — which means we should be able to reliably tell the difference and only force-send the field in cases where it has been explicitly set to FALSE by the user.
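The tristate idea described above can be sketched as follows (an illustration of the described approach; `parseTristate` and `shouldForceSend` are hypothetical names, not the provider's real functions): because the schema field is a string, "TRUE", "FALSE", and unset are all distinguishable, so the field can be force-sent exactly when the user explicitly chose "FALSE".

```go
package main

import (
	"fmt"
	"strings"
)

// parseTristate maps the schema's string representation to a *bool:
// nil means the user never set the field; otherwise the pointed-to
// value is the user's explicit choice.
func parseTristate(raw string) *bool {
	switch strings.ToUpper(raw) {
	case "TRUE":
		v := true
		return &v
	case "FALSE":
		v := false
		return &v
	default: // "" or anything else: treated as unset
		return nil
	}
}

// shouldForceSend reports whether the field must be included in the
// request even though its wire value is the zero value: exactly when
// the user explicitly chose FALSE. (TRUE is non-zero and is sent anyway;
// unset must not be sent at all.)
func shouldForceSend(raw string) bool {
	v := parseTristate(raw)
	return v != nil && !*v
}

func main() {
	fmt.Println(shouldForceSend(""))      // false: unset, don't send
	fmt.Println(shouldForceSend("TRUE"))  // false: true is sent anyway
	fmt.Println(shouldForceSend("FALSE")) // true: must force-send
}
```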
Community Note
Terraform Version & Provider Version(s)
Terraform v1.8.5
on darwin_arm64
Affected Resource(s)
google_container_cluster
Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/disable-kubelet-readonly-port#check-port-standard
As per the GCP docs, to disable the kubelet read-only port at the cluster level, the hierarchy is `nodePoolDefaults.nodeConfigDefaults.nodeKubeletConfig`. The Terraform equivalent of this would be `node_pool_defaults.node_config_defaults.insecure_kubelet_readonly_port_enabled` in a `google_container_cluster` resource. The apply completes successfully, but then if I run ... I get ... The expected output should contain `insecureKubeletReadonlyPortEnabled: false` if the apply was successful and the port was disabled.

I believe the setting is applicable to new clusters only. When I do this at the `google_container_node_pool` level, it works. Either way, it would be nice to have somewhat clearer documentation.
Terraform Configuration
Debug Output
No response
Expected Behavior
No response
Actual Behavior
No response
Steps to reproduce
terraform apply
Important Factoids
No response
References
No response
b/369904303