Update readiness probe docs to match observed behaviour #49476
base: main
Conversation
Welcome @NovemberZulu!
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
✅ Pull request preview available for checking. Built without sensitive environment variables.
/sig docs
> to stop. Note that `Ready` condition in the Pod status is still `"True"`
> and `.status.phase` is `"Running"` while the Pod is terminating.
I'd avoid putting “note that” inside a note callout.
Try:
```diff
- to stop. Note that `Ready` condition in the Pod status is still `"True"`
- and `.status.phase` is `"Running"` while the Pod is terminating.
+ to stop. The `Ready` condition in the Pod status remains true, and `.status.phase`
+ remains Running, until the Pod is fully terminated.
```
Which of these do you think is true, @NovemberZulu?
- during Pod termination, the kubelet omits readiness probing, even when readiness probes are defined
- during Pod termination, the kubelet continues readiness probing when readiness probes are defined, but ignores the result and always marks the Pod as ready
- during Pod termination, the kubelet continues readiness probing when readiness probes are defined, but ignores the result and retains the Ready condition value that was in effect before termination began
- during Pod termination, the value of the Ready condition depends on whether the Pod uses Pod readiness gates and other factors not mentioned in the current docs
- none of the above are correct
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Updated the PR, thank you for the suggestion!
Based on my current understanding of the code, I think the behaviour is:
- during Pod termination, the kubelet continues readiness probing when readiness probes are defined, and marks the Pod as either ready or not ready depending on the result, i.e. none of the above
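
A hedged way to check that claim (this manifest is my own construction, not from the PR): give the Pod a readiness probe that passes, have the preStop hook break the probed path, and watch whether `Ready` flips to `False` while the Pod terminates.

```yaml
apiVersion: v1
kind: Pod
metadata:
  # hypothetical test Pod, not part of the PR
  name: probe-during-termination
spec:
  containers:
  - name: test
    image: nginx
    readinessProbe:
      httpGet:
        path: /index.html
        port: 80
      periodSeconds: 2
    lifecycle:
      preStop:
        exec:
          # Delete the probed file so the probe starts returning 404, then keep
          # the container alive (within the default 30s grace period) so several
          # probe cycles run during termination.
          command: ["sh", "-c", "rm /usr/share/nginx/html/index.html; sleep 20"]
```

If `Ready` turns `False` a few seconds after `kubectl delete pod probe-during-termination --wait=false`, the kubelet is still honouring probe results during termination.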
> to stop. The `Ready` condition in the Pod status remains true, and `.status.phase`
> remains Running, until the Pod is fully terminated.
- What if there is a readiness probe defined? Is this sentence still true?
- What is an unready state, if not having the condition `Ready` set to `false`?
I tried it and the new sentence is false.
To reproduce it, run the manifest below.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 20"]
```
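
The comment doesn't show the deletion step; a minimal sketch of how the test could be driven, assuming the manifest above is saved as `pod.yaml`:

```shell
kubectl apply -f pod.yaml
kubectl wait --for=condition=Ready pod/test   # wait until the probe first succeeds
kubectl delete pod test --wait=false          # start termination; preStop sleeps for 20s
sleep 18                                      # inspect shortly before the preStop hook ends
kubectl describe pod test
```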
Result at around 18-20 seconds into termination:
```
kubectl describe pod test
...
Status:       Running
...
Conditions:
  Type                        Status
  PodReadyToStartContainers   False   <
  Initialized                 True
  Ready                       False   <
  ContainersReady             False   <
  PodScheduled                True
```
An unready state occurs when a readiness probe fails but the container is running. For example, if you use a path that doesn't exist in an HTTP GET probe, the Pod will be in an unready state.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /doesnotexist
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```
```
kubectl describe pod test
...
Status:       Running
...
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False   <
  ContainersReady             False   <
  PodScheduled                True
```
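
To see the `Ready` condition change over time rather than at a single instant, a sketch (the polling loop and JSONPath expression are my own, not from the comment):

```shell
# Print phase and Ready status every 2s until the Pod object disappears.
while kubectl get pod test -o \
  jsonpath='{.status.phase}{"\t"}Ready={.status.conditions[?(@.type=="Ready")].status}{"\n"}'
do
  sleep 2
done
```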
Let me reword: the `Ready` condition doesn't turn unconditionally `"True"`, but it doesn't turn unconditionally `"False"` either.
> What is an unready state, if not having the condition `Ready` set to `false`?

This is a good question. Based on "on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists", I assumed that the `Ready` condition would turn to `"False"` when I delete the Pod, but this is not the case. As discussed in Termination of Pods, terminating Pods don't receive traffic from LBs. I had an assumption that "pod is ready" <=> "pod accepts traffic", but it looks like this is not actually true. Should we remove the note altogether?
I reworded the note to talk about traffic, since arguably this was the initial intention.
> necessarily need a readiness probe; when the POD is deleted, the corresponding endpoint
> will have its `ready` status as `false`, so load balancers will not use it for regular
> traffic. The endpoint remains in the unready state while it waits for the containers
> in the Pod to stop.
I'll try to add details about endpoint conditions.
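
For reference, a hedged sketch of how those endpoint conditions could be inspected, assuming a Service named `test` selects this Pod (the Service itself is my assumption; it is not part of the PR):

```shell
# List the ready/serving/terminating conditions of each endpoint backing the Service.
kubectl get endpointslice -l kubernetes.io/service-name=test -o \
  jsonpath='{range .items[*].endpoints[*]}{.targetRef.name}: ready={.conditions.ready} serving={.conditions.serving} terminating={.conditions.terminating}{"\n"}{end}'
```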
> into an unready state regardless of whether the readiness probe exists.
> The Pod remains in the unready state while it waits for the containers in the Pod
> to stop.

> necessarily need a readiness probe; when the POD is deleted, the corresponding endpoint
nit: Pod (not POD)
Will fix
Another attempt :)
Co-authored-by: Tim Bannister <[email protected]>
Description

This PR updates the documentation about readiness probes to explicitly state that Pods that are being terminated (i.e. `.metadata.deletionTimestamp` is set) still have the `Ready` condition `"True"` and `.status.phase == "Running"`. I kept the existing text, but looking at
https://github.com/kubernetes/kubernetes/blob/f64b651ebae643d422f4625161dc415970e2c166/pkg/kubelet/prober/prober_manager.go#L298, https://github.com/kubernetes/kubernetes/blob/f64b651ebae643d422f4625161dc415970e2c166/pkg/kubelet/prober/prober_manager.go#L339 and
https://github.com/kubernetes/kubernetes/blob/f64b651ebae643d422f4625161dc415970e2c166/pkg/kubelet/prober/worker.go#L259
I'd say that ~~liveness~~ readiness probes are processed the same way no matter whether the Pod is terminating or not. Please advise whether we should rephrase the note further, or remove it entirely. Thank you!

EDIT: Confused readiness and liveness probes.