Explain NetworkPolicy + service.type=LoadBalancer & Ingress behavior #1
Comments
I have the same issue. First I denied all traffic:
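A minimal sketch of such a deny-all ingress policy (the original manifest wasn't preserved in this thread; the policy name is assumed):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all  # name assumed
spec:
  podSelector: {}    # selects every pod in the namespace
  policyTypes:
  - Ingress          # with no ingress rules, all inbound traffic is denied
```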
Then I gave access to all pods in the same namespace, on all ports:
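Presumably with a policy along these lines (again a sketch; the name is assumed):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace  # name assumed
spec:
  podSelector: {}        # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}    # allow traffic from any pod in the same namespace
```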
After this I can access the service from pods inside the namespace, but the LoadBalancer is not reachable; it says OutOfService (it seems that the deny-all affects something else).
Any luck with solving this? I am facing the same issue.
Same here! Any joy?
Looks like a lot of people are getting stuck with this, and I don't think I have the answers. :) I think the answer might be "depends on the implementation", as the spec doesn't clearly explain this. I don't think the spec even explains whether the port number of the container or of the Service in front of it should be used. So I'm sorry, but I don't know enough to help here. Any help is appreciated.
Isn't the issue here that by specifying:
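Presumably a podSelector-only rule, roughly like this (a sketch; the "any TCP port" detail comes from the explanation below):

```yaml
ingress:
- from:
  - podSelector: {}    # only pods, so no external clients
  ports:
  - protocol: TCP      # no port given, so any TCP port
```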
in the 'from' section you are indicating you only wish to receive from pods and therefore not from external clients? It allows all pods in namespace nexus-test to receive traffic from all pods in the same namespace on any TCP port and denies inbound traffic to all pods in namespace nexus-test from other namespaces (and IP blocks). I wonder if instead you should use something like this:
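A minimal sketch of that combined rule:

```yaml
ingress:
- from:
  - ipBlock:
      cidr: 0.0.0.0/0   # cluster-external clients
  - podSelector: {}     # pods in the same namespace
```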
which will allow the pods in nexus-test to receive traffic both from other pods in the same namespace and from external clients.
Bearing in mind that https://kubernetes.io/docs/concepts/services-networking/network-policies/ states: "ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs."
I think the issue here may be because you are not specifying a namespace in the metadata section. So when you remove:
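That is, a from block roughly like this (a sketch; the labels are taken from the discussion below):

```yaml
from:
- podSelector:
    matchLabels:
      app: bookstore
```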
The rule is now a blocking rule: it denies inbound traffic to pods in the target namespace with labels app: bookstore and role: api. Conversely, with the 'from' section as you had it specified, the rule allows pods in the target namespace with labels app: bookstore and role: api to receive traffic from pods in the same namespace with labels app: bookstore on all ports. However, I don't believe this permits access from outside the cluster, so I think the following is required:
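A sketch of such a policy; the namespace my-namespace and the name api-allow are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow          # name illustrative
  namespace: my-namespace  # namespace illustrative
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0    # any source IP, including external clients
    - podSelector:
        matchLabels:
          app: bookstore   # pods in the same namespace with this label
```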
Which means the rule allows pods in namespace my-namespace with labels app: bookstore and role: api to receive traffic from subnet 0.0.0.0/0 on all ports, and allows pods in namespace my-namespace with labels app: bookstore and role: api to receive traffic from pods in the same namespace with labels app: bookstore on all ports.
But this method would allow traffic from all namespaces, since we have specified 0.0.0.0/0.
I have the same issue.
I thought so too; however, it doesn't appear to be the case for me (GKE with DPv2). Specifying 0.0.0.0/0 allows external traffic over the LoadBalancer, but not traffic from Pods in the cluster, unless you add a separate podSelector rule to cover them. If I specify:
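That is, an ipBlock-only rule, roughly:

```yaml
ingress:
- from:
  - ipBlock:
      cidr: 0.0.0.0/0   # matched external traffic only, in this cluster's behavior
```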
This is enough to allow external traffic, without also directly allowing traffic from other Pods in the cluster. Not sure if this is a bug or intentional behavior. Since the Service of type LoadBalancer is routing via the Node, the other way, if you exclude the internal network from 0.0.0.0/0, is to specifically allow the Node CIDR range, like so:
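A sketch with illustrative CIDR values; the actual internal and node ranges depend on the cluster:

```yaml
ingress:
- from:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.0.0.0/8         # exclude the internal network (example value)
  - ipBlock:
      cidr: 10.128.0.0/24  # node CIDR range (example value)
```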
But again, this actually doesn't seem needed in my testing.
When I apply a network policy restricting all pod-to-pod traffic, Service.type=LoadBalancer keeps working for a while, and a few minutes later it stops working. Once I remove the network policy, it still keeps spinning and doesn't load in the browser (or via curl). Health checks seem fine, though.
Repro:
```
kubectl run apiserver --image=nginx --labels app=bookstore,role=api --expose --port 80
kubectl expose deploy/apiserver --type=LoadBalancer --name=apiserver-external
```
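Then apply an allow policy roughly like this (a sketch; the name is assumed, and the labels match the pods created above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow   # name assumed
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
```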
Next, remove the from: section.