
Delay in SecurityPolicy change propagation for HTTPRoute when using targetSelectors #4278

Closed · Fixed by #4279
luvk1412 opened this issue Sep 18, 2024 · 3 comments
Labels: kind/bug (Something isn't working), provider/kubernetes (Issues related to the Kubernetes provider)
Milestone: v1.2.0-rc1

@luvk1412 (Contributor)

I am seeing a delay in SecurityPolicy propagation when I switch an HTTPRoute from one SecurityPolicy to another, in the case where the policies are applied to HTTPRoutes via labels (targetSelectors).

I have tried a few things to narrow down the cases in which this happens. Suppose I have the following resources:

  • sp-1 with a targetSelectors entry for HTTPRoute matching the label sp: sp-1
  • sp-2 with a targetSelectors entry for HTTPRoute matching the label sp: sp-2
  • HTTPRoute: route-1

Then, for the above:

  • If route-1 has sp-1 applied via the label sp: sp-1 and I change the label to sp: sp-2 (to apply sp-2 to route-1), it takes a considerable amount of time for the change to propagate. I am verifying whether the change has propagated via egctl c envoy-proxy route.
  • If route-1 has sp-1 applied via the label sp: sp-1 and I make a change to sp-1 itself and apply it, the policy change is propagated to the route immediately.
  • If route-1 has sp-1 applied via a targetRefs entry for route-1 in sp-1, then both changing the policy and moving the targetRefs to sp-2 propagate immediately.

So only in the first case, where the policy is applied to a route via targetSelectors and I switch from one existing SecurityPolicy to another, do I see a delay in propagation, and the delay can be several minutes. I want to know whether this delay is expected and, if so, whether there is a way to reduce it.

This is reproducible on my local setup using the latest dev version of Envoy Gateway.

@arkodg (Contributor) commented Sep 18, 2024

Thanks for finding this @luvk1412! It looks like the current predicate only takes ObservedGeneration into consideration:

// Watch HTTPRoute CRUDs and process affected Gateways.

We probably also need to reconcile when the route labels change.
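
A minimal sketch of the difference, assuming controller-runtime's stock predicates (this is illustrative only, not Envoy Gateway's actual watch code): a label-only edit does not bump metadata.generation, so a generation-based predicate filters the update event out, while LabelChangedPredicate lets it through.

```go
package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	gwapiv1 "sigs.k8s.io/gateway-api/apis/v1"
)

func demo() {
	// route-1 labeled for sp-1; a label-only edit re-points it at sp-2
	// without bumping metadata.generation (only spec changes bump it).
	oldRoute := &gwapiv1.HTTPRoute{ObjectMeta: metav1.ObjectMeta{
		Name:       "route-1",
		Generation: 3,
		Labels:     map[string]string{"sp": "sp-1"},
	}}
	newRoute := oldRoute.DeepCopy()
	newRoute.Labels = map[string]string{"sp": "sp-2"}

	e := event.UpdateEvent{ObjectOld: oldRoute, ObjectNew: newRoute}

	// Generation is unchanged, so a generation-based predicate drops the event.
	fmt.Println(predicate.GenerationChangedPredicate{}.Update(e)) // false
	// The label maps differ, so LabelChangedPredicate would trigger reconciliation.
	fmt.Println(predicate.LabelChangedPredicate{}.Update(e)) // true
}
```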

@arkodg added the help wanted (Extra attention is needed) and provider/kubernetes (Issues related to the Kubernetes provider) labels and removed the triage label on Sep 18, 2024
@arkodg added this to the v1.2.0-rc1 milestone on Sep 18, 2024
@arkodg (Contributor) commented Sep 18, 2024

This seems like a simple fix: we need to add https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/predicate#LabelChangedPredicate. @luvk1412, interested in taking a stab at it?
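
For reference, a minimal sketch of what that could look like with the controller-runtime builder (generic wiring, not Envoy Gateway's actual provider code; setupHTTPRouteWatch and the reconciler are placeholders): the existing generation-based filter is combined with LabelChangedPredicate via predicate.Or, so label-only updates are no longer dropped and targetSelector matches are re-evaluated promptly.

```go
package example

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	gwapiv1 "sigs.k8s.io/gateway-api/apis/v1"
)

// setupHTTPRouteWatch is a placeholder showing only the predicate combination.
func setupHTTPRouteWatch(mgr ctrl.Manager, r reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&gwapiv1.HTTPRoute{}, builder.WithPredicates(predicate.Or(
			// Keep reconciling on spec changes (generation bumps)...
			predicate.GenerationChangedPredicate{},
			// ...and also when only the route's labels change.
			predicate.LabelChangedPredicate{},
		))).
		Complete(r)
}
```

With predicate.Or, either condition lets the update event through to the reconciler, which can then recompute which SecurityPolicies select the route.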

@luvk1412 (Contributor, Author)

@arkodg Sure, why not, I can give this a try. You can assign it to me.

@arkodg added the kind/bug (Something isn't working) label and removed the help wanted (Extra attention is needed) label on Sep 18, 2024