Added haproxy sidecar in its most basic form #39
base: main
Conversation
Still a WIP
Fixed typo in configMapName
Preserved the newline at the end of proxy-sidecar-config.yaml (haproxy requires a newline at EOF)
Updated VCL to only use a local_haproxy backend
Added defaults as well
So far it looks good; I left a few comments.
Also, do you plan on making a default haproxy config? We either need to do that, or require users to reference a ConfigMap with their haproxy config and check whether it exists (or just fail if it doesn't). I'm also not sure what a sensible default config would look like.
Makefile (Outdated)
cp Dockerfile ./bundle/Dockerfile
cp Dockerfile.* ./bundle/.
I must have forgotten to check in the bundle Docker image. The line here is correct; it's the file that's missing. I'll create a PR with it.
api/v1alpha1/varnishcluster_types.go (Outdated)
// +kubebuilder:validation:Enum=Always;Never;IfNotPresent
ImagePullPolicy v1.PullPolicy `json:"imagePullPolicy,omitempty"`
ImagePullSecret string `json:"imagePullSecret,omitempty"`
Resources *v1.ResourceRequirements `json:"resources,omitempty"`
It should be safe to have it as a value rather than a pointer. That way you won't need to deal with nil checks and dereferences. The Kubernetes API also accepts a value, not a pointer.
r.createVarnishSharedVolumeMount(false),
r.createVarnishSettingsVolumeMount(true),
r.createVarnishSecretVolumeMount(),
These and other similar methods don't use anything from the reconcile struct, so it should be safe to have them as standalone functions.
I think the only way we can make a default haproxy.cfg is to templatize the DNS name or server lines. I'm not sure how generic we can make it. In that case, our default config would basically proxy SSL traffic for us (which is all we want). My biggest concern is that users can still provide whatever config they need on top of that. I'm also fine with requiring a ConfigMap name; we could still ship the proxy config as an example in that case. :)
Not sure if I should have used singletons here, as I'm not sure how they are viewed in Go. I could create the objects easily enough...
Tomash is addressing this in his PR (#42) by adding the appropriate Makefile.
May need to loosen script mounting to account for Lua scripts
Added varnish-operator debugging document
Toughened up the handling and usage of `.Values.replicas`
Added metrics support to haproxy.cfg
Added metrics port to haproxy container and service monitor
Updated BeforeSuite function based on Ginkgo changes
Added port definition to service for haproxy metrics
Confirmed metrics are making it to Prometheus
Imported dashboard is working
# Conflicts:
#	go.mod
#	go.sum
#	pkg/varnishcluster/controller/suite_test.go
#	tests/main_test.go
# Conflicts:
#	.github/workflows/e2e-tests.yml
#	api/v1alpha1/zz_generated.deepcopy.go
#	config/crd/bases/caching.ibm.com_varnishclusters.yaml
#	go.mod
#	go.sum
#	pkg/varnishcluster/controller/varnishcluster_controller.go
#	pkg/varnishcluster/controller/varnishcluster_statefulset.go
#	varnish-operator/crds/varnishcluster.yaml
Added support for `http-chk` array to the template
Added `.BackendAdditionalFlags` to the template
TODO: hook in flags in config/code
# Conflicts:
#	pkg/varnishcluster/controller/varnishcluster_statefulset.go
#	pkg/varnishcontroller/controller/controller.go
Fixed several issues with the haproxy template
Added defaults to many of the haproxy config settings
Mounting the ConfigMap and comparing hashes doesn't work, because the actual ConfigMap is updated well after the reconcile loop has completed in the varnishcluster controller. We'll have to go back to a technique similar to the one used before, but need to either add an init container or startup logic that ensures the haproxy.cfg file is in place before haproxy starts.
Responsible for creating the initial haproxy.cfg
On change, the haproxy sidecar compares the haproxy.cfg on disk with the ConfigMap; if they differ, haproxy is sent a SIGHUP, which forces it to re-read its configuration
This actually restarts the haproxy child process, so requests in flight during this time may be lost if not retried
Added procps to varnishcluster controller image
Created haproxy Dockerfile so haproxy can run as the varnish user, which is required for it to be responsive to SIGHUPs sent from the varnishcluster controller container, since that container also runs as the varnish user
Removed some debug log statements
Added fix for e2e test failure (nil HaproxySidecar object, because the defaulting logic isn't invoked in tests)
Still a WIP