
kube-vip-watcher

The script watches pods that have the annotation kubeVipBalanceIP: "true" set. The services and pods also need a matching app label set, for example app: logstash-buffer.

Additionally, the configured service(s) need the annotation kubeVipBalancePriority: "vkube-6, vkube-4, vkube-5", which lists, in order, the nodes where the VIP should be hosted. In this example the Kubernetes node vkube-6 is the primary node. If it is not reachable, or the pod on that node has an issue, the watch script looks for a pod on the next defined node and moves the VIP there.

This is particularly useful for pods whose services need kube-vip's externalTrafficPolicy: Local option, and for better load balancing to pods running on different nodes and exposed via multiple VIPs (i.e. round-robin DNS).
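A rough sketch of how the pieces fit together (the VIP 192.168.1.50 and the port below are made up for illustration; see the workload-examples folder for complete manifests):

apiVersion: v1
kind: Service
metadata:
  name: logstash-buffer-vip-1
  annotations:
    # preferred nodes for this VIP, highest priority first
    kubeVipBalancePriority: "vkube-6, vkube-4, vkube-5"
    kube-vip.io/loadbalancerIPs: "192.168.1.50"  # made-up VIP
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # traffic only reaches pods on the node currently holding the VIP
  selector:
    app: logstash-buffer        # must match the pods' app label
  ports:
    - port: 5044                # made-up port
      targetPort: 5044
---
# pod template excerpt of the matching StatefulSet/Deployment
metadata:
  labels:
    app: logstash-buffer        # matched by the service selector above
  annotations:
    kubeVipBalanceIP: "true"    # tells the watcher to manage the VIPs for this pod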

kube-vip itself must be running with the svc_election option set to true.
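With a typical kube-vip DaemonSet manifest this is set as an environment variable on the kube-vip container, roughly like this excerpt (the remaining env vars depend on your setup):

# excerpt from the kube-vip container spec
env:
  - name: svc_enable    # enable services-based load balancing
    value: "true"
  - name: svc_election  # per-service leader election, needed by the watcher
    value: "true"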

Successfully tested with kube-vip v0.8.2 and kube-vip-cloud-provider up to v0.0.10 - so far I've seen no real problems.

Workload examples

See the workload-examples folder.

Prerequisites: a Kubernetes cluster with 3 nodes named vkube-4, vkube-5, and vkube-6

Logstash-Buffer

  • File: logstash-buffer-statefulset-example.yaml
  • Type: StatefulSet
  • Traffic-Policy: Local

A full example with a Logstash config: a StatefulSet with 3 pods, each with one persistent volume on one Kubernetes node. Create the required partitions as configured in the corresponding YAML manifest parts before applying.

Echo-Server

  • Type: Deployment
  • Traffic-Policy: Local or Cluster (switch it in the YAML by commenting/uncommenting)

A simple echoserver example.

Known Issues

  • Possibly a few test cases are not covered.
  • Currently all logs are written to the console, so if you also send logs via syslog and scrape the pod logs, log entries might be duplicated.
  • A better way to check whether rebalancing is really needed is still missing :| - currently the lease is patched in some cases even though it is not strictly necessary.

Fixed Issues

  • kube-vip/kube-vip#563 seems to be fixed - as noted above, the latest tests with kube-vip v0.8.2 and kube-vip-cloud-provider v0.0.10 running on Kubernetes v1.29.7 look fine. There is also no longer any need to set both the annotation kube-vip.io/loadbalancerIPs and spec.loadBalancerIP to the same VIP in services; using only kube-vip.io/loadbalancerIPs works now.
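So a service manifest now only needs the annotation, roughly like this sketch (the VIP, port, and app label are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.1.51"  # made-up VIP; spec.loadBalancerIP is no longer required
spec:
  type: LoadBalancer
  selector:
    app: echo-server   # assumed app label
  ports:
    - port: 8080       # made-up port
      targetPort: 8080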

Libraries for Logging and Locking

One of my first "modules" - it works, but can certainly be done better.

Prerequisites

cplogging.py

If you want to use it as a standalone plugin, the following Python module is needed:

pip install coloredlogs

lockJob.py

If you want to use it, you need cplogging.py. Besides that, this plugin also uses a few settings from settings.py.

Examples

Have a look at example.py - it is well documented.

settings.py

Used for defining default log values. They can be overridden in your main program - see example.py for more info.

Additional Notes

Filebeat autodiscover-example

If you are using Filebeat to scrape the Kubernetes pods' logs and have set logging to JSON format, you might add something like this to your autodiscover section.

...
autodiscover:
  providers:
  - type: kubernetes
    node: ${NODE_NAME}
    templates:
      ...
      # we scrape "ingress-nginx" logs and use the special available module
      - condition:
          equals:
            kubernetes.labels.app_kubernetes_io/name: "ingress-nginx"
        config:
          - module: nginx
            ingress_controller:
              input:
                type: container
                stream: stdout
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
            error:
              input:
                type: container
                stream: stderr
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
      # this will make filebeat scrape the "kube-vip-watcher"-logs and decode the JSON-message
      - condition:
          equals:
            kubernetes.container.name: "kube-vip-watcher"
        config:
          - type: container
            paths:
              - "/var/log/containers/*${data.kubernetes.container.id}.log"
            processors:
              - decode_json_fields:
                  fields: ["message"]
                  max_depth: 3
                  target: ""
                  overwrite_keys: true
      # fallback "condition" - scrape everything else as normal container
      - condition.and:
          ...
          # "ingress-nginx" is already handled by its own rule above
          - not.equals:
              kubernetes.labels.app_kubernetes_io/name: "ingress-nginx"
          # kube-vip-watcher is already handled by its own rule above
          - not.equals:
              kubernetes.container.name: "kube-vip-watcher"
        config:
          - type: container
            paths:
              - "/var/log/containers/*${data.kubernetes.container.id}.log"
...

Changelog

  • v0.07 - initial release
  • v0.08 - 2023-02-20
    • added better logging for easier debugging
    • fixed handling of restarting pods by adding reconnect handling inside the script
  • v0.09 - 2023-05-09
    • fixed balancing for multiple services pointing to the same workload with different VIPs
    • improved logging output a little bit
  • v0.10 - 2023-05-16
    • added a fix so it should also work with the new annotation (see "Fixed Issues")
    • updated workload-examples to reflect notes from "Known Issues"
  • v0.11 - 2024-08-09
    • set the connection timeout for the Kubernetes API to 1800 seconds - this should fix unresponsive kube-vip-watcher pods
    • updated the Dockerfile to use the latest available Python image, python:3.12.5-slim-bookworm
