EKS default setup clusters are not benchmarked correctly (config path missing) #1636

Open
mmuth opened this issue Jun 27, 2024 · 0 comments · May be fixed by #1637
mmuth commented Jun 27, 2024

Overview
I just upgraded my AWS EKS cluster to Kubernetes 1.29. Afterwards, kube-bench reports three new findings that were not reported on 1.28:

[FAIL] 3.2.1 Ensure that the Anonymous Auth is Not Enabled (Automated)
[FAIL] 3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[FAIL] 3.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)

How did you run kube-bench?

---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:v0.7.3
          command:
            - "kube-bench"
            - "run"
            - "--benchmark"
            - "eks-1.2.0"
          volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: var-lib-kubelet
          hostPath:
            path: "/var/lib/kubelet"
        - name: etc-systemd
          hostPath:
            path: "/etc/systemd"
        - name: etc-kubernetes
          hostPath:
            path: "/etc/kubernetes"
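
A minimal sketch of how the Job can be applied and the report read afterwards (the manifest file name is just illustrative):

kubectl apply -f kube-bench-job.yaml
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
kubectl logs job/kube-bench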

What happened?
kube-bench reports these checks as failed even though the configuration is correct, i.e. they are false positives.

What did you expect to happen:
It should return [PASS] for the checks mentioned above.
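
For illustration, the expected output for these checks would be along these lines (same check IDs and titles, just passing):

[PASS] 3.2.1 Ensure that the Anonymous Auth is Not Enabled (Automated)
[PASS] 3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 3.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)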

Environment

kube-bench version: docker.io/aquasec/kube-bench:v0.7.3
Kubernetes version: AWS EKS 1.29, almost "default configuration"

Running processes

root        1989       1  2 Jun25 ?        00:55:11 /usr/bin/kubelet --cloud-provider=external --hostname-override=ip-10******.eu-central-1.compute.internal --config=/etc/kubernetes/kubelet/config.json --config-dir=/etc/kubernetes/kubelet/config.json.d --kubeconfig=/var/lib/kubelet/kubeconfig --image-credential-provider-bin-dir=/etc/eks/image-credential-provider --image-credential-provider-config=/etc/eks/image-credential-provider/config.json --node-ip=10.***** --node-labels=karpenter.sh/capacity-type=on-demand,karpenter.sh/nodepool=eks-nodes-default
nobody      2279    2059  0 Jun25 ?        00:02:39 /bin/node_exporter --path.procfs=/host/proc --path.sysfs=/host/sys --path.rootfs=/host/root --path.udev.data=/host/root/run/udev/data --web.listen-address=[0.0.0.0]:9100 --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/) --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
root        2291    2064  0 Jun25 ?        00:00:23 kube-proxy --v=2 --config=/var/lib/kube-proxy-config/config --hostname-override=ip-10-********.eu-central-1.compute.internal
root        3060    2933  0 Jun25 ?        00:00:00 /csi-node-driver-registrar --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock --v=2
ec2-user    5555    5081  0 Jun25 ?        00:01:46 /csi-provisioner --timeout=60s --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock --v=2 --feature-gates=Topology=true --extra-create-metadata --leader-election=true --default-fstype=ext4 --kube-api-qps=20 --kube-api-burst=100 --worker-threads=100
ec2-user    5691    5081  0 Jun25 ?        00:00:47 /csi-attacher --timeout=60s --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock --v=2 --leader-election=true --kube-api-qps=20 --kube-api-burst=100 --worker-threads=100
ec2-user    5750    5081  0 Jun25 ?        00:00:45 /csi-snapshotter --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock --leader-election=true --extra-create-metadata --kube-api-qps=20 --kube-api-burst=100 --worker-threads=100
ec2-user    5889    5081  0 Jun25 ?        00:00:49 /csi-resizer --timeout=60s --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock --v=2 --handle-volume-inuse-error=false --leader-election=true --kube-api-qps=20 --kube-api-burst=100 --workers=100
nobody     32132   31904  0 Jun25 ?        00:04:20 /kube-state-metrics --port=8080 --telemetry-port=8081 --port=8080 --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments

Configuration files

I am mostly skipping this section, since the actual problem is that the config file is not found at all. Here is one snippet from the kubelet config relevant to the findings above:

 "protectKernelDefaults": true,

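For context, the relevant part of the kubelet config (/etc/kubernetes/kubelet/config.json, the file referenced by --config in the process listing above) looks roughly like the following. This is an illustrative excerpt trimmed to the three checks; apart from the protectKernelDefaults value quoted above, the values shown are simply what the checks expect rather than a verbatim copy of my file:

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook"
  },
  "protectKernelDefaults": true
}
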
Anything else you would like to add:
I debugged this by comparing our staging and production clusters (old vs. new version).
My conclusion is that the kubelet config path has changed in the 1.29 version of AWS EKS (the kubelet now runs with --config=/etc/kubernetes/kubelet/config.json, see the process listing above) and that this path is not included in kube-bench's list of kubelet config locations.
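
If I read kube-bench's cfg/config.yaml correctly, the fix would be roughly to add the new path to the list of kubelet config locations. A sketch of what I mean (the surrounding structure reflects my reading of the shipped config and may differ slightly):

node:
  kubelet:
    confs:
      # ...existing paths stay as they are...
      # new default location on EKS 1.29 nodes (matches --config in the process listing)
      - /etc/kubernetes/kubelet/config.json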
