Describe the bug
resultsCache is not used.

To Reproduce
Steps to reproduce the behavior: deploy Loki in Distributed mode with the HelmRelease/values shown below.

Expected behavior
resultsCache is used.
Environment:
Screenshots, Promtail config, or terminal output
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: grafana
  namespace: loki-core
spec:
  interval: 24h
  url: https://grafana.github.io/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: loki
  namespace: loki-core
spec:
  interval: 30m
  install:
    remediation:
      retries: 3
  chart:
    spec:
      chart: loki
      version: 6.27.0
      sourceRef:
        kind: HelmRepository
        name: grafana
  values:
    deploymentMode: Distributed
    global:
      image:
        registry: "harbor.corp/dockerhub"
    .globalExtra: &globalExtra
      extraArgs:
        - "-config.expand-env=true"
      extraEnv:
        - name: S3_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: loki-s3
              key: S3_ACCESS_KEY_ID
        - name: S3_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: loki-s3
              key: S3_SECRET_ACCESS_KEY
      tolerations:
        - key: "role"
          operator: "Equal"
          value: "loki"
          effect: "NoSchedule"
      nodeSelector:
        role: loki
    gateway:
      enabled: false
    ingress:
      enabled: true
      ingressClassName: nginx
      annotations:
        cert-manager.io/cluster-issuer: "cluster-issuer"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
      hosts:
        - loki-core-v2.corp
    serviceMonitor:
      enabled: true
    compactor:
      enabled: true
      replicas: 1
      <<: *globalExtra
      persistence:
        enabled: true
        claims:
          - name: data
            size: 50Gi
            storageClass: yc-network-ssd
      resources:
        requests:
          memory: "2Gi"
          cpu: "800m"
        limits:
          memory: "2Gi"
    distributor:
      <<: *globalExtra
      replicas: 1
      autoscaling:
        enabled: true
        minReplicas: 3
        maxReplicas: 7
        targetCPUUtilizationPercentage: 70
        targetMemoryUtilizationPercentage: 70
      resources:
        requests:
          memory: "260Mi"
          cpu: "300m"
        limits:
          memory: "260Mi"
      maxUnavailable: 1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: []
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 99
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: distributor
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: distributor
                topologyKey: topology.kubernetes.io/zone
    ingester:
      <<: *globalExtra
      replicas: 8
      persistence:
        enabled: true
        claims:
          - name: data
            size: 50Gi
            storageClass: yc-network-ssd
      autoscaling:
        enabled: false
      resources:
        requests:
          memory: "5Gi"
          cpu: "600m"
        limits:
          memory: "10Gi"
      maxUnavailable: 1
      zoneAwareReplication:
        zoneA:
          nodeSelector:
            topology.kubernetes.io/zone: ru-central1-a
            role: loki
        zoneB:
          nodeSelector:
            topology.kubernetes.io/zone: ru-central1-b
            role: loki
        zoneC:
          nodeSelector:
            topology.kubernetes.io/zone: ru-central1-d
            role: loki
    indexGateway:
      <<: *globalExtra
      replicas: 3
      enabled: true
      persistence:
        enabled: true
        storageClass: yc-network-ssd
      resources:
        requests:
          memory: "2Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
      maxUnavailable: 1
    querier:
      <<: *globalExtra
      autoscaling:
        enabled: true
        minReplicas: 7
        maxReplicas: 15
        targetCPUUtilizationPercentage: 70
        targetMemoryUtilizationPercentage: 70
      resources:
        requests:
          memory: "3Gi"
          cpu: "2500m"
        limits:
          memory: "5Gi"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: []
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 99
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: querier
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: querier
                topologyKey: topology.kubernetes.io/zone
      maxUnavailable: 1
    queryFrontend:
      <<: *globalExtra
      autoscaling:
        enabled: true
        minReplicas: 2
        maxReplicas: 4
        targetCPUUtilizationPercentage: 60
        targetMemoryUtilizationPercentage: 60
      resources:
        requests:
          memory: "1024Mi"
          cpu: "100m"
        limits:
          memory: "1024Mi"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: []
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 99
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: query-frontend
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: query-frontend
                topologyKey: topology.kubernetes.io/zone
      maxUnavailable: 1
    queryScheduler:
      <<: *globalExtra
      replicas: 1
    loki:
      auth_enabled: false
      storage:
        type: s3
        bucketNames:
          chunks: core-loki
          ruler: core-loki
          admin: core-loki
        s3:
          endpoint: https://storage.cloud.net:443/
          region: ru-central1
          bucketnames: core-loki
          secretAccessKey: "${S3_SECRET_ACCESS_KEY}"
          accessKeyId: "${S3_ACCESS_KEY_ID}"
      tsdb_shipper:
        shared_store: s3
        active_index_directory: /var/loki/tsdb-index
        cache_location: /var/loki/tsdb-cache
      schemaConfig:
        configs:
          - from: "2020-01-01"
            store: tsdb
            object_store: s3
            schema: v12
            index:
              prefix: tsdb_index_
              period: 24h
      distributor:
        ring:
          kvstore:
            store: memberlist
      ingester:
        lifecycler:
          ring:
            kvstore:
              store: memberlist
            replication_factor: 1
        autoforget_unhealthy: true
        chunk_idle_period: 1h
        chunk_target_size: 1572864
        max_chunk_age: 1h
        chunk_encoding: snappy
      server:
        grpc_server_max_recv_msg_size: 4194304
        grpc_server_max_send_msg_size: 4194304
      limits_config:
        allow_structured_metadata: false
        reject_old_samples: true
        reject_old_samples_max_age: 168h
        max_cache_freshness_per_query: 10m
        split_queries_by_interval: 15m
        retention_period: 30d
        max_global_streams_per_user: 0
        ingestion_rate_mb: 30
        query_timeout: 300s
        volume_enabled: true
      memcached:
        chunk_cache:
          enabled: true
        results_cache:
          enabled: true
      rulerConfig:
        remote_write:
          enabled: true
          clients:
            vmagent_local:
              url: http://vmagent-victoria-metrics-k8s-stack.victoria-metrics.svc:8429/api/v1/write?extra_label=cluster=infra
    ruler:
      enabled: true
      replicas: 1
      <<: *globalExtra
      extraArgs:
        - "-config.expand-env=true"
        # loki chart set {{- define "loki.rulerStorageConfig" -}} from global .Values.loki.storage.s3
        - "-ruler.storage.type=local"
        - "-ruler.storage.local.directory=/etc/loki/rules"
      storage:
        type: local
        local:
          directory: /etc/loki/rules
      persistence:
        enabled: true
        size: 10Gi
      ring:
        kvstore:
          store: memberlist
      directories:
        fake:
          rules.txt: |
            groups:
              - name: logs2metrics
                interval: 1m
                rules:
                  - record: panic_count
                    expr: |
                      sum by (app,cluster) (
                        count_over_time({app=~".+",level!~"INFO|info|debug"}|="panic"[5m])
                      )
                  - record: logs_size_by_app
                    expr: |
                      sum by (app,cluster)(
                        bytes_over_time({app=~".+"}[1m])
                      )
    resultsCache:
      allocatedMemory: 4096
    chunksCache:
      allocatedMemory: 12288
    bloomCompactor:
      replicas: 0
    bloomGateway:
      replicas: 0
    backend:
      replicas: 0
    read:
      replicas: 0
    write:
      replicas: 0
    singleBinary:
      replicas: 0
    test:
      enabled: false
    lokiCanary:
      enabled: false
    monitoring:
      serviceMonitor:
        enabled: true
      rules:
        enabled: true
The generated Loki config for chunksCache:
chunk_store_config:
  chunk_cache_config:
    background:
      writeback_buffer: 500000
      writeback_goroutines: 1
      writeback_size_limit: 500MB
    default_validity: 0s
    memcached:
      batch_size: 4
      parallelism: 5
    memcached_client:
      addresses: dnssrvnoa+_memcached-client._tcp.loki-chunks-cache.loki-core.svc
      consistent_hash: true
      max_idle_conns: 72
      timeout: 2000ms
and for resultsCache:
results_cache:
  cache:
    background:
      writeback_buffer: 500000
      writeback_goroutines: 1
      writeback_size_limit: 500MB
    default_validity: 12h
    memcached_client:
      addresses: dnssrvnoa+_memcached-client._tcp.loki-results-cache.loki-core.svc
      consistent_hash: true
      timeout: 500ms
      update_interval: 1m
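For context, one way to see whether the results-cache memcached receives any traffic at all is to look at its stats counters. A sketch, assuming the pod name follows the service name from the addresses above (loki-results-cache-0 in namespace loki-core) and memcached listens on the default port 11211:

    # Forward the memcached port of the results cache (pod name is an assumption)
    kubectl -n loki-core port-forward loki-results-cache-0 11211:11211 &

    # cmd_set / cmd_get counters show whether anything writes to or reads from this cache
    printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | grep -E "cmd_get|cmd_set|get_hits|get_misses"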
Why does chunksCache use a memcached block, while resultsCache only has memcached_client?
Who writes to the resultsCache?
How can I enable read/write logging in memcached?
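On the logging question: memcached itself can print every client command when started with increased verbosity (-vv). Whether the chart's chunksCache/resultsCache sections expose a way to pass that flag is an assumption that would need to be checked against the chart's values.yaml. A sketch:

    # Assumed values override -- verify that the chart's memcached sections
    # actually accept an extraArgs-style field before relying on this:
    #
    #   resultsCache:
    #     extraArgs:
    #       - -vv
    #
    # With -vv, each get/set is written to stdout and shows up in the pod logs:
    kubectl -n loki-core logs loki-results-cache-0 -f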
memcached exists in chunk_store_config.
memcached does not exist in query_range.
@patsevanton I'm having the same issue, thank you for opening this. Also, nice dashboard. Mind sharing where you got it from, or the JSON? Thanks
@earimont-ib I think it's just a generic memcached dashboard from the Internet.
@patsevanton Found it, thank you.