Mitigation Strategy: SASL/Kerberos Authentication
-
Mitigation Strategy: Implement Kerberos authentication for all Kafka clients and brokers.
-
Description:
- Kerberos Setup (if needed): If you don't have a Kerberos infrastructure, set up a Key Distribution Center (KDC).
- Principals and Keytabs: Create Kerberos principals for each Kafka broker and client. Generate keytab files for each principal.
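  For example, with MIT Kerberos the principals and keytabs can be created with `kadmin`; the hostname, realm, and keytab path below are placeholders:
  ```bash
  # Create a service principal for a broker and export its keytab
  kadmin -q "addprinc -randkey kafka/broker1.example.com@EXAMPLE.COM"
  kadmin -q "ktadd -k /etc/security/keytabs/kafka.service.keytab kafka/broker1.example.com@EXAMPLE.COM"
  ```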
- Kafka Broker Configuration:
  ```properties
  # Use SASL_SSL instead of SASL_PLAINTEXT to also encrypt traffic
  security.inter.broker.protocol=SASL_PLAINTEXT
  sasl.mechanism.inter.broker.protocol=GSSAPI
  sasl.kerberos.service.name=kafka
  sasl.enabled.mechanisms=GSSAPI
  ```
  The broker keytab path and principal are supplied through the broker's JAAS configuration (sketched below), not as server.properties settings.
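  A minimal broker JAAS sketch, typically passed via -Djava.security.auth.login.config; the keytab path and principal below are placeholders:
  ```
  // kafka_server_jaas.conf
  KafkaServer {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/etc/security/keytabs/kafka.service.keytab"
      principal="kafka/broker1.example.com@EXAMPLE.COM";
  };
  ```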
- Kafka Client Configuration:
  ```properties
  # Use SASL_SSL instead of SASL_PLAINTEXT to also encrypt traffic
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=GSSAPI
  ```
- JAAS configuration file specifying the client keytab and principal.
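  Alternatively, the JAAS entry can be embedded directly in the client properties via sasl.jaas.config; the keytab path and principal below are placeholders:
  ```properties
  sasl.kerberos.service.name=kafka
  sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/etc/security/keytabs/alice.keytab" \
    principal="alice@EXAMPLE.COM";
  ```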
- Testing: Verify authentication from clients.
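  A quick smoke test with the console consumer, assuming a client.properties file containing the settings above (keytab, principal, and topic are placeholders):
  ```bash
  kinit -kt /etc/security/keytabs/alice.keytab alice@EXAMPLE.COM
  kafka-console-consumer --bootstrap-server broker1.example.com:9093 \
    --topic test-topic --consumer.config client.properties
  ```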
- Keytab Rotation: Implement a keytab rotation process.
-
Threats Mitigated:
- Unauthorized Access (High Severity): Prevents unauthorized clients from connecting.
- Man-in-the-Middle (MitM) Attacks (High Severity): (With TLS) Prevents impersonation.
- Replay Attacks (Medium Severity): Kerberos authenticators and timestamps prevent replayed authentication exchanges.
-
Impact:
- Unauthorized Access: Risk reduced significantly (High to Low).
- Man-in-the-Middle Attacks: Risk reduced significantly (High to Low, with TLS).
- Replay Attacks: Risk reduced significantly (Medium to Low).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]
Mitigation Strategy: TLS/SSL Encryption (Kafka Configuration)
-
Mitigation Strategy: Enable TLS/SSL encryption for Kafka communication (client-to-broker and inter-broker) using Kafka's configuration settings.
-
Description:
- Certificates: Obtain or create TLS certificates.
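  For example, a self-signed setup for testing with keytool (aliases, filenames, and validity are illustrative; use a proper CA in production):
  ```bash
  # Generate the broker key pair
  keytool -keystore kafka.server.keystore.jks -alias broker1 -genkeypair -keyalg RSA -validity 365
  # Trust the CA that signs peer certificates
  keytool -keystore kafka.server.truststore.jks -alias CARoot -importcert -file ca-cert
  ```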
- Kafka Broker Configuration:
  ```properties
  listeners=PLAINTEXT://:9092,SSL://:9093
  security.inter.broker.protocol=SSL
  ssl.keystore.location=<path to broker keystore>
  ssl.keystore.password=<keystore password>
  ssl.key.password=<key password>
  # Truststore (for client auth or a custom CA)
  ssl.truststore.location=<path to truststore>
  ssl.truststore.password=<truststore password>
  # ssl.client.auth=required (mTLS), requested, or none
  ssl.client.auth=required
  ```
- Kafka Client Configuration:
  ```properties
  security.protocol=SSL
  ssl.truststore.location=<path to truststore>
  ssl.truststore.password=<truststore password>
  ```
- For mTLS, additionally:
  ```properties
  ssl.keystore.location=<path to client keystore>
  ssl.keystore.password=<keystore password>
  ssl.key.password=<key password>
  ```
- Testing: Verify secure connections.
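  One way to inspect the handshake and the certificate the broker presents (host and port assumed):
  ```bash
  openssl s_client -connect broker1.example.com:9093 </dev/null
  ```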
- Renewal: Implement certificate renewal.
-
Threats Mitigated:
- Eavesdropping (High Severity): Prevents data interception.
- Man-in-the-Middle (MitM) Attacks (High Severity): Prevents impersonation (with authentication).
- Data Tampering (High Severity): Ensures data integrity in transit.
-
Impact:
- Eavesdropping: Risk reduced significantly (High to Low).
- Man-in-the-Middle Attacks: Risk reduced significantly (High to Low, with authentication).
- Data Tampering: Risk reduced significantly (High to Low).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]
Mitigation Strategy: Access Control Lists (ACLs)
-
Mitigation Strategy: Implement granular ACLs using Kafka's built-in authorization mechanism.
-
Description:
- Identify Resources and Principals: Determine resources (topics, groups) and principals (users/groups).
- Define Permissions: Assign specific permissions (Read, Write, Create, etc.) to each principal for each resource.
- kafka-acls Tool: Use the kafka-acls command to manage ACLs:
  ```bash
  kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --operation Read --topic my-topic
  ```
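  On newer Kafka versions the tool can also talk to the brokers directly; for example, granting a consumer group permission (principal, group, and admin.properties are illustrative):
  ```bash
  kafka-acls --bootstrap-server localhost:9092 --command-config admin.properties \
    --add --allow-principal User:alice --operation Read --group my-consumer-group
  ```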
- Enable ACL Authorization: Set authorizer.class.name=kafka.security.authorizer.AclAuthorizer in the broker configuration.
- Testing: Verify ACL enforcement.
- Review: Regularly review and update ACLs.
-
Threats Mitigated:
- Unauthorized Data Access (High Severity): Controls read/write access to topics.
- Unauthorized Topic Creation/Deletion (Medium Severity): Controls topic management.
- Unauthorized Consumer Group Operations (Medium Severity): Controls group access.
- Privilege Escalation (High Severity): Limits the impact of compromised accounts.
-
Impact:
- Unauthorized Data Access: Risk reduced significantly (High to Low).
- Unauthorized Topic Creation/Deletion: Risk reduced significantly (Medium to Low).
- Unauthorized Consumer Group Operations: Risk reduced significantly (Medium to Low).
- Privilege Escalation: Risk reduced significantly (High to Low/Medium).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]
Mitigation Strategy: Quotas
-
Mitigation Strategy: Implement Kafka quotas to limit client resource consumption.
-
Description:
- Quota Types: Choose which quota types to enforce: produce, fetch, and/or request quotas.
- Define Limits: Set limits (bytes/second or requests/second) for users, clients, or IPs.
- Configure Quotas (Dynamic): Use kafka-configs or ZooKeeper:
  ```bash
  kafka-configs --zookeeper localhost:2181 --alter \
    --add-config 'producer_byte_rate=1048576' \
    --entity-type users --entity-name user1
  ```
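  On newer Kafka versions the same change can be made through the brokers, and current quotas can be inspected (entity names are illustrative):
  ```bash
  kafka-configs --bootstrap-server localhost:9092 --alter \
    --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
    --entity-type clients --entity-name clientA
  kafka-configs --bootstrap-server localhost:9092 --describe \
    --entity-type users --entity-name user1
  ```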
- Monitoring: Monitor quota usage.
- Adjustment: Adjust limits as needed.
-
Threats Mitigated:
- Denial of Service (DoS) (High Severity): Prevents resource exhaustion by clients.
- Resource Exhaustion (Medium Severity): Protects cluster resources.
-
Impact:
- Denial of Service (DoS): Risk reduced significantly (High to Low/Medium).
- Resource Exhaustion: Risk reduced significantly (Medium to Low).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]
Mitigation Strategy: Kafka-Provided Deserializers (and Schema Registry)
-
Mitigation Strategy: Use safe, type-specific deserializers (String, Avro, Protobuf, etc.) and, ideally, a schema registry; this directly controls how consumers interpret untrusted data.
-
Description:
- Avoid Generic Deserializers: Never use java.io.ObjectInputStream.
- Choose Specific Deserializers: Use Kafka's provided deserializers, configured via org.apache.kafka.clients.consumer.ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG and KEY_DESERIALIZER_CLASS_CONFIG:
  - org.apache.kafka.common.serialization.StringDeserializer
  - org.apache.kafka.common.serialization.ByteArrayDeserializer
  - org.apache.kafka.common.serialization.IntegerDeserializer
  - org.apache.kafka.common.serialization.LongDeserializer
  - org.apache.kafka.common.serialization.DoubleDeserializer
  - org.apache.kafka.common.serialization.FloatDeserializer
- Avro: io.confluent.kafka.serializers.KafkaAvroDeserializer (Confluent) or equivalent.
- Protobuf: io.confluent.kafka.serializers.protobuf.KafkaProtobufDeserializer (Confluent) or equivalent.
- Schema Registry (Strongly Recommended): Use a schema registry (Confluent, Apicurio) with Avro or Protobuf. This enforces schema validation within the Kafka client library.
- Configuration: Configure your Kafka consumer to use the appropriate deserializer and, if applicable, the schema registry URL.
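  A minimal consumer sketch, assuming the Confluent Avro deserializer and its schema registry client are on the classpath; the broker address, group, topic, and registry URL are placeholders:
  ```java
  import java.time.Duration;
  import java.util.List;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class SafeConsumerExample {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
          // Type-specific deserializers -- never a generic Java object deserializer.
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroDeserializer");
          // Schema registry URL so records are validated against registered schemas.
          props.put("schema.registry.url", "http://schema-registry:8081");

          try (KafkaConsumer<String, Object> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(List.of("example-topic"));
              for (ConsumerRecord<String, Object> record : consumer.poll(Duration.ofSeconds(5))) {
                  System.out.printf("key=%s value=%s%n", record.key(), record.value());
              }
          }
      }
  }
  ```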
-
Threats Mitigated:
- Deserialization Vulnerabilities (Critical Severity): Prevents code execution via deserialization.
- Data Injection (High Severity): Prevents malicious data from entering the stream.
-
Impact:
- Deserialization Vulnerabilities: Risk reduced significantly (Critical to Low).
- Data Injection: Risk reduced significantly (High to Low).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]
Mitigation Strategy: Kafka Auditing (Using Kafka's Audit Log Capabilities)
-
Mitigation Strategy: Enable and configure Kafka's built-in auditing capabilities (if available in your distribution) to log security-relevant events.
-
Description:
- Check for Audit Log Support: Determine if your Kafka distribution includes built-in audit logging. Some distributions, like Confluent Platform, offer this feature.
- Configure Audit Log Appender: Configure an appender (e.g., a file appender or a Syslog appender) to receive audit log messages. This is typically done in the Kafka broker's log4j.properties file (see the sketch after this list).
- Configure Audit Log Filters: Define filters to specify which events should be logged. You might log all authentication attempts, authorization decisions, topic creation/deletion, etc.
- Centralized Logging: Configure the audit log appender to send logs to a centralized logging system (e.g., Splunk, ELK stack) for analysis and alerting.
- Regular Review: Regularly review audit logs for suspicious activity.
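As an illustration, Apache Kafka's stock log4j.properties already routes authorizer decisions to a dedicated log, which can serve as a basic audit trail; the appender name and file path below follow that stock file:
```properties
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
```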
-
Threats Mitigated:
- Lack of Visibility (Medium Severity): Provides visibility into security-related events.
- Delayed Incident Response (Medium Severity): Enables faster detection and response to security incidents.
- Non-Repudiation (Low Severity): Provides an audit trail of actions performed on the Kafka cluster.
-
Impact:
- Lack of Visibility: Risk reduced significantly (Medium to Low).
- Delayed Incident Response: Risk reduced significantly (Medium to Low).
- Non-Repudiation: Risk reduced (Low to Very Low).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]
Mitigation Strategy: Secure Zookeeper Configuration (If Applicable)
-
Mitigation Strategy: Secure the Zookeeper ensemble used by Kafka, as it's critical for Kafka's operation.
-
Description:
- Authentication: Enable Zookeeper authentication using SASL (Kerberos or other mechanisms).
- ACLs: Configure Zookeeper ACLs to restrict access to Zookeeper nodes.
- Encryption: Encrypt communication between Kafka brokers and Zookeeper using TLS.
- Configuration:
  - Set zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty in the Kafka broker config.
  - Set zookeeper.ssl.client.enable=true in the Kafka broker config.
  - Configure Zookeeper's zoo.cfg with appropriate security settings (see the sketch after this list).
  - Use the zookeeper-security-migration tool if upgrading.
- Network Isolation: Isolate Zookeeper on a separate network (best practice, but not strictly a Kafka configuration).
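A minimal TLS sketch for both sides, assuming ZooKeeper 3.5.5+; ports, paths, and passwords are placeholders. ZooKeeper server (zoo.cfg):
```properties
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.keyStore.location=/path/to/zookeeper.keystore.jks
ssl.keyStore.password=<keystore password>
ssl.trustStore.location=/path/to/zookeeper.truststore.jks
ssl.trustStore.password=<truststore password>
```
Kafka broker (server.properties):
```properties
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.truststore.location=/path/to/kafka.zk.truststore.jks
zookeeper.ssl.truststore.password=<truststore password>
```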
-
Threats Mitigated:
- Unauthorized Access to Zookeeper (Critical Severity): Prevents attackers from manipulating Kafka's metadata.
- Zookeeper Compromise (Critical Severity): Reduces the impact of a Zookeeper compromise.
-
Impact:
- Unauthorized Access to Zookeeper: Risk reduced significantly (Critical to Low).
- Zookeeper Compromise: Risk reduced significantly (Critical to Low/Medium).
-
Currently Implemented: [ Your Project Specific Implementation ]
-
Missing Implementation: [ Your Project Specific Missing Implementation ]