Commit 32a14b7

Committed Nov 21, 2021
🎨 Refactored to use Red Hat AMQ Streams Operators
1 parent f1b2a3f commit 32a14b7

File tree

19 files changed: +189 −172 lines changed
 
````diff
@@ -0,0 +1,27 @@
+# Red Hat AMQ Streams Operator
+
+To deploy the Red Hat AMQ Streams Operators we need a user with the ```cluster-admin``` role:
+
+```shell
+oc login -u admin-user
+```
+
+Deploy the Red Hat AMQ Streams Operators with a Subscription from the Operator Hub:
+
+```shell
+❯ oc apply -f amq-streams-migration-og.yml
+❯ oc apply -f amq-streams-migration-subscription.yml
+```
+
+**NOTE**: This is a *namespaced* installation of the operator.
+
+You can check the status of the subscription with the following command:
+
+```shell
+❯ oc get csv
+NAME                DISPLAY                             VERSION   REPLACES   PHASE
+amqstreams.v1.6.3   Red Hat Integration - AMQ Streams   1.6.3                Succeeded
+```
+
+The [Adding Operators to a cluster](https://docs.openshift.com/container-platform/4.5/operators/olm-adding-operators-to-cluster.html) article
+describes in detail how to install Operators using OperatorHub.
````
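A quick way to act on the `PHASE` column shown in that `oc get csv` output is to parse it with standard shell tools. This is a hedged sketch, not part of the repo: the here-doc value is a captured stand-in, and on a live cluster you would pipe `oc get csv --no-headers` instead:

```shell
# Stand-in for the `oc get csv` row shown above; on a real cluster use e.g.:
#   phase=$(oc get csv --no-headers | awk '/amqstreams/ {print $NF}')
csv_row='amqstreams.v1.6.3   Red Hat Integration - AMQ Streams   1.6.3             Succeeded'

# PHASE is the last whitespace-separated field of the row.
phase=$(printf '%s\n' "$csv_row" | awk '{print $NF}')
echo "$phase"   # prints: Succeeded

# Fail fast if the operator install has not finished.
if [ "$phase" != "Succeeded" ]; then
  echo "Operator not ready yet" >&2
  exit 1
fi
```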

02-target-cluster/01-strimzi-operator/strimzi-migration-og.yml → 01-source-cluster/01-amq-streams-operator/amq-streams-migration-og.yml (+3 −3)
````diff
@@ -2,8 +2,8 @@
 apiVersion: operators.coreos.com/v1
 kind: OperatorGroup
 metadata:
-  name: strimzi-migration-og
-  namespace: strimzi-migration
+  name: amq-streams-migration-og
+  namespace: amq-streams-migration
 spec:
   targetNamespaces:
-    - strimzi-migration
+    - amq-streams-migration
````

02-target-cluster/01-strimzi-operator/strimzi-subscription.yml → 01-source-cluster/01-amq-streams-operator/amq-streams-migration-subscription.yml (+5 −5)
````diff
@@ -2,12 +2,12 @@
 apiVersion: operators.coreos.com/v1alpha1
 kind: Subscription
 metadata:
-  name: strimzi-kafka-operator
-  namespace: strimzi-migration
+  name: amq-streams
+  namespace: amq-streams-migration
 spec:
   channel: stable
   installPlanApproval: Automatic
-  name: strimzi-kafka-operator
-  source: community-operators
+  name: amq-streams
+  source: redhat-operators
   sourceNamespace: openshift-marketplace
-  startingCSV: strimzi-cluster-operator.v0.26.0
+  startingCSV: amqstreams.v1.6.3
````

01-source-cluster/01-strimzi-operator/README.md (−27) — this file was deleted

01-source-cluster/01-strimzi-operator/strimzi-subscription.yml (−13) — this file was deleted

01-source-cluster/02-kafka/README.md (+12 −12)

````diff
@@ -39,8 +39,8 @@ We could check the status of this Apache Kafka cluster with:

 ```shell
 ❯ oc get kafka
-NAME        DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
-event-bus   4                        3
+NAME        DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS   READY   WARNINGS
+event-bus   3                        3
 ```

 To describe the Kafka:
@@ -53,16 +53,16 @@ The following pods will be deployed:

 ```shell
 ❯ oc get pod
-NAME                                                   READY   STATUS    RESTARTS   AGE
-amq-streams-cluster-operator-v1.6.3-58bb6478b9-mh8g9   1/1     Running   0          163m
-event-bus-entity-operator-6dd8bd497c-lv9xn             3/3     Running   0          159m
-event-bus-kafka-0                                      1/1     Running   0          160m
-event-bus-kafka-1                                      1/1     Running   0          160m
-event-bus-kafka-2                                      1/1     Running   0          160m
-event-bus-kafka-exporter-8458898bf-2vspb               1/1     Running   0          158m
-event-bus-zookeeper-0                                  1/1     Running   0          162m
-event-bus-zookeeper-1                                  1/1     Running   0          162m
-event-bus-zookeeper-2                                  1/1     Running   0          162m
+NAME                                                READY   STATUS    RESTARTS   AGE
+event-bus-entity-operator-5b67db696c-msc9w          3/3     Running   0          54s
+event-bus-kafka-0                                   1/1     Running   0          2m36s
+event-bus-kafka-1                                   1/1     Running   0          2m36s
+event-bus-kafka-2                                   1/1     Running   0          2m36s
+event-bus-kafka-exporter-849bfcc8f5-cr89b           1/1     Running   0          13s
+event-bus-zookeeper-0                               1/1     Running   0          4m34s
+event-bus-zookeeper-1                               1/1     Running   0          4m34s
+event-bus-zookeeper-2                               1/1     Running   0          4m34s
+amq-streams-cluster-operator-v1.6.3-58bb479-mh8g9   1/1     Running   0          6m27s
 ```

 References:
````

01-source-cluster/03-kafka-topics/README.md (+5 −5)

````diff
@@ -17,11 +17,11 @@ This command will show the status of the Kafka Topics:

 ```shell
 ❯ oc get kt
-NAME                              CLUSTER     PARTITIONS   REPLICATION FACTOR
-apps.samples.greetings            event-bus   3            3
-apps.samples.greetings.reversed   event-bus   3            3
-monitor.ocp.logs                  event-bus   10           3
-monitor.ocp.metrics               event-bus   10           3
+NAME                              CLUSTER     PARTITIONS   REPLICATION FACTOR   READY
+apps.samples.greetings            event-bus   3            3                    True
+apps.samples.greetings.reversed   event-bus   3            3                    True
+monitor.ocp.logs                  event-bus   10           3                    True
+monitor.ocp.metrics               event-bus   10           3                    True
 ```

 To describe a KafkaTopic:
````
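Each row of the ```oc get kt``` listing above is backed by a `KafkaTopic` custom resource managed by the Topic Operator. A minimal sketch of what the first row could look like — not taken from this repo, and the exact `apiVersion` depends on the AMQ Streams/Strimzi version in use:

```yaml
apiVersion: kafka.strimzi.io/v1beta2   # older AMQ Streams releases use v1beta1
kind: KafkaTopic
metadata:
  name: apps.samples.greetings
  labels:
    strimzi.io/cluster: event-bus      # binds the topic to the event-bus Kafka cluster
spec:
  partitions: 3
  replicas: 3
```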

01-source-cluster/04-kafka-users/README.md (+15 −15)

````diff
@@ -22,7 +22,7 @@ cluster. Definition [here](./users/admin-user-tls.yml).
 * **migration-user-tls**: Super-user (using TLS authentication) to migrate data of the Kafka
 cluster. Definition [here](./users/migration-user-tls.yml).
 * **sample-user-scram**: User (using scram-sha-512 authentication) to produce and consume records
-into ```apps.samples.greetings``` topic. Definition [here](./users/sample-user-scream.yml).
+into ```apps.samples.greetings``` topic. Definition [here](./users/sample-user-scram.yml).
 * **sample-user-tls**: User (using TLS authentication) to produce and consume records
 from ```apps.samples.greetings``` topic. Definition [here](./users/sample-user-tls.yml).
 * **sample-streams-user-tls**: User to produce and consume records to and from ```app.samples.greetings.*``` topics.
@@ -38,13 +38,13 @@ This command will show the status of the Kafka Users:

 ```shell
 ❯ oc get kafkausers
-NAME                      CLUSTER     AUTHENTICATION   AUTHORIZATION
-admin-user-scram          event-bus   scram-sha-512
-admin-user-tls            event-bus   tls
-migration-user-tls        event-bus   tls
-sample-streams-user-tls   event-bus   tls              simple
-sample-user-scram         event-bus   scram-sha-512    simple
-sample-user-tls           event-bus   tls              simple
+NAME                      CLUSTER     AUTHENTICATION   AUTHORIZATION   READY
+admin-user-scram          event-bus   scram-sha-512                    True
+admin-user-tls            event-bus   tls                              True
+migration-user-tls        event-bus   tls                              True
+sample-streams-user-tls   event-bus   tls              simple          True
+sample-user-scram         event-bus   scram-sha-512    simple          True
+sample-user-tls           event-bus   tls              simple          True
 ```

 To describe a Kafka User:
@@ -78,32 +78,32 @@ To decode the password:

 ```shell
 ❯ oc get secret sample-user-scram -o jsonpath='{.data.password}' | base64 -d
-PIPgj8f11S98
+JVDq4gwNjIeU
 ```

 These users could be tested with the following samples:

 * Sample consumer authenticated with the ```sample-user-scram``` user:

 ```shell
-oc run kafka-consumer -n amq-streams-reg1-workshop -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/consumer.properties <<EOF
+oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/consumer.properties <<EOF
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=sample-user-scram password=PIPgj8f11S98;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=sample-user-scram password=JVDq4gwNjIeU;
 EOF
-bin/kafka-console-consumer.sh --bootstrap-server event-bus-reg1-kafka-bootstrap:9092 --topic apps.samples.greetings --consumer.config=/tmp/consumer.properties --group sample-group
+bin/kafka-console-consumer.sh --bootstrap-server event-bus-kafka-bootstrap:9092 --topic apps.samples.greetings --consumer.config=/tmp/consumer.properties --group sample-group
 "
 ```

 * Sample producer authenticated with the ```sample-user-scram``` user:

 ```shell
-oc run kafka-producer -n amq-streams-reg1-workshop -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/producer.properties <<EOF
+oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/producer.properties <<EOF
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=sample-user-scram password=PIPgj8f11S98;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=sample-user-scram password=JVDq4gwNjIeU;
 EOF
-bin/kafka-console-producer.sh --broker-list event-bus-reg1-kafka-bootstrap:9092 --topic apps.samples.greetings --producer.config=/tmp/producer.properties
+bin/kafka-console-producer.sh --broker-list event-bus-kafka-bootstrap:9092 --topic apps.samples.greetings --producer.config=/tmp/producer.properties
 "
 ```
````
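The ```base64 -d``` step in that README is plain decoding, not decryption: the Secret stores the password base64-encoded, so the round trip is lossless. A minimal sketch with a placeholder value (on a cluster the encoded value would come from `oc get secret ... -o jsonpath='{.data.password}'`):

```shell
# Placeholder credential; NOT a real secret from this repo.
plain='JVDq4gwNjIeU'

encoded=$(printf '%s' "$plain" | base64)       # what lands in .data.password
decoded=$(printf '%s' "$encoded" | base64 -d)  # what the README command prints

echo "$decoded"
```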

01-source-cluster/05-sample-apps/README.md (+35 −24)

````diff
@@ -12,10 +12,10 @@ Sample command for producer testing in ```monitor.ocp.metrics``` topic:

 ```shell
 oc run kafka-producer-perf-test-metrics -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/producer.properties <<EOF
-bootstrap.servers=event-bus-reg1-kafka-bootstrap:9092
+bootstrap.servers=event-bus-kafka-bootstrap:9092
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=iJTYvtlqYamz;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=YZlv9tXGjYhF;
 EOF
 bin/kafka-producer-perf-test.sh --topic monitor.ocp.metrics --num-records 1000000 --throughput 5000 --record-size 2048 --print-metrics --producer.config=/tmp/producer.properties
 "
@@ -25,10 +25,10 @@ Sample command for producer testing in ```monitor.ocp.logs``` topic:

 ```shell
 oc run kafka-producer-perf-test-logs -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/producer.properties <<EOF
-bootstrap.servers=event-bus-reg1-kafka-bootstrap:9092
+bootstrap.servers=event-bus-kafka-bootstrap:9092
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=iJTYvtlqYamz;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=YZlv9tXGjYhF;
 EOF
 bin/kafka-producer-perf-test.sh --topic monitor.ocp.logs --num-records 1000000 --throughput 5000 --record-size 2048 --print-metrics --producer.config=/tmp/producer.properties
 "
@@ -51,9 +51,9 @@ Sample command for consumer test in ```monitor.ocp.metrics``` topic:
 oc run kafka-consumer-perf-test-metrics -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/consumer.properties <<EOF
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=LoDH5nqe3hRw;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=YZlv9tXGjYhF;
 EOF
-bin/kafka-consumer-perf-test.sh --broker-list event-bus-reg1-kafka-bootstrap:9092 --topic monitor.ocp.metrics --consumer.config=/tmp/consumer.properties --group monitor-group --from-latest --messages 1000000 --reporting-interval 1000 --show-detailed-stats
+bin/kafka-consumer-perf-test.sh --broker-list event-bus-kafka-bootstrap:9092 --topic monitor.ocp.metrics --consumer.config=/tmp/consumer.properties --group monitor-group --from-latest --messages 1000000 --reporting-interval 1000 --show-detailed-stats
 "
 ```

@@ -63,9 +63,9 @@ Sample command for consumer test in ```monitor.ocp.logs``` topic:
 oc run kafka-consumer-perf-test-logs -ti --image=registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/consumer.properties <<EOF
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=LoDH5nqe3hRw;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=YZlv9tXGjYhF;
 EOF
-bin/kafka-consumer-perf-test.sh --broker-list event-bus-reg1-kafka-bootstrap:9092 --topic monitor.ocp.logs --consumer.config=/tmp/consumer.properties --group monitor-group --from-latest --messages 1000000 --reporting-interval 1000 --show-detailed-stats
+bin/kafka-consumer-perf-test.sh --broker-list event-bus-kafka-bootstrap:9092 --topic monitor.ocp.logs --consumer.config=/tmp/consumer.properties --group monitor-group --from-latest --messages 1000000 --reporting-interval 1000 --show-detailed-stats
 "
 ```
````
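The flags used in those perf-test commands pin down the shape of the test: 1,000,000 records throttled to 5,000 records/s run for about 200 seconds, and at 2,048 bytes per record that is roughly 9 MiB/s of sustained traffic. The arithmetic:

```shell
num_records=1000000   # --num-records
throughput=5000       # --throughput (records per second)
record_size=2048      # --record-size (bytes)

duration=$((num_records / throughput))
mib_per_sec=$((throughput * record_size / 1024 / 1024))

echo "expected duration: ${duration}s"       # prints: expected duration: 200s
echo "expected rate: ~${mib_per_sec} MiB/s"  # prints: expected rate: ~9 MiB/s
```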

````diff
@@ -92,12 +92,15 @@ This application generates `Hello World` messages into ```apps.samples.greetings
 oc apply -f 01-deployment-producer.yml
 ```

-A sample log of this application (oc logs -f hello-world-producer-777b876976-hh5cf):
+A sample log of this application:

 ```log
-2020-06-26 10:44:57 INFO KafkaProducerExample:35 - Sending messages "Hello world - 1040"
-2020-06-26 10:44:58 INFO KafkaProducerExample:35 - Sending messages "Hello world - 1041"
-2020-06-26 10:44:59 INFO KafkaProducerExample:35 - Sending messages "Hello world - 1042"
+❯ oc logs -f hello-world-producer-777b876976-hh5cf
+...
+2021-11-19 10:44:10 INFO KafkaProducerExample:69 - Sending messages "Hello world - 359"
+2021-11-19 10:44:10 INFO KafkaProducerExample:69 - Sending messages "Hello world - 360"
+2021-11-19 10:44:11 INFO KafkaProducerExample:69 - Sending messages "Hello world - 361"
+...
 ```

 ### Streaming Application
@@ -114,9 +117,12 @@ oc apply -f 02-deployment-streams.yml
 A sample log of this application:

 ```log
+❯ oc logs -f hello-world-streams-788d49c5c-mfq42
+...
 21729 [sample-streams-group-d7cc0a0a-184e-489e-a23e-c33919e59341-StreamThread-1] WARN org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=sample-streams-group-d7cc0a0a-184e-489e-a23e-c33919e59341-StreamThread-1-consumer, groupId=sample-streams-group] Offset commit failed on partition apps.samples.greetings-0 at offset 102: This is not the correct coordinator.
 21729 [sample-streams-group-d7cc0a0a-184e-489e-a23e-c33919e59341-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=sample-streams-group-d7cc0a0a-184e-489e-a23e-c33919e59341-StreamThread-1-consumer, groupId=sample-streams-group] Group coordinator event-bus-kafka-1.event-bus-kafka-brokers.amq-streams-demo.svc:9093 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
 21830 [sample-streams-group-d7cc0a0a-184e-489e-a23e-c33919e59341-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=sample-streams-group-d7cc0a0a-184e-489e-a23e-c33919e59341-StreamThread-1-consumer, groupId=sample-streams-group] Discovered group coordinator event-bus-kafka-0.event-bus-kafka-brokers.amq-streams-demo.svc:9093 (id: 2147483647 rack: null)
+...
 ```

 ### Consumer Application
@@ -132,16 +138,21 @@ oc apply -f 03-deployment-consumer.yml
 A sample log of this application:

 ```log
-2020-06-26 10:54:45 INFO KafkaConsumerExample:43 - Received message:
-2020-06-26 10:54:45 INFO KafkaConsumerExample:44 - partition: 1
-2020-06-26 10:54:45 INFO KafkaConsumerExample:45 - offset: 597
-2020-06-26 10:54:45 INFO KafkaConsumerExample:46 - value: "549 - dlrow olleH"
-2020-06-26 10:54:46 INFO KafkaConsumerExample:43 - Received message:
-2020-06-26 10:54:46 INFO KafkaConsumerExample:44 - partition: 0
-2020-06-26 10:54:46 INFO KafkaConsumerExample:45 - offset: 913
-2020-06-26 10:54:46 INFO KafkaConsumerExample:46 - value: "649 - dlrow olleH"
-2020-06-26 10:54:46 INFO KafkaConsumerExample:43 - Received message:
-2020-06-26 10:54:46 INFO KafkaConsumerExample:44 - partition: 1
-2020-06-26 10:54:46 INFO KafkaConsumerExample:45 - offset: 598
-2020-06-26 10:54:46 INFO KafkaConsumerExample:46 - value: "749 - dlrow olleH"
+❯ oc logs -f hello-world-consumer-54bb9d7775-6dwbw
+...
+2021-11-19 10:43:40 INFO KafkaConsumerExample:47 - Received message:
+2021-11-19 10:43:40 INFO KafkaConsumerExample:48 - partition: 1
+2021-11-19 10:43:40 INFO KafkaConsumerExample:49 - offset: 65
+2021-11-19 10:43:40 INFO KafkaConsumerExample:50 - value: "003 - dlrow olleH"
+2021-11-19 10:43:40 INFO KafkaConsumerExample:52 - headers:
+2021-11-19 10:43:41 INFO KafkaConsumerExample:47 - Received message:
+2021-11-19 10:43:41 INFO KafkaConsumerExample:48 - partition: 2
+2021-11-19 10:43:41 INFO KafkaConsumerExample:49 - offset: 171
+2021-11-19 10:43:41 INFO KafkaConsumerExample:50 - value: "103 - dlrow olleH"
+2021-11-19 10:43:41 INFO KafkaConsumerExample:52 - headers:
+2021-11-19 10:43:41 INFO KafkaConsumerExample:47 - Received message:
+2021-11-19 10:43:41 INFO KafkaConsumerExample:48 - partition: 0
+2021-11-19 10:43:41 INFO KafkaConsumerExample:49 - offset: 70
+2021-11-19 10:43:41 INFO KafkaConsumerExample:50 - value: "203 - dlrow olleH"
+2021-11-19 10:43:41 INFO KafkaConsumerExample:52 - headers:
 ```
````
````diff
@@ -0,0 +1,27 @@
+# Red Hat AMQ Streams Operator
+
+To deploy the Red Hat AMQ Streams Operators we need a user with the ```cluster-admin``` role:
+
+```shell
+oc login -u admin-user
+```
+
+Deploy the Red Hat AMQ Streams Operators with a Subscription from the Operator Hub:
+
+```shell
+❯ oc apply -f amq-streams-migration-og.yml
+❯ oc apply -f amq-streams-migration-subscription.yml
+```
+
+**NOTE**: This is a *namespaced* installation of the operator.
+
+You can check the status of the subscription with the following command:
+
+```shell
+❯ oc get csv
+NAME                DISPLAY                             VERSION   REPLACES            PHASE
+amqstreams.v1.8.0   Red Hat Integration - AMQ Streams   1.8.0     amqstreams.v1.7.3   Succeeded
+```
+
+The [Adding Operators to a cluster](https://docs.openshift.com/container-platform/4.8/operators/admin/olm-adding-operators-to-cluster.html) article
+describes in detail how to install Operators using OperatorHub.
````

01-source-cluster/01-strimzi-operator/strimzi-migration-og.yml → 02-target-cluster/01-amq-streams-operator/amq-streams-migration-og.yml (+3 −3)
````diff
@@ -2,8 +2,8 @@
 apiVersion: operators.coreos.com/v1
 kind: OperatorGroup
 metadata:
-  name: strimzi-migration-og
-  namespace: strimzi-migration
+  name: amq-streams-migration-og
+  namespace: amq-streams-migration
 spec:
   targetNamespaces:
-    - strimzi-migration
+    - amq-streams-migration
````
````diff
@@ -0,0 +1,13 @@
+---
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: amq-streams
+  namespace: amq-streams-migration
+spec:
+  channel: stable
+  installPlanApproval: Automatic
+  name: amq-streams
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+  startingCSV: amqstreams.v1.8.0
````

02-target-cluster/01-strimzi-operator/README.md (−27) — this file was deleted

02-target-cluster/02-kafka/README.md (+16 −16)

````diff
@@ -19,11 +19,11 @@ This cluster is available with the following services:

 ```shell
 $ oc get svc
-NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
-event-bus-kafka-bootstrap    ClusterIP   172.30.220.206   <none>        9091/TCP,9092/TCP,9093/TCP            85m
-event-bus-kafka-brokers      ClusterIP   None             <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   85m
-event-bus-zookeeper-client   ClusterIP   172.30.116.168   <none>        2181/TCP                              87m
-event-bus-zookeeper-nodes    ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP            87m
+NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
+event-bus-kafka-bootstrap    ClusterIP   172.30.20.58     <none>        9091/TCP,9092/TCP,9093/TCP            84s
+event-bus-kafka-brokers      ClusterIP   None             <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   84s
+event-bus-zookeeper-client   ClusterIP   172.30.119.58    <none>        2181/TCP                              3m5s
+event-bus-zookeeper-nodes    ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP            3m5s
 ```

 The ```event-bus-kafka-bootstrap``` service is used to connect the producers and consumers with this cluster.
@@ -50,18 +50,18 @@ The following pods will be deployed:

 ```shell
 ❯ oc get pod
-NAME                                                   READY   STATUS    RESTARTS   AGE
-amq-streams-cluster-operator-v1.8.0-66df9f665f-vlvhh   1/1     Running   0          89m
-event-bus-entity-operator-845cb76bdf-qh95r             3/3     Running   0          84m
-event-bus-kafka-0                                      1/1     Running   0          86m
-event-bus-kafka-1                                      1/1     Running   0          86m
-event-bus-kafka-2                                      1/1     Running   0          86m
-event-bus-kafka-exporter-5d75b58fc4-v6cnp              1/1     Running   0          83m
-event-bus-zookeeper-0                                  1/1     Running   0          88m
-event-bus-zookeeper-1                                  1/1     Running   0          88m
-event-bus-zookeeper-2                                  1/1     Running   0          88m
+NAME                                                READY   STATUS    RESTARTS   AGE
+event-bus-entity-operator-5fb8465fd5-s4zs2          3/3     Running   0          55s
+event-bus-kafka-0                                   1/1     Running   0          2m30s
+event-bus-kafka-1                                   1/1     Running   0          2m30s
+event-bus-kafka-2                                   1/1     Running   0          2m30s
+event-bus-kafka-exporter-6f7ffb5b8b-dzslg           1/1     Running   0          14s
+event-bus-zookeeper-0                               1/1     Running   0          4m11s
+event-bus-zookeeper-1                               1/1     Running   0          4m11s
+event-bus-zookeeper-2                               1/1     Running   0          4m11s
+amq-streams-cluster-operator-v1.8.0-66df65f-vlvhh   1/1     Running   0          6m7s
 ```

 References:

 * [Kafka Cluster Configuration](https://access.redhat.com/documentation/en-us/red_hat_amq/2021.q3/html-single/using_amq_streams_on_openshift/index#assembly-config-kafka-str)
````

02-target-cluster/03-kafka-users/README.md (+11 −11)

````diff
@@ -35,13 +35,13 @@ sample-user-tls event-bus tls simple True
 To describe a Kafka User:

 ```shell
-oc get kafkauser sample-user-scram -o yaml
+oc get kafkauser admin-user-scram -o yaml
 ```

 Each user will have its own secret with the credentials defined in it:

 ```shell
-❯ oc get secret sample-user-scram -o yaml
+❯ oc get secret admin-user-scram -o yaml
 apiVersion: v1
 data:
   password: ZHYwV1V5eUx6Y09x
@@ -62,33 +62,33 @@ type: Opaque
 To decode the password:

 ```shell
-❯ oc get secret sample-user-scram -o jsonpath='{.data.password}' | base64 -d
-PIPgj8f11S98
+❯ oc get secret admin-user-scram -o jsonpath='{.data.password}' | base64 -d
+N7FSt6poV2GF
 ```

 These users could be tested with the following samples:

 * Sample consumer authenticated with the ```admin-user-scram``` user:

 ```shell
-oc run kafka-consumer -n amq-streams-reg1-workshop -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel7:1.8.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/consumer.properties <<EOF
+oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel7:1.8.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/consumer.properties <<EOF
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=sample-user-scram password=PIPgj8f11S98;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=N7FSt6poV2GF;
 EOF
-bin/kafka-console-consumer.sh --bootstrap-server event-bus-reg1-kafka-bootstrap:9092 --topic apps.samples.greetings --consumer.config=/tmp/consumer.properties --group sample-group
+bin/kafka-console-consumer.sh --bootstrap-server event-bus-kafka-bootstrap:9092 --topic apps.samples.greetings --consumer.config=/tmp/consumer.properties --group sample-group
 "
 ```

-* Sample producer authenticated with the ```sample-user-scram``` user:
+* Sample producer authenticated with the ```admin-user-scram``` user:

 ```shell
-oc run kafka-producer -n amq-streams-reg1-workshop -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel7:1.8.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/producer.properties <<EOF
+oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel7:1.8.0 --rm=true --restart=Never -- /bin/bash -c "cat >/tmp/producer.properties <<EOF
 security.protocol=SASL_PLAINTEXT
 sasl.mechanism=SCRAM-SHA-512
-sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=sample-user-scram password=PIPgj8f11S98;
+sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=admin-user-scram password=N7FSt6poV2GF;
 EOF
-bin/kafka-console-producer.sh --broker-list event-bus-reg1-kafka-bootstrap:9092 --topic apps.samples.greetings --producer.config=/tmp/producer.properties
+bin/kafka-console-producer.sh --broker-list event-bus-kafka-bootstrap:9092 --topic apps.samples.greetings --producer.config=/tmp/producer.properties
 "
 ```
````

02-target-cluster/04-kafka-mirror-maker2/README.md (+2 −1)

````diff
@@ -19,7 +19,8 @@ clean the secret to be created in the target cluster.
 oc get secret migration-user-tls -o yaml > source-secrets/migration-user-tls.yaml
 ```

-Remove the data not needed and clean the secret to be created in the target cluster.
+Remove the data that is not needed, clean the secret so it can be created in the target cluster, and rename the
+source secret to `event-bus-source-cluster-ca-cert`:

 ```shell
 oc apply -f ./source-secrets/
````
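"Cleaning" the exported Secret usually means dropping the cluster-specific metadata (`namespace`, `uid`, `resourceVersion`, `creationTimestamp`) before applying it in the target cluster. A hedged grep-based sketch over a stand-in export — the field list and the sample YAML are assumptions, so adjust them to whatever your `oc get secret -o yaml` actually emits:

```shell
# Stand-in for `oc get secret migration-user-tls -o yaml`.
cat > /tmp/migration-user-tls.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: migration-user-tls
  namespace: amq-streams-migration
  uid: 1234-abcd
  resourceVersion: "98765"
data:
  user.crt: LS0tLS1CRUdJTg==
type: Opaque
EOF

# Drop the fields owned by the source cluster.
grep -v -E '^[[:space:]]*(namespace|uid|resourceVersion|creationTimestamp):' \
  /tmp/migration-user-tls.yaml > /tmp/migration-user-tls.clean.yaml

cat /tmp/migration-user-tls.clean.yaml
```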

02-target-cluster/04-kafka-mirror-maker2/event-bus-mirror-maker2.yml (+1 −2)

````diff
@@ -11,8 +11,7 @@ spec:
   clusters:
     ############### SOURCE CLUSTER (Active) ################
     - alias: "my-source-cluster"
-      #bootstrapServers: event-bus-reg1-kafka-bootstrap.amq-streams-reg1-workshop.svc:9093
-      bootstrapServers: event-bus-kafka-bootstrap-amq-streams-migration.apps.cluster-cfc1.cfc1.example.opentlc.com:443
+      bootstrapServers: event-bus-kafka-bootstrap-amq-streams-migration.apps.<OCP_HOST>:443
       tls:
         trustedCertificates:
           - secretName: event-bus-source-cluster-ca-cert
````
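The hunk above only touches the `clusters` section of the `KafkaMirrorMaker2` resource. For orientation, the `mirrors` section of such a resource typically looks like the following sketch — this is not the repo's actual spec; the target alias, the patterns, and the replication factors are assumptions:

```yaml
mirrors:
  - sourceCluster: "my-source-cluster"
    targetCluster: "my-target-cluster"   # assumed alias of the target entry in .spec.clusters
    sourceConnector:
      config:
        replication.factor: 3            # replication of the mirrored topics
    checkpointConnector:
      config:
        checkpoints.topic.replication.factor: 3
    topicsPattern: ".*"                  # mirror every topic
    groupsPattern: ".*"                  # mirror every consumer-group offset
```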

02-target-cluster/05-sample-apps/README.md (+2)

````diff
@@ -39,3 +39,5 @@ oc apply -f ./consumer-apps/

 Now you can check that the new consumers in the target cluster are consuming the data starting from the latest
 offset processed in the source cluster.
+
+Now, the last step is to move your producer applications to this new Kafka cluster.
````

README.md (+12 −8)

````diff
@@ -3,6 +3,10 @@
 This repo includes a set of resources to demonstrate how Apache MirrorMaker2 could
 help in a migration from a source Apache Kafka cluster to another Apache Kafka cluster :rocket:.

+**NOTE**: This branch is focused on the Red Hat AMQ Streams Operators.
+
+For more context, please review our [blog post](https://blog.jromanmartin.io/2021/11/19/migrating-kafka-with-mirror-maker2.html).
+
 The scenario covered in this repo is to have an Active-Passive deployment of Apache Kafka clusters
 deployed in different OpenShift clusters.

@@ -13,14 +17,14 @@ This repo was tested :sparkles: in:
   * Red Hat AMQ Streams Operators 1.6.3 (Apache Kafka 2.5)

 * Target Environment:
-  * Red Hat OpenShift Container Platform 4.8.5
+  * Red Hat OpenShift Container Platform 4.9.0
   * Red Hat AMQ Streams Operators 1.8.0 (Apache Kafka 2.8)

 **NOTE**: To follow this demo you should have two different OpenShift clusters available
 following the versions described above.

 :rotating_light: **WARN**: This repo is not defined to be a production-ready implementation but it could be used
-as a baseline to design and develop your specific use case. Use
+as a baseline to design and develop your specific use case. Use it carefully and at your own risk.

 ## Migration Process Overview

@@ -57,8 +61,7 @@ will be the new active one and we could stop and remove the source platform.
 Apache Kafka clusters (multi-cloud, different data-flows, back-ups, ...), all of them out of the scope
 of this repo.

-Both Apache Kafka deployments are managed and operated by the Red Hat AMQ Streams operators. Of course,
-this scenario could be covered also by the Strimzi Operators (upstream of Red Hat AMQ Streams).
+Both Apache Kafka deployments are managed and operated by the Red Hat AMQ Streams operators.

 # Deploying Source Environment

@@ -69,11 +72,11 @@ As a normal user (non ```cluster-admin```) in your source OpenShift cluster, cre
 ❯ oc new-project amq-streams-migration
 ```

-### Deploying Strimzi Operators
+### Deploying Red Hat AMQ Streams Operators

 Follow [the instructions](./01-source-cluster/01-amq-streams-operator/README.md)

-### Deploying Apache Kafka
+### Deploying Red Hat AMQ Streams

 Follow [the instructions](./01-source-cluster/02-kafka/README.md)

@@ -101,11 +104,11 @@ As a normal user (non ```cluster-admin```) in your OpenShift cluster, create the
 ❯ oc new-project amq-streams-migration
 ```

-### Deploying Strimzi Operators
+### Deploying Red Hat AMQ Streams Operators

 Follow [the instructions](./02-target-cluster/01-amq-streams-operator/README.md)

-### Deploying Apache Kafka
+### Deploying Red Hat AMQ Streams

 Follow [the instructions](./02-target-cluster/02-kafka/README.md)

@@ -142,6 +145,7 @@ my-source-cluster.checkpoints.internal event-bus 1 3

 ## References

+* [Migrating Kafka clusters with MirrorMaker2 and Strimzi](https://blog.jromanmartin.io/2021/11/19/migrating-kafka-with-mirror-maker2.html)
 * [Red Hat AMQ Product Documentation](https://access.redhat.com/documentation/en-us/red_hat_amq/2021.q3/)
 * [AMQ Streams on OpenShift Overview](https://access.redhat.com/documentation/en-us/red_hat_amq/2021.q3/html-single/amq_streams_on_openshift_overview/index)
 * [Using AMQ Streams on OCP](https://access.redhat.com/documentation/en-us/red_hat_amq/2021.q3/html-single/using_amq_streams_on_openshift/index)
````
