Merge pull request #763 from LavredisG/main
adds karmadactl flags for proper display
karmada-bot authored Dec 16, 2024
2 parents c824534 + c788a71 commit ce1468d
Showing 2 changed files with 16 additions and 16 deletions.
18 changes: 9 additions & 9 deletions docs/tutorials/autoscaling-with-custom-metrics.md
@@ -116,7 +116,7 @@ If you use the `hack/local-up-karmada.sh` script to deploy Karmada, `karmada-met

## Deploy workload in `member1` and `member2` cluster

-You need to deploy a sample deployment(1 replica) and service in `member1` and `member2`.
+You need to deploy a sample deployment (1 replica) and service in `member1` and `member2`.

```yaml
apiVersion: apps/v1
@@ -190,12 +190,12 @@ spec:
weight: 1
```
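For context, the hunk above ends the tutorial's PropagationPolicy. A complete weighted-distribution policy has roughly this shape (a sketch only — the resource names and weights here are assumptions, not taken from the diff):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: app-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sample-app
    - apiVersion: v1
      kind: Service
      name: sample-app
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
```

With equal static weights, replicas are divided evenly across `member1` and `member2`; the single-replica deployment therefore lands in just one of them, as the `karmadactl get pods` output below shows.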

-After deploying, you can check the distribution of the pods and service:
+After deploying, you can check the distribution of the Pods and Service:
```sh
-$ karmadactl get pods
+$ karmadactl get pods --operation-scope members
NAME CLUSTER READY STATUS RESTARTS AGE
sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 111s
-$ karmadactl get svc
+$ karmadactl get svc --operation-scope members
NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
sample-app member1 ClusterIP 10.11.29.250 <none> 80/TCP 3m53s Y
```
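The flag added throughout this commit controls which clusters `karmadactl get` queries. Assuming the documented scope values (`karmada`, `members`, `all`), usage looks like:

```sh
# Query the Karmada control plane only
karmadactl get pods --operation-scope karmada

# Query member clusters only (what this tutorial needs, since the
# workload runs in member1/member2, not in the control plane)
karmadactl get pods --operation-scope members

# Query both the control plane and all member clusters
karmadactl get pods --operation-scope all
```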
@@ -409,7 +409,7 @@ As mentioned before, you need a multi-cluster service to route the requests to t

After deploying, you can check the multi-cluster service:
```sh
-$ karmadactl get svc
+$ karmadactl get svc --operation-scope members
NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
derived-sample-app member1 ClusterIP 10.11.59.213 <none> 80/TCP 9h Y
```
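The `derived-sample-app` Service above comes from the multi-cluster service machinery. A minimal sketch, assuming the MCS-API `ServiceExport`/`ServiceImport` flow that the tutorial relies on (names and ports are illustrative):

```yaml
# Export the Service from the cluster that runs the backend...
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: sample-app
---
# ...and import it where the traffic originates; the import is
# materialized as a Service named "derived-<name>".
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: sample-app
spec:
  type: ClusterSetIP
  ports:
    - port: 80
      protocol: TCP
```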
@@ -428,14 +428,14 @@ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

* Check the pod distribution firstly.
```sh
-$ karmadactl get pods
+$ karmadactl get pods --operation-scope members
NAME CLUSTER READY STATUS RESTARTS AGE
sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 111s
```

* Check multi-cluster service ip.
```sh
-$ karmadactl get svc
+$ karmadactl get svc --operation-scope members
NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
derived-sample-app member1 ClusterIP 10.11.59.213 <none> 80/TCP 20m Y
```
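The elided step between these checks drives load at that Service IP with `hey` (copied onto the node earlier in the tutorial). A sketch, with the duration and concurrency values assumed rather than quoted from the docs:

```sh
# Generate one minute of HTTP load against the multi-cluster
# service IP from inside the member1 node
docker exec member1-control-plane \
  hey -z 1m -c 10 http://10.11.59.213/
```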
@@ -447,7 +447,7 @@ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

* Wait 15s, the replicas will be scaled up, then you can check the pod distribution again.
```sh
-$ karmadactl get po -l app=sample-app
+$ karmadactl get pods --operation-scope members -l app=sample-app
NAME CLUSTER READY STATUS RESTARTS AGE
sample-app-9b7d8c9f5-454vz member2 1/1 Running 0 84s
sample-app-9b7d8c9f5-7fjhn member2 1/1 Running 0 69s
@@ -465,7 +465,7 @@ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

After 1 minute, the load testing tool will be stopped, then you can see the workload is scaled down across clusters.
```sh
-$ karmadactl get pods -l app=sample-app
+$ karmadactl get pods --operation-scope members -l app=sample-app
NAME CLUSTER READY STATUS RESTARTS AGE
sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 91m
```
14 changes: 7 additions & 7 deletions docs/tutorials/autoscaling-with-resource-metrics.md
@@ -169,10 +169,10 @@ spec:

After deploying, you can check the propagation of the Pods and Service:
```sh
-$ karmadactl get pods
+$ karmadactl get pods --operation-scope members
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 9h
-$ karmadactl get svc
+$ karmadactl get svc --operation-scope members
NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
nginx-service member1 ClusterIP 10.11.216.215 <none> 80/TCP 9h Y
nginx-service member2 ClusterIP 10.13.46.61 <none> 80/TCP 9h Y
@@ -272,7 +272,7 @@ As mentioned before, we need a multi-cluster Service to route the requests to th

After deploying, you can check the multi-cluster Service:
```sh
-$ karmadactl get svc
+$ karmadactl get svc --operation-scope members
NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
derived-nginx-service member1 ClusterIP 10.11.59.213 <none> 80/TCP 9h Y
```
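For reference, the autoscaler behind this tutorial is a FederatedHPA, which follows the Kubernetes HPA v2 shape. A minimal CPU-utilization sketch (the replica bounds, target utilization, and stabilization window here are assumed, not quoted from the docs):

```yaml
apiVersion: autoscaling.karmada.io/v1alpha1
kind: FederatedHPA
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 10
  behavior:
    scaleDown:
      # Short window so the scale-down at the end of the
      # tutorial is visible within a minute or two
      stabilizationWindowSeconds: 10
```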
@@ -291,14 +291,14 @@ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

* Check the Pod propagation firstly.
```sh
-$ karmadactl get pods
+$ karmadactl get pods --operation-scope members
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 61m
```

* Check multi-cluster Service IP.
```sh
-$ karmadactl get svc
+$ karmadactl get svc --operation-scope members
NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
derived-nginx-service member1 ClusterIP 10.11.59.213 <none> 80/TCP 20m Y
```
@@ -310,7 +310,7 @@ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

* Wait 15s, the replicas will be scaled up, then you can check the Pod propagation again.
```sh
-$ karmadactl get pods -l app=nginx
+$ karmadactl get pods --operation-scope members -l app=nginx
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-777bc7b6d7-c2cfv member1 1/1 Running 0 22s
nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 62m
@@ -329,7 +329,7 @@ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

After 1 minute, the load testing tool will be stopped, then you can see the workload is scaled down across clusters.
```sh
-$ karmadactl get pods -l app=nginx
+$ karmadactl get pods --operation-scope members -l app=nginx
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 64m
```
