
Commit 192371e

New Version of Kubernetes WS
Parent: 567a588

32 files changed: +564 −573 lines
2 files renamed without changes.

rolling-update/deployment.yaml → 02-rolling-update/deployment.yaml.template (+1 −1)

```diff
@@ -26,7 +26,7 @@ spec:
     spec:
       containers:
       - name: hello-node
-        image: gcr.io/<PROJECT_ID>/imageflipper:1.0
+        image: gcr.io/<PROJECT_ID>/imageflipper-app:<VERSION>
         imagePullPolicy: Always
         ports:
         - containerPort: 8080
```
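The rename from `deployment.yaml` to `deployment.yaml.template` implies a render step before `kubectl apply`. A minimal local sketch of the placeholder substitution, using a made-up one-line template, project ID, and version (none of these are the repo's real values):

```shell
# Write a throwaway template containing the same placeholders as the manifest.
cat > /tmp/deployment.yaml.template <<'EOF'
image: gcr.io/<PROJECT_ID>/imageflipper-app:<VERSION>
EOF

PROJECT_ID=my-sample-project   # assumption: your GCP project id
VERSION=abc1234                # assumption: output of `git describe --always`

# Render the template; ~ is used as the sed delimiter so / in paths is safe.
sed -e "s~<PROJECT_ID>~$PROJECT_ID~g" \
    -e "s~<VERSION>~$VERSION~g" \
    /tmp/deployment.yaml.template > /tmp/deployment.yaml

cat /tmp/deployment.yaml
# → image: gcr.io/my-sample-project/imageflipper-app:abc1234
```

The same pattern is what the tutorial's one-step `VERSION=...; sed -e ... > deployment.yaml` commands do against the real template files.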

blue-green/deployment.yaml → 03-blue-green/deployment.yaml.template (+2 −1)

```diff
@@ -11,6 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
@@ -26,7 +27,7 @@ spec:
     spec:
       containers:
       - name: hello-node
-        image: gcr.io/<PROJECT_ID>/imageflipper:2.0
+        image: gcr.io/<PROJECT_ID>/imageflipper-app:<VERSION>
         imagePullPolicy: Always
         ports:
         - containerPort: 8080
```

blue-green/service.yaml → 03-blue-green/service.yaml (+1 −1)

```diff
@@ -14,7 +14,7 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: hello-node
+  name: hello-node-test
 spec:
   ports:
   - port: 80
```

CONTRIBUTING.md (new file, +23)

```diff
@@ -0,0 +1,23 @@
+# How to Contribute
+
+We'd love to accept your patches and contributions to this project. There are
+just a few small guidelines you need to follow.
+
+## Contributor License Agreement
+
+Contributions to this project must be accompanied by a Contributor License
+Agreement. You (or your employer) retain the copyright to your contribution;
+this simply gives us permission to use and redistribute your contributions as
+part of the project. Head over to <https://cla.developers.google.com/> to see
+your current agreements on file or to sign a new one.
+
+You generally only need to submit a CLA once, so if you've already submitted one
+(even if it was for a different project), you probably don't need to do it
+again.
+
+## Code reviews
+
+All submissions, including submissions by project members, require review. We
+use GitHub pull requests for this purpose. Consult
+[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
+information on using pull requests.
```

CONTRIBUTORS (+2 −1)

```diff
@@ -1 +1,2 @@
-Sandeep Dinesh (@thesandlord)
+Sandeep Dinesh (@thesandlord)
+Robert Kubis (@hostirosti)
```

README.md (+123 −47)

````diff
@@ -8,7 +8,7 @@ This tutorial launches a Kubernetes cluster on [Google Container Engine](https:/
 
 If you are running this tutorial at home, you will need a Google Cloud Platform account. If you don't have one, sign up for the [free trial](https://cloud.google.com/free).
 
-To complete this tutorial, you will need to following tools installed:
+To complete this tutorial, you will need the following tools installed:
 
 - [Kubernetes CLI](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#client-binaries)
 - [gcloud SDK](https://cloud.google.com/sdk)
````
````diff
@@ -21,26 +21,31 @@ You can also use [Google Cloud Shell](https://cloud.google.com/shell), a free VM
 
 1. Create a cluster:
 
+One great feature of Cloud Shell is that you get a machine assigned from the geographically closest pool.
+We can use that to create our Kubernetes cluster in the same zone by reading the Cloud Shell instance zone
+from the Google Compute Engine Metadata:
 ```
 ZONE=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" \
   -H "Metadata-Flavor: Google" | sed 's:.*/::')
 ```
 
-`gcloud container clusters create my-cluster --zone=$ZONE`
+`gcloud container clusters create playground --nodes 3 --zone=$ZONE`
 
 If you get an error, make sure you enable the Container Engine API [here](https://console.cloud.google.com/apis/api/container.googleapis.com/overview).
 
-2. Run the hello world [deployment](./hello-node/deployment.yaml):
+*Tip:* To enable kubectl autocomplete run `source <(kubectl completion bash)`
 
-`kubectl apply -f ./hello-node/deployment.yaml`
+2. Run the hello world [deployment](./01-hello-node/deployment.yaml):
 
-Expose the container with a [service](./hello-node/service.yaml):
+`kubectl apply -f ./01-hello-node/deployment.yaml`
 
-`kubectl apply -f ./hello-node/service.yaml`
+Expose the container with a [service](./01-hello-node/service.yaml):
 
-At this stage, you have created a Deployment with one Pod, and a Service with an extrnal load balancer that will send traffic to that pod.
+`kubectl apply -f ./01-hello-node/service.yaml`
 
-You can see the extrnal IP address for the service with this command. It might take a few minutes to get the extrnal IP address:
+At this stage, you have created a Deployment with one Pod, and a Service with an external load balancer that will send traffic to that pod.
+
+You can see the external IP address for the service with the following command. It might take a few minutes to get the external IP address:
 
 `kubectl get svc`
 
````
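The metadata server only exists on GCE and Cloud Shell, but the `sed 's:.*/::'` tail extraction used above can be sanity-checked anywhere. A sketch with a sample response path (the project number and zone are illustrative):

```shell
# The metadata endpoint returns a full path like projects/NNN/zones/us-central1-b;
# sed 's:.*/::' deletes everything up to and including the last slash,
# leaving only the bare zone name.
SAMPLE="projects/123456789/zones/us-central1-b"   # assumption: typical response shape
ZONE=$(echo "$SAMPLE" | sed 's:.*/::')
echo "$ZONE"
# → us-central1-b
```

Using `:` as the sed delimiter avoids having to escape the slashes in the path.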
````diff
@@ -47,10 +52,15 @@
-## Step 2: Scale up deployment
+*Tip:* You can use watch to monitor the status of your services: `watch -n1 kubectl get svc`
+
+Once you see the external IP address you can navigate to the Hello World app in your browser.
+
+## Step 2: Scale the Hello World App
 
-One pod is not enough. Let's get 5 of them!
+You can easily scale your applications out and in.
+One pod of our app is not enough. Let's get 5 of them!
 
 `kubectl scale deployment hello-node-green --replicas=5`
 
-You can see the all pods with this command:
+You can see all the pods with the following command:
 
 `kubectl get pods`
 
````
````diff
@@ -57,18 +67,32 @@
-## Step 3: Hello world is boring, let's update the app
+## Step 3: Hello World is boring, let's update the app
+
+The new app allows you to upload a picture, flips it around, and displays it.
+
+You can see the source code [here](./imageflipper-app/index.js).
+
+Let's package the new app into a container so we can run it on Kubernetes.
+
+The specification to build our container image can be found in the [Dockerfile](./imageflipper-app/Dockerfile).
+
+There are 2 options to build our container: locally, or with the [Google Container Builder](https://cloud.google.com/container-builder). Choose one of the two.
+
+### Option 1 - Locally
+
+To make the build a bit easier we use a [Makefile](./imageflipper-app/Makefile). It will build the container image, tag it with the current version (obtained with `git describe --always`) and optionally push it to the [Google Container Registry](https://gcr.io).
 
-The new app will take a picture, flip it around, and return it.
+To build the container run:
 
-You can see the source code [here](./rolling-update/index.js).
+`cd imageflipper-app && make container && cd ..`
 
-The Dockerfile for this container can be found here.
+To push the container image to the Container Registry run:
 
-Build the Docker Container using [Google Container Builder](https://cloud.google.com/container-builder):
+`cd imageflipper-app && make push-gcr && cd ..`
 
-`gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/imageflipper:1.0 ./rolling-update/`
+### Option 2 - Google Container Builder
 
-This will automatically build and push this Docker image to [Google Container Registry](https://gcr.io).
+[Google Container Builder](https://cloud.google.com/container-builder) will build your containers for you remotely on GCP. To submit a build run the following command:
 
-Now, we are going to update the deployment created in the first step. You can see the new YAML file [here](/rolling-update/deployment.yaml).
+``VERSION=`git describe --always`; gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/imageflipper-app:$VERSION ./imageflipper-app/``
 
-Replace the <PROJECT_ID> placeholder with your Project ID. Use this command to do it automatically:
+This will automatically build and push the Docker image to your project's [Google Container Registry](https://gcr.io).
 
````
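Both build options above tag the image with the current `git describe --always` version. A local sketch of what that command yields, in a throwaway repository (the repo, identity, and tag name are made up for the demo):

```shell
# Demonstrate the version string `git describe --always` produces.
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
git -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "first commit"

# With no tags in the repo, --always falls back to the abbreviated commit hash:
git describe --always

# After an annotated tag on HEAD, describe reports the tag itself:
git -c user.email=demo@example.com -c user.name=Demo \
    tag -a v1.0 -m "Version 1.0"
git describe --always
# → v1.0
```

This is why the tutorial later commits and tags before rebuilding: a new commit or tag gives the image a new, unique version.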
`sed -i "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" ./rolling-update/deployment.yaml`
99+
Now, we are going to update the deployment created in [Step 1](#step-1:-create-cluster-and-deploy-hello-world). You can see the new YAML file [here](/02-rolling-update/deployment.yaml.template).
100+
101+
Make a copy of the template file [deployment.yaml.template](./02-rolling-update/deployment.yaml.template) and save it in the same folder as `deployment.yaml`. Replace the <PROJECT_ID> and <VERSION> placeholders with your Project ID and the current git version(`git describe --always`).
102+
You can use the following command to do all this in one step:
103+
104+
``VERSION=`git describe --always`; sed -e "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" -e "s~<VERSION>~$VERSION~g" ./02-rolling-update/deployment.yaml.template > ./02-rolling-update/deployment.yaml``
76105

77106
Now use the apply command to update the deployment. The only change to this file from the first deployment.yaml is the new container image.
78107

79-
`kubectl apply -f ./rolling-update/deployment.yaml`
108+
`kubectl apply -f ./02-rolling-update/deployment.yaml`
80109

81110
This will replace all the old containers with the new ones. Kubernetes will perform a rolling update; it will delete one old container at a time and replace it with a new one.
82111

83112
You can watch the containers being updated with this command:
84113

85-
`watch kubectl get pods`
114+
`watch -n1 kubectl get pods`
86115

87116
Once it is done, press `ctrl + c` to quit.
88117

89-
If you visit the website now, you can see the updated website!
118+
If you refresh the page pointing to the external ip of your service now, you'll see the updated website!
90119

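Besides `watch`ing the pods, a rolling update can be followed with `kubectl rollout`, which blocks until the rollout finishes. A dry-run sketch (it assumes the deployment is named `hello-node-green`, as elsewhere in the tutorial; `KUBECTL="echo kubectl"` just prints the commands instead of contacting a cluster):

```shell
# Dry-run: print the commands you would run against a live cluster.
# Set KUBECTL=kubectl to execute them for real.
KUBECTL="echo kubectl"

# Block until the rollout completes (or report why it is stuck):
$KUBECTL rollout status deployment/hello-node-green

# Inspect previous revisions of the deployment:
$KUBECTL rollout history deployment/hello-node-green

# A bad rollout can be reverted to the previous revision:
$KUBECTL rollout undo deployment/hello-node-green
```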
````diff
@@ -91,22 +120,22 @@
-## Step 4: Backend Service
+## Step 4: Splitting the app into Microservices
 
-The web frontend is created, but let's split the monolith into microservices. The backend service will do the image manipulation and will expose a REST API that the frontend service will communicate with.
+The imageflipper app is created and running, but what if you want to innovate on the image flipping independent of the frontend? We need to separate these two parts. Let's create a backend service that does the image manipulation and exposes a REST API that the frontend app can communicate with.
 
-You can see the source code for the service [here](./second-service/index.js).
+You can see the source code for the backend service [here](./imageflipper-service/index.js).
 
-Build the Docker Container using [Google Container Builder](https://cloud.google.com/container-builder):
+You again have the 2 options from [Step 3](#step-3-hello-world-is-boring-lets-update-the-app) to build your container.
 
-`gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/annotate:1.0 ./second-service/`
+`cd imageflipper-service && make push-gcr && cd ..`
 
-The service.yaml file for the backend service is very similar to the frontend service, but it does not specify `type: LoadBalancer`. This will prevent Kubernetes from spinning up a Cloud Load Balancer, and instead the service will only be accessable from inside the cluster.
+Run the backend [deployment](./imageflipper-service/deployment.yaml):
 
-Run the backend [deployment](./second-service/deployment.yaml):
+``VERSION=`git describe --always`; sed -e "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" -e "s~<VERSION>~$VERSION~g" ./imageflipper-service/deployment.yaml.template > ./imageflipper-service/deployment.yaml``
 
-`sed -i "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" ./second-service/deployment.yaml`
+`kubectl apply -f ./imageflipper-service/deployment.yaml`
 
-`kubectl apply -f ./second-service/deployment.yaml`
+The service.yaml file for the backend service is very similar to the frontend service, but it does not specify `type: LoadBalancer`. This makes it a cluster-local service that is only accessible from inside the cluster.
 
-Expose the container with a [service](./second-service/service.yaml):
+Make the backend pods discoverable and addressable with a [service](./imageflipper-service/service.yaml):
 
-`kubectl apply -f ./second-service/service.yaml`
+`kubectl apply -f ./imageflipper-service/service.yaml`
 
````
````diff
@@ -113,31 +142,77 @@
 ## Step 5: Update Frontend Service to use the Backend with a Blue-Green deployment
 
-Now the backend service is running, you need to update the frontend to use the new backend.
+Now that the backend service is running, we need to update the frontend to use the new backend.
 
-The new code is [here](./blue-green/index.js).
+Make the changes in [imageflipper-app/index.js](./imageflipper-app/index.js) and update the `/api/photo` endpoint to use the new backend. Don't cheat, but if you need "inspiration" you can find the solution [here](./imageflipper-app/index.js.v2).
 
-Instead of doing a rolling update like we did before, we are going to use a Blue-Green strategy.
+To create a new git version we need to commit our changes. Run the following commands to commit the changes and tag the new version.
 
-This means we will spin up a new deployment of the frontend, wait until all containers are created, then configure the service to send traffic to the new deployment, and finally spin down the old deployment. This allows us to make sure that users don't get different versions of the app, smoke test the new deployment at scale, and a few other benefits. You can read more about [Blue-Green Deployments vs Rolling Updates here](http://stackoverflow.com/questions/23746038/canary-release-strategy-vs-blue-green).
+First we need to set the git user email and name:
+```
+git config --global user.email "[email protected]"
+git config --global user.name "Rockstar Developer"
+```
 
-Build the Docker Container using [Google Container Builder](https://cloud.google.com/container-builder):
+`git add -u && git commit -m "new awesome backend" && git tag -a v2.0 -m "Version 2.0"`
 
-`gcloud container builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/imageflipper:2.0 ./blue-green/`
+Now we can build our new frontend container image:
+
+`cd imageflipper-app && make push-gcr && cd ..`
+
+Instead of doing a rolling update like we did before, we are going to use a Blue-Green strategy this time.
+
+This means we will spin up a new deployment of the frontend, wait until all containers are created, then configure the service to send traffic to the new deployment, and finally spin down the old deployment. This allows us to make sure that users don't get different versions of the app, smoke test the new deployment at scale, and a few other benefits. You can read more about [Blue-Green Deployments vs Rolling Updates here](http://stackoverflow.com/questions/23746038/canary-release-strategy-vs-blue-green).
 
 Spin up the new deployment with the following command:
 
-`sed -i "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" ./blue-green/deployment.yaml`
+``VERSION=`git describe --always`; sed -e "s~<PROJECT_ID>~$DEVSHELL_PROJECT_ID~g" -e "s~<VERSION>~$VERSION~g" ./03-blue-green/deployment.yaml.template > ./03-blue-green/deployment.yaml``
 
-`kubectl apply -f ./blue-green/deployment.yaml`
+`kubectl apply -f ./03-blue-green/deployment.yaml`
 
-You can see all the containers running with this command:
+Check if the new version is running with the following command:
 
 `kubectl get pods`
 
-Now, we need to edit the service to point to this new deployment. The new service definition is [here](./blue-green/service.yaml). Notice the only thing we changed is the selector.
+To test the service we need to make the new version accessible through a Kubernetes service.
 
-`kubectl apply -f ./blue-green/service.yaml`
+Create the `hello-node-test` service with the following command:
 
-At this point, you can visit the website and the new code will be live. Once you are happy with the results, you can turn down the green deployment.
+`kubectl apply -f ./03-blue-green/service.yaml`
+
+Wait for the external IP to be assigned:
+
+`watch -n1 kubectl get svc`
+
+Once the external IP is assigned for the `hello-node-test` service, open it in a new browser tab and verify that everything works. Once you're happy, we can switch over the existing (production) `hello-node` service to the new frontend.
+
+To do this, we use the `kubectl edit` command:
+
+`kubectl edit svc hello-node`
+
+Look for the selector and change it to `hello-node-blue`:
+
+Before:
+
+```
+...
+selector:
+  name: hello-node-green
+...
+```
+
+After:
+
+```
+...
+selector:
+  name: hello-node-blue
+...
+```
+
+At this point, you can go back to the original browser tab where you had the first version open and verify that the new code is live. Once you are happy with the results, you can turn down the green deployment.
 
 `kubectl scale deployment hello-node-green --replicas=0`
+
+or
+
+`kubectl delete deployment hello-node-green`
````
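Flipping the selector in `kubectl edit` works, but the same blue-green switch can be scripted with `kubectl patch`. A sketch, assuming the service is named `hello-node` and its selector key is `name` (as in the Before/After snippets above); the kubectl call is commented out so the snippet runs without a cluster:

```shell
# Build the JSON merge patch that points the hello-node service's selector
# at the blue deployment instead of the green one.
TARGET=hello-node-blue
PATCH=$(printf '{"spec":{"selector":{"name":"%s"}}}' "$TARGET")
echo "$PATCH"
# → {"spec":{"selector":{"name":"hello-node-blue"}}}

# Against a live cluster you would apply it like this (assumption: service
# name hello-node, selector key name):
# kubectl patch svc hello-node -p "$PATCH"
```

Because the patch is atomic, traffic cuts over to the blue pods in one step, which is exactly the property the blue-green strategy relies on.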

blue-green/index.html (deleted, −26 lines)
