5. Bringing it all Together

We are now going to apply everything you have learned so far and deploy two services that can communicate with each other.

Think about your Deployment

We are going to have two echoserver PODs, identified as service_id: local-echo-server-from-yml and service_id: remote-echo-server-from-yml. The names will make it easy to tell the two apart.
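As a sketch of how such an identifier can be attached, a Deployment's POD template can carry the service_id as a label. The Deployment name, image, and container port below are assumptions; only the label values come from above.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: yml-echoserver-deployment-v2.1         # assumed pairing with the local instance
    spec:
      replicas: 1
      selector:
        matchLabels:
          service_id: local-echo-server-from-yml
      template:
        metadata:
          labels:
            service_id: local-echo-server-from-yml # the identifier we will look for in the dashboard
        spec:
          containers:
            - name: echoserver
              image: k8s.gcr.io/echoserver:1.4     # assumed image; any echoserver build works
              ports:
                - containerPort: 8080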

Think about how you are going to expose your Deployments as Services

We want both PODs to communicate with each other, but with constraints:

  • the local instance is exposed to the outside world, and it needs to be able to communicate with the remote instance

  • the remote instance is exposed to the cluster but not to the outside world

To achieve all of this, we will architect the deployment as described below.

Remember: exposing a Service as NodePort makes it visible to all other PODs in the cluster, but not to the outside world.
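A minimal sketch of what such a NodePort Service could look like, assuming its selector pairs it with the local echoserver label from above (the port numbers match the example that follows):

    apiVersion: v1
    kind: Service
    metadata:
      name: yml-local-echoserver-ep       # the Service selected in the dashboard below
    spec:
      type: NodePort
      selector:
        service_id: local-echo-server-from-yml
      ports:
        - port: 8080          # Cluster IP port, used for POD-to-POD traffic
          targetPort: 8080    # the port the container listens on
          nodePort: 30080     # the port on the Node itself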

Understanding the mappings

Go to the minikube dashboard

Write down the Cluster IP addresses of both Services as shown in the dashboard (yours will be different).
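If you prefer the command line, the same information is available without the dashboard: from a host terminal, type kubectl get services and read the CLUSTER-IP and PORT(S) columns.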

Select the yml-local-echoserver-ep

In the details page, select the POD

In the PODs page, select the exec option (upper right) to open a shell into the POD

From here, type curl <Cluster IP of the NodePort Service>:8080, so from the example above it will be

curl 10.106.25.240:8080

You should see the echoserver's response printed in the shell.

Because you are communicating from within the cluster, you must use the Service's port/targetPort numbers, not the nodePort number.

If you were communicating from outside of the cluster, you would use the minikube node's IP address and the nodePort number, which in this example is 30080. Try it: from the host terminal, type minikube ip to get the minikube node address. On my machine this is 192.168.49.2, so in the shell I would type curl 192.168.49.2:30080. It gives the same result as typing curl 10.106.25.240:8080 did from inside the POD. Try it again from another container not related to the echoserver.

You have successfully communicated with another POD that has made itself visible to all other PODs because it is using a NodePort Service.

Go back to the Service page and select the yml-remote-echoserver-ep

In the details page select the POD

In the PODs page, select the exec option (upper right) to open a shell into the POD

From here, type curl <Cluster IP of the LoadBalancer Service>:38090, so from the example above it will be

curl 10.111.93.90:38090

You should see the same echoserver response printed in the shell.
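Again as a sketch, the LoadBalancer Service could look like this, assuming its selector pairs it with the remote echoserver label (the port matches the example above; the targetPort is an assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: yml-remote-echoserver-ep
    spec:
      type: LoadBalancer
      selector:
        service_id: remote-echo-server-from-yml
      ports:
        - port: 38090        # exposed on the Cluster IP and, via the load balancer, on the host
          targetPort: 8080   # assumed container port, as with the other echoserver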

So what’s actually happening here?

Scenario 1

A NodePort Service exposes a POD to other PODs within the Node cluster (the Node cluster here being minikube). Other PODs can communicate with it via one of two addresses: the Node's address with the Service's ports.nodePort, or the Service's Cluster IP with ports.port; in the example above these are 192.168.49.2:30080 and 10.106.25.240:8080 respectively. It works as follows:
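In sketch form, with the example addresses:

    other POD -> 192.168.49.2:30080   (Node address : ports.nodePort) -> Service -> POD:8080
    other POD -> 10.106.25.240:8080   (Cluster IP : ports.port)       -> Service -> POD:8080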

To test the Cluster IP route

  1. Open a shell into the hello-minikube pod (hello-minikube has nothing to do with this current deployment)

  2. curl 10.106.25.240:8080 - you should see the webpage being printed on the screen

  3. From your host machine, scale down the local echoserver: kubectl scale deployment/yml-echoserver-deployment-v2.2 --replicas=0

  4. Go back to the hello-minikube pod's shell

  5. curl 10.106.25.240:8080 - you should get a connection refused

  6. From your host machine, scale the local echoserver back to its original value: kubectl scale deployment/yml-echoserver-deployment-v2.2 --replicas=1

To test the Node address route

  1. Open a shell into the hello-minikube pod (hello-minikube has nothing to do with this current deployment)

  2. curl 192.168.49.2:30080 - you should see the webpage being printed on the screen

  3. From your host machine, scale down the local echoserver: kubectl scale deployment/yml-echoserver-deployment-v2.2 --replicas=0

  4. Go back to the hello-minikube pod's shell

  5. curl 192.168.49.2:30080 - you should get a connection refused

  6. From your host machine, scale the local echoserver back to its original value: kubectl scale deployment/yml-echoserver-deployment-v2.2 --replicas=1

Scenario 2

A LoadBalancer Service exposes a POD to the outside world. K8s ensures that the exposed address is mapped to the host machine's address, assigning the desired port on the host machine (on minikube, this mapping typically requires minikube tunnel to be running). The Service can be reached via one of two addresses: localhost on the host machine with the Service's ports.port, or, from other PODs, the Service's Cluster IP with the same ports.port; in the example above these are 127.0.0.1:38090 and 10.111.93.90:38090 respectively. It works as follows:
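In sketch form, with the example addresses:

    host machine -> 127.0.0.1:38090     (localhost : ports.port)   -> Service -> POD
    other POD    -> 10.111.93.90:38090  (Cluster IP : ports.port)  -> Service -> POD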

To test the Cluster IP route (from another POD)

  1. Open a shell into the hello-minikube pod (hello-minikube has nothing to do with this current deployment)

  2. curl 10.111.93.90:38090 - you should see the webpage being printed on the screen

  3. From your host machine, scale down the remote echoserver: kubectl scale deployment/yml-echoserver-deployment-v2.1 --replicas=0

  4. Go back to the hello-minikube pod's shell

  5. curl 10.111.93.90:38090 - you should get a connection refused

  6. From your host machine, scale the remote echoserver back to its original value: kubectl scale deployment/yml-echoserver-deployment-v2.1 --replicas=1

To test the localhost route (from the host machine)

  1. Open a shell on your host machine

  2. curl 127.0.0.1:38090 - you should see the webpage being printed on the screen

  3. From your host machine, scale down the remote echoserver: kubectl scale deployment/yml-echoserver-deployment-v2.1 --replicas=0

  4. Go back to the shell on your host machine

  5. curl 127.0.0.1:38090 - you should get a connection refused

  6. From your host machine, scale the remote echoserver back to its original value: kubectl scale deployment/yml-echoserver-deployment-v2.1 --replicas=1

Your Task

Set up pingme to communicate in pairs as you did with docker-compose. You are free to choose your approach to the content of your YAML file(s), i.e. a single YAML file with everything in it, or several YAML files each dealing with one aspect of the overall application architecture. You must meet these objectives:

  • the same application, pingme, is deployed twice; each Deployment is a single POD

  • you must be able to identify the local and the remote instance within K8s (through the shell or the minikube dashboard)

  • you must be able to check the status of either instance using /api/status

  • you must be able to get the local time from both instances - /api/time

  • you must be able to get the remote time from the local instance using /api/timefromhelper; it should NOT return the time prefixed with **

  • you must be able to get the remote time from the remote instance using /api/timefromhelper; it should return the time prefixed with **
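To get you started, here is a minimal sketch of one Deployment/Service pair for the local instance. The names, labels, image, port, and the HELPER_URL environment variable are all assumptions to adapt to your own pingme build; the remote pair would mirror this with its own service_id and a NodePort Service instead of a LoadBalancer.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pingme-local                       # assumed name
    spec:
      replicas: 1                              # each Deployment is a single POD
      selector:
        matchLabels:
          service_id: local-pingme             # assumed label, mirroring the echoserver naming
      template:
        metadata:
          labels:
            service_id: local-pingme
        spec:
          containers:
            - name: pingme
              image: pingme:latest             # assumed image name from your earlier builds
              ports:
                - containerPort: 8080          # assumed application port
              env:
                - name: HELPER_URL                      # assumed variable; however your pingme
                  value: http://pingme-remote-ep:8080   # locates its helper, point it at the remote Service
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: pingme-local-ep                    # assumed name
    spec:
      type: LoadBalancer                       # the local instance is exposed to the outside world
      selector:
        service_id: local-pingme
      ports:
        - port: 8080
          targetPort: 8080

The Service name matters here: PODs can resolve a Service by its name through cluster DNS, which is how the local instance can find the remote one without hard-coding a Cluster IP.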