Build a Service Application on Kubernetes
At the end of this module, you should be able to:
- Run a web server application on Kubernetes (minikube).
- Understand service networking on k8s.
- Run replicas of an application.
- Access and create logs on a Pod.
Run and Access Your Service Applications in a Web Browser
Kubernetes (k8s) has extensive support for running service applications such as web servers. k8s takes care of scaling and failover for your application and provides deployment patterns, automated restarts, load balancing, and dynamic scaling, among other features.
Build and Access the Application via the Internet (Dockerfile)
In this step, you will:
- Set up an apache web server using a Dockerfile.
- Build your web server on Docker's Ubuntu image.
You can reference the previous module for a Dockerfile refresher.
- Create an empty directory and a Dockerfile using your preferred text editor, e.g. vim:
```dockerfile
FROM ubuntu
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y apache2
RUN echo '<html><body>Hello World</body></html>' > /var/www/html/index.html
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN service apache2 restart
EXPOSE 80
CMD ["apachectl", "-DFOREGROUND"]
```
- Build and run your image on a public port:
```shell
$ docker login
$ docker build -t <your-dockerhub-username>/<given-image-name>:<tag> .
 ---> b9818132e617
Successfully built b9818132e617
Successfully tagged <your-dockerhub-username>/<given-image-name>:<tag>
$ docker image ls
$ docker run -p 80:80 <image_id>
```
`ARG DEBIAN_FRONTEND=noninteractive` is there to prevent `apt-get` from prompting the user during installs.
- If you have previously started a container using port 80, you will get an error stating the port is already allocated. In that case, stop the running containers by:
```shell
$ docker ps
$ docker stop <id>
```
- Lastly, access your web server by running:
```shell
$ curl http://localhost
<html><body>Hello World</body></html>
```
Step 1 - Working with Kubernetes Pods
You can now run the service on minikube following the steps listed in Module 1, and return to this module to expose the pod on a public port by creating a service.
- Run the image on the cluster and expose the running pod on port 80:
```shell
$ kubectl run <pod_name> --image=<your_username>/<image_name>:<tag> --port=80
pod/<pod_name> created
$ kubectl expose pod <pod_name> --type=LoadBalancer --port=80
service/<pod_name> exposed
```
- Get the cluster’s URL:
```shell
$ minikube service <pod_name> --url
http://192.168.49.2:31688   # This is sample output, yours may vary
$ curl <service_url>
<html><body>Hello World</body></html>
```
You can also open this URL in a web browser.
- To find out more information about the Pods and the Service, check out the cheat sheet.
- The `--port=80` parameter in the command `kubectl run <pod_name>` is the port that the service in the container is running on (Apache listens on port 80 by default). In contrast, in Module 1 the application did not listen on any port.
- The `--port=80` parameter in the command `kubectl expose pod <pod_name>` is the port that the Kubernetes Service will listen on. A Kubernetes Service fronts the Pod (or Pods) with a stable IP, DNS name, and port. It also load-balances traffic across the Pods.
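The Service created by `kubectl expose` above can also be expressed declaratively. A minimal sketch, assuming the Pod was created with `kubectl run web` (which labels the Pod `run=web` by default; the name `web` here is a stand-in for your `<pod_name>`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # stand-in name for <pod_name>
spec:
  type: LoadBalancer
  selector:
    run: web               # must match the Pod's label; kubectl run sets run=<pod_name>
  ports:
    - port: 80             # port the Service listens on
      targetPort: 80       # port the container listens on (Apache)
```

Applying this manifest with `kubectl apply -f` produces the same result as the imperative `kubectl expose` command.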
Create a Deployment
In the previous section, we packaged our application as a container and ran it as a Pod exposed through a Service. While you could run bare Pods as-is on Kubernetes, this is not very useful. Bare Pods don't give you any of the benefits associated with cloud-native applications: self-healing, scaling (replication), easy updates and rollbacks. In practice, you will most often wrap Pods inside a Deployment object. Under the hood, Deployments use another object called a ReplicaSet. ReplicaSets are responsible for self-healing and scaling, while Deployments manage ReplicaSets and bring rollouts and rollbacks.
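The Deployment-ReplicaSet relationship above can be sketched as a manifest. A minimal example; the name `web` and its labels are hypothetical, and the image placeholder matches the earlier build step:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical deployment name
spec:
  replicas: 1                    # desired Pod count, enforced by the ReplicaSet
  selector:
    matchLabels:
      app: web                   # Deployment manages Pods carrying this label
  template:                      # Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: <your_username>/<image_name>:<tag>
          ports:
            - containerPort: 80  # Apache listens here
```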
Let's now create a Deployment with a single Pod replica to see this in action. We will then scale up this Deployment from 1 to 5 replicas.
- Create a Deployment using the Docker image from the previous section:
```shell
$ kubectl create deployment <deployment_name> --image=<your_username>/<image_name>:<tag>
deployment.apps/<deployment_name> created
```
- List and describe your Pods and Deployments, e.g. if you specified "vol2" as `<deployment_name>`:
```shell
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
vol2-779c5f4b96-7vg8q   1/1     Running   0          4s
$ kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
vol2   1/1     1            1           105s
$ kubectl describe deployment <deployment_name>
Name:                   vol2
Namespace:              default
CreationTimestamp:      Tue, 26 Jul 2022 11:39:05 +0000
Labels:                 app=vol2
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=vol2
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
<...>
```
After the initial rollout, we have one replica of the app running. We will now perform a scaling operation.
- Run the following command to scale up to 5 and verify the operation:
```shell
$ kubectl scale --current-replicas=1 --replicas=5 deployment <deployment_name>
deployment.apps/<deployment_name> scaled
$ kubectl get deployment <deployment_name>
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
vol2   5/5     5            5           56m
```
`--current-replicas` is a safeguard: the scaling operation is performed only if the actual number of replicas matches the value it specifies.
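Equivalently, scaling can be done declaratively: if you manage the Deployment through a manifest like the one sketched earlier, the imperative `kubectl scale` command is a shortcut for editing one field and re-applying the file:

```yaml
spec:
  replicas: 5   # desired Pod count; the ReplicaSet adds or removes Pods to match
```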
- Expose the Deployment on port 80 through a Service:
```shell
$ kubectl expose deployment <deployment_name> --type=LoadBalancer --port=80
service/<deployment_name> exposed
```
- Get your Deployment’s URL:
```shell
$ minikube service <deployment_name> --url
$ curl <service_url>
<html><body>Hello World</body></html>
```
- You can verify that the Service distributes requests across all replicated Pods by changing the contents of the page served by one Pod:
```shell
$ kubectl get pods
$ kubectl exec -it <chosen_pod_name> -- /bin/bash
# cd /var/www/html
# echo "<html><body>Hello 2nd World</body></html>" > ./index.html
# exit
$ minikube service <deployment_name> --url
$ curl <service_url>
```
Note: run the `curl` command as many times as the number of replicas you created; successive requests may be served by different Pods.
- Shut down minikube after you have practiced to your satisfaction:
```shell
$ minikube stop
```
Module 3: Setting up Infrastructure as Code (IaC) in Kubernetes