Tutorial Highlights & Transcript

  • 00:00 - What is a Kubernetes Service and When Do We Need It?

• My name is Muhammad Farrukh, and today I am presenting Kubernetes Services. I will give a brief overview of the topic, and I hope that by the end of this presentation you will have a good understanding of Kubernetes Services and will be able to use them in practice.

  What is a Kubernetes Service, and when do we need it? What different kinds of Kubernetes Service components are there? We will discuss these questions, I will give a general overview of Services, and we will finish with a short demo.

  Starting from the pod: as we know, the pod is the basic component, the basic unit of Kubernetes, and each pod has its own IP address. The issue is that whenever a pod fails or is destroyed, its replacement gets a new IP, and it is very difficult for the client or for the team to keep track of those new IPs. Kubernetes provides a solution with Services. The advantage a Service gives us is a stable IP and load-balancing capability. The client does not need to remember the IP addresses of individual pods; to use the Service, it just needs to remember one single IP.

  • 01:47 - ClusterIP Service

• I'm going to present the ClusterIP Service. ClusterIP is the default type of Kubernetes Service: if we do not specify any type in the definition file, Kubernetes assumes that we are going to use ClusterIP.
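  The default described above can be seen in a minimal Service definition file; the name, label, and ports here are placeholders, not values from the demo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  # No "type" field is specified, so Kubernetes
  # defaults this Service to type: ClusterIP.
  selector:
    app: my-app           # pods carrying this label become endpoints
  ports:
    - port: 80            # port the Service listens on
      targetPort: 8080    # container port traffic is forwarded to
```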

  • 02:10 - Basic Understanding of Traffic Reachability

• The question arises of how the Service attaches to the pods: when does it attach, and to which pods? For example, on the left-hand side of the presentation slide I have a pod with two containers, exposing two ports, 3000 and 9000. If I create a Service, how will this pod be added as an endpoint of my Service? Kubernetes gives us this ability through the selectors and labels we have. As you can see on the right side of the slide, in the Service we use a selector to select the pods by their labels. The other piece is the ports that we are exposing.
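  A sketch of how the labels on the pod line up with the selector in the Service, using the two container ports (3000 and 9000) from the slide; the names and images are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app              # the Service's selector matches this label
spec:
  containers:
    - name: app
      image: my-app:1.0      # hypothetical image
      ports:
        - containerPort: 3000
    - name: sidecar
      image: my-sidecar:1.0  # hypothetical image
      ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app              # selects pods carrying this label
  ports:
    - name: app
      port: 3000
      targetPort: 3000       # first container port
    - name: sidecar
      port: 9000
      targetPort: 9000       # second container port
```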

  • 03:16 - How External Traffic will Reach the Pod

• Let's suppose, for the sake of discussion, that we have a microservice exposed to the browser, and a client is going to access that service. I'm using an Ingress here, but we will discuss Ingress in some other demo. Today we are going to discuss internal Services and how the traffic moves from the ClusterIP Service to the pods. Say the traffic has reached the ClusterIP Service on port 3200 at a specific IP, 10.128.8.64. Our purpose is to look at how this Service forwards the traffic to the pods.

  • 04:07 - How Traffic will Reach the Pod

• As I told you in the previous slides, a Service uses two mechanisms: one to select the pods and one to forward the traffic. The Service selects the pods based on their labels. For example, I have a Service, and I am using a selector with labels; you can see in the example that the pods also have a matching label attached, app: microservice-one. The Service registers those pods as endpoints and forwards traffic to the target port I have defined. In the definition file, you can see that the target port is 3000. So the Service selects the pods on the basis of the selector and forwards traffic to the target port.
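  The two mechanisms above can be sketched in one Service definition, using the port numbers mentioned in the talk (3200 on the Service, 3000 on the container); the Service name is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: microservice-one
spec:
  selector:
    app: microservice-one   # mechanism 1: pods with this label are registered as endpoints
  ports:
    - port: 3200            # port the Service listens on (ClusterIP:3200)
      targetPort: 3000      # mechanism 2: traffic is forwarded to this container port
```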

  • 05:12 - Service - Load Balancer

• We do not use NodePort in production or development, because NodePort exposes the node directly to the client, and we do not want this, so we use a LoadBalancer. The LoadBalancer adds an extra layer: client traffic arrives at the LoadBalancer, which then forwards it to the Service. When we create a LoadBalancer Service, the NodePort and the ClusterIP are created dynamically underneath it.
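  A minimal sketch of the LoadBalancer Service described above (names and ports are placeholders); setting type: LoadBalancer is all that is needed, since Kubernetes allocates the underlying NodePort and ClusterIP automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer
spec:
  type: LoadBalancer   # a NodePort and a ClusterIP are allocated automatically
  selector:
    app: my-app        # pods with this label receive the traffic
  ports:
    - port: 80         # port the external load balancer listens on
      targetPort: 3000 # container port the traffic is forwarded to
```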

• 06:03 - Demo

• I have taken a super simple hello-world application, and this is my deployment. You can see that I have just one replica for the sake of this demo, and I have added labels to the pods. My Service will use its selector to match these labels and will register the matching pods as endpoints. This is my simple deployment definition, and this is the Service I have created. I am using a LoadBalancer, and you can see that I am using a selector here: on the basis of these labels, the replicas (the pods) are added to this Service. I am assigning port 80 to the LoadBalancer, and the target port maps to the container, so there are two ports. I have already applied the deployment and Service files. You can see that my pod is running, and this is the Service I get: this is the external IP to which the client will send its traffic. You can see that it has dynamically created the NodePort alongside the LoadBalancer, and this is how the traffic is accessible. You can also see from here the page the client reaches when it sends traffic.
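  The demo manifests were only shown on-screen; the following is a reconstruction of what they likely contain, with the deployment name, label, and image chosen as assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1                       # one replica, as in the demo
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world            # the Service's selector matches this label
    spec:
      containers:
        - name: hello-world
          image: nginxdemos/hello   # hypothetical hello-world image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer   # NodePort and ClusterIP are created automatically
  selector:
    app: hello-world   # registers the deployment's pods as endpoints
  ports:
    - port: 80         # port on the external load balancer
      targetPort: 80   # container port
```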

Muhammad Farrukh

Site Reliability Engineer

nClouds

Muhammad is a Site Reliability Engineer at nClouds.
