Introduction to K3s MicroCloud

Tutorial Highlights & Transcript

  • 00:00 - Beginning of Video

    • Hello everyone. My name is Mayur Raiturkar. I'm working as a DevOps Team Lead on the Falcon team, and today I'm going to be talking about K3s MicroCloud.

  • 00:23 - What is K3s MicroCloud?

    • Okay, so what is K3s MicroCloud? This is a project that I've built in my free time: a hardware-based Kubernetes cluster built using ARM64-based computers. "MicroCloud" is just a name that I've given it. K3s is a Kubernetes distribution built by Rancher. It's a lightweight, certified Kubernetes distribution designed to run production-grade workloads on resource-constrained and edge devices; in other words, it's designed for low-end hardware. Why are we using K3s in this project? Because it's extremely lightweight: it's a single binary of less than 100 MB, and that one binary contains all the sub-components required to run a Kubernetes cluster. It's also specifically optimized for ARMv7 and ARM64 architectures. It's an open-source project, and you can read more about it on this link.
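The video doesn't show the install step, but a K3s cluster like this is typically bootstrapped with Rancher's one-line install script. The sketch below builds the worker join command; the server address and token are placeholders, not values from the cluster in the video.

```shell
#!/bin/sh
# Minimal sketch of bootstrapping K3s (placeholders, not real values).
set -eu

# On the master node, the upstream installer fetches the single binary
# and starts the server:
#   curl -sfL https://get.k3s.io | sh -
#
# The server writes a join token to /var/lib/rancher/k3s/server/node-token.
# Each worker joins by pointing the same installer at the master:
K3S_URL="https://192.168.0.10:6443"   # placeholder master address
K3S_TOKEN="PLACEHOLDER_NODE_TOKEN"    # placeholder join token

join_cmd="curl -sfL https://get.k3s.io | K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -"
echo "$join_cmd"
```

Running this prints the join command you would execute on each worker node; nothing here touches a live cluster.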

  • 01:40 - Components of K3s

    • These are the sub-components of K3s. It's a single binary that includes all the components required to run a production-quality cluster. It's got containerd as the container runtime, Flannel as the CNI driver, CoreDNS for DNS resolution, and metrics-server for metrics. It's also got Traefik as the Ingress controller. All these components are packaged in a single binary, so we just need to install and deploy it on the hardware, and we get a Kubernetes cluster out of it.
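As a rough sketch of what those bundled components look like on a running node: CoreDNS, Traefik, metrics-server, and the local-path provisioner show up as pods in the kube-system namespace, while containerd and Flannel run inside the k3s process itself. The listing below is illustrative sample output, not captured from the cluster in the video.

```shell
#!/bin/sh
# Illustrative sample of `kubectl -n kube-system get pods` on a fresh K3s
# node; the pod-name suffixes are made up. containerd and Flannel do not
# appear because they run inside the k3s binary, not as pods.
set -eu
sample='NAME                                      READY   STATUS
coredns-85cb69466-x2f8p                   1/1     Running
local-path-provisioner-64ffb68fd-9hqzb    1/1     Running
metrics-server-9cf544f65-7tvxw            1/1     Running
traefik-786ff64748-mlb2k                  1/1     Running'

# Count how many of the bundled add-ons are present in the sample:
n=$(printf '%s\n' "$sample" | grep -cE '^(coredns|metrics-server|traefik|local-path-provisioner)')
echo "$n"   # 4
```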

  • 02:21 - Using Android media box hardware to run a 5-node Kubernetes cluster

    • Okay, so let's talk about the hardware itself. This is the cluster: a five-node cluster, built back in 2017 as part of a university project where I had to run Apache Hadoop and MapReduce workloads. We were trying to run some big data benchmarking workloads, and we did manage to run them successfully; I was able to benchmark the whole solution and compare it with x86-based platforms. The hardware was lying around afterwards, so I thought I should use it to run a Kubernetes cluster. It's a five-node cluster with 20 cores of ARM64 CPUs, and the whole thing was built for under $150. These boards can be a replacement for a Raspberry Pi; at the time I built this project, building a five-node cluster with Raspberry Pis would have cost me around $500. I wanted something much cheaper, so I got these at a very low price on the internet. The problem with this hardware is that these are Android media boxes, so they don't have official Linux support. But there are some community images, which I found on the internet, and I managed to run Debian 11 on each of these nodes. So it's got Debian 11, which is the latest version of Debian, and we're also using the mainline Linux kernel on this hardware.

  • 04:24 - Network architecture including Cloudflare

    • So currently it's running Kubernetes, and this is the network architecture. It's a very simple architecture right now; eventually I'm trying to upgrade it to something better to achieve high availability. Cloudflare is the entry point: it's the DNS and CDN provider, and it terminates all the traffic, so all incoming traffic first hits Cloudflare. Then I have an OpenWrt-based router, a normal Linux-based router, which is also running on an ARM64-based microcomputer. We have multiple ISPs, and the router does a kind of hybrid high-availability setup across them. Traffic hits OpenWrt, then goes through the unmanaged switch that we saw earlier in the picture, and then reaches the nodes. Right now there are four nodes in this cluster: this is the master node, and then there are three slave nodes.
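The video doesn't show the router's configuration, but as an illustration of how a setup like this typically forwards web traffic from the WAN side into the cluster, an OpenWrt firewall rule in /etc/config/firewall might look like the following. The destination IP is a placeholder for the cluster's ingress node, not a real address from the video.

```
# Illustrative OpenWrt /etc/config/firewall fragment (placeholder values).
config redirect
        option name 'https-to-cluster'
        option src 'wan'
        option src_dport '443'
        option proto 'tcp'
        option dest 'lan'
        option dest_ip '192.168.1.20'
        option dest_port '443'
        option target 'DNAT'
```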

  • 05:34 - Demo tutorial on using K3s with Grafana and Prometheus

    • So that's all for the theory part; now I'll quickly show this cluster in action. It's running a couple of workloads. Let me just SSH into the master node to see what's going on there. So this is k3s-master. If we run kubectl get nodes -o wide, we can see that right now we have four nodes: the master and then three slave nodes. These are running Kubernetes version 1.22.5, they're all Debian 11 nodes running the 5.9 Linux kernel, and we can see they're using containerd as the container runtime instead of Docker. This all comes as part of the single binary itself. Now let's look at the pods that are running on this cluster.
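The node listing in the demo looks roughly like the sample below; the exact ages and name suffixes are approximations, not a verbatim capture. The small check at the end counts how many nodes are in the Ready state.

```shell
#!/bin/sh
# Illustrative `kubectl get nodes` output for this cluster (approximate,
# not a verbatim capture), plus a check that counts Ready nodes.
set -eu
nodes='NAME         STATUS   ROLES                  AGE   VERSION
k3s-master   Ready    control-plane,master   19d   v1.22.5+k3s1
k3s-slave1   Ready    <none>                 19d   v1.22.5+k3s1
k3s-slave2   Ready    <none>                 19d   v1.22.5+k3s1
k3s-slave3   Ready    <none>                 19d   v1.22.5+k3s1'

# Skip the header row, keep rows whose STATUS column is "Ready":
ready=$(printf '%s\n' "$nodes" | awk 'NR > 1 && $2 == "Ready"' | wc -l | tr -d ' ')
echo "$ready"   # 4
```

On the live cluster the equivalent would simply be: kubectl get nodes -o wide.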

      Okay, so we can see here that there are some applications already running on this cluster, and they're distributed across the different nodes. This is the master node, and then we have slave one, slave two, and slave three. This cluster has been running for almost 20 days now; if you see here, it's been running for about 18 to 19 days, and it appears to be stable. So far I haven't seen any problems. There were some restarts here and there, but it's been stable, and all the containers are running, as we can see here. Now, for demonstration purposes I have deployed Grafana and Prometheus. This is Grafana, which is part of the Prometheus stack that I've deployed, and these are the predefined dashboards. Let's see if we can get some data out of them. This is the node exporter, which runs on each of the nodes. We can see that the master node is at almost 80% memory usage, and if I select one of the slave nodes, its memory usage is much lower, so there's room for additional applications to be deployed on this cluster. That's the node exporter data; we can also quickly look at some of the other dashboards. This one is pod networking, where we can see all the traffic flowing through the different pods. And this is the MariaDB database that's running here. Similarly, this is Prometheus, which is running on this cluster; as we can see in the targets view, everything appears healthy. So far there are no unhealthy targets, which suggests the cluster is running stable; I'm not seeing any issues.
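The video doesn't say how the monitoring stack was installed; a common way to deploy Grafana, Prometheus, and Alertmanager together, and a plausible sketch here, is the community kube-prometheus-stack Helm chart. The release name and namespace below are illustrative choices, not taken from the video.

```shell
#!/bin/sh
# Hedged sketch: installing the kube-prometheus-stack Helm chart.
# "monitoring" (release and namespace) is an illustrative choice.
set -eu
CHART_REPO="https://prometheus-community.github.io/helm-charts"
CHART="prometheus-community/kube-prometheus-stack"

# These require helm and a reachable cluster, so they are shown as
# comments rather than executed here:
#   helm repo add prometheus-community "$CHART_REPO"
#   helm repo update
#   helm install monitoring "$CHART" --namespace monitoring --create-namespace
echo "$CHART"
```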
And then we have Alertmanager and everything else deployed here as part of the kube-prometheus stack. Another application I've deployed is WordPress, just to demonstrate the capability of this solution. It runs a full-fledged WordPress website with MariaDB as the database, and we can see it's actually running in production; this is just a sample deployment right now. Similarly, this is another application that I've deployed, called Whoogle. It's like a cleaner version of Google: we can search for something here, and it gives us the results.
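A WordPress deployment like the one in the demo can be sketched as a standard Kubernetes manifest. Everything below is illustrative: the names, the Secret, and the assumption that MariaDB is reachable through a Service called "mariadb" are not taken from the video.

```yaml
# Hypothetical sketch of a WordPress Deployment backed by MariaDB.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:5.9-apache    # official multi-arch image, runs on ARM64
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: mariadb             # assumes a MariaDB Service named "mariadb"
            - name: WORDPRESS_DB_USER
              value: wordpress
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-credentials   # illustrative Secret name
                  key: password
```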

  • 10:16 - Using KubeSail to remotely manage a Kubernetes cluster

    • So there is another tool that I have deployed, called KubeSail. KubeSail is a SaaS solution that allows us to remotely manage a Kubernetes cluster, and it's specifically designed for embedded hardware such as the Raspberry Pi. This is the hardware they have designed themselves, the PiBox; it was on Kickstarter as well. It's built around a Raspberry Pi Compute Module. But KubeSail can be used with any cluster; it doesn't need to be their specific hardware, it doesn't need to be a PiBox, it can be deployed anywhere. Right now I've connected my cluster to this setup. As we can see here, this is the cluster, and we can see all the hardware, all the namespaces, and so on. We can deploy applications from this platform as well.

      For example, these are templates which are offered out of the box, with one-click deployment. We just need to select one of the applications; let's say I want to deploy a Minecraft server. You can just click on it and deploy it directly from here. I won't deploy it right now because it's going to take some time, but I just wanted to show how this works. These are some of the official templates that are available. And KubeSail can be used with any cluster: it doesn't have to be a self-hosted cluster, it can be used with EKS as well. There are no constraints; we can use it with any platform out there, as long as it's Kubernetes.

Mayur Raiturkar

DevOps Team Lead


Mayur is a DevOps Team Lead on the Falcon team at nClouds, an AWS Certified Solutions Architect - Associate, and a Certified Kubernetes Security Specialist.
