For a long time I've been interested in
running a Kubernetes cluster on a stack
of inexpensive Raspberry Pis. Following
various tutorials on the web, I was
actually able to get Kubernetes
installed and working on a three-Pi cluster,
but the CPU and RAM requirements,
especially on the master node, just
overwhelmed my poor Pi. This caused
very poor performance when I was trying
to do Kubernetes tasks like deployments.
It also made an in-place upgrade of the
Kubernetes version impossible. As
a result, I was very excited to see the
k3s project from Rancher Labs. k3s is
billed as a lightweight Kubernetes for
running in resource-constrained
environments. It's also optimized for the
ARM platform. This makes running a
Raspberry Pi Kubernetes cluster much
more feasible. In fact, we're going to
build one in this video.
Let's gather the materials we'll need to
build our cluster. We'll need at least
one Raspberry Pi.
We'll need an Ethernet cable for each Pi. And
we'll need a switch or a router to
network them all together. We will be
installing k3s from the internet so we
need to be able to get to the Internet
through the router.
Our cluster will use three Pis. The
first one we're going to call kmaster.
It'll be the master node, and we're going to
assign it a static IP of 192.168.0.50.
The first worker node we'll call knode1,
and we'll assign it a static IP of 192.168.0.51.
And finally, the third Pi, which will
be our second worker node, we'll call
knode2 and assign an IP of 192.168.0.52.
I'd like to access these nodes by name
instead of using their IP addresses over
and over. An easy way to do that is to
put the IP addresses and hostnames in
/etc/hosts on my PC.
We can do that with a simple echo
command filtered through the tee
command using the append option, -a.
I'm going to put the address for kmaster
into /etc/hosts. Then I'll just do the same
thing for knode1 and knode2.
When we dump the file out, you can see they are now in /etc/hosts, and I can now access them by name.
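With my addresses, that looks like the following (tee needs sudo to write to /etc/hosts):

```shell
# Append one hosts entry per node to /etc/hosts
echo "192.168.0.50 kmaster" | sudo tee -a /etc/hosts
echo "192.168.0.51 knode1" | sudo tee -a /etc/hosts
echo "192.168.0.52 knode2" | sudo tee -a /etc/hosts

# Dump the file to confirm the entries are there
cat /etc/hosts
```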
Now we're ready to install the
master node. The first step of which is
installing Raspbian. I have a video that
shows how to do this in detail that I'll
link here and in the description. I will
go ahead and show the install here but
I'm gonna do it quickly without full
explanations. If you want to skip this
section because you already have your Pi
set up,
click the "install k3s on the master" link
in the description.
So first, let's download Raspbian Lite
from the Raspberry Pi website.
Once we've downloaded it, it's a good idea
to check the SHA-256 sum to make sure
you don't have a corrupted download.
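That's a one-liner; compare the printed sum against the one published on the download page (the filename below is an example -- use whatever you actually downloaded):

```shell
# Print the SHA-256 sum of the downloaded zip
sha256sum raspbian-lite.zip
```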
Now let's unzip the zip file. This
will give us an image file that we can
write directly to our SD card.
I've put in my SD card, and
I'm just gonna check dmesg here to
see which device I need to deal with.
It looks like it's /dev/sde.
So writing to the SD card is a simple
matter of using dd. We use the image we
just downloaded and extracted as the
input file, and /dev/sde as the output
file. But make sure you pick the
right device! Check in dmesg that you have the right one, because dd will overwrite whatever you point it at.
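Roughly, the two steps look like this (the image filename is an example, and /dev/sde is the device on my machine -- triple-check yours in dmesg first, because dd overwrites without asking):

```shell
# Extract the image from the zip archive
unzip raspbian-lite.zip

# Write the image straight to the SD card
sudo dd if=raspbian-lite.img of=/dev/sde bs=4M status=progress
```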
Once that's done, the first thing to do is
enable the SSH server. I'm going to pop
out my SD card and put it back in, and
this will remount the image
that I just wrote.
Then we'll touch, in my case, /media/carpie/boot/ssh.
This just
creates an empty file named ssh on the boot
partition, which will cause Raspbian
to start an SSH server on startup.
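With the boot partition mounted at /media/carpie/boot as in my case, that's just:

```shell
# An empty file named "ssh" on the boot partition tells Raspbian
# to start the SSH server on boot (mount point is from my machine)
touch /media/carpie/boot/ssh
```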
Next, while we have the SD card mounted,
we want to go ahead and set the
static IP. We can do that by editing
/etc/dhcpcd.conf on the rootfs partition.
At the end, we'll add a stanza for
interface eth0 and set it to a static IP.
This is the master, so we're going to use
192.168.0.50 with a netmask of 24 bits.
We'll set static routers to my router's
address, 192.168.0.1 in this case.
Finally, we'll set the static domain name
servers, which is also just my router's
address in this example.
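The appended stanza in /etc/dhcpcd.conf ends up looking like this for the master (the addresses match my network; adjust them for yours):

```
interface eth0
static ip_address=192.168.0.50/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
```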
Let's sync our file systems, unmount
both partitions (/dev/sde1 and /dev/sde2 in my
case), and pop out our SD card. Now we
can use it to boot the Pi.
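On my machine, those commands look like:

```shell
# Flush pending writes to the SD card, then unmount both partitions.
# (/dev/sde is the device on my machine -- check dmesg for yours.)
sync
sudo umount /dev/sde1 /dev/sde2
```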
Once the Pi is booted, we should be able
to ssh to it by name.
The default password is raspberry.
Everyone knows this, so our first order
of business is to get that changed.
We'll use the raspi-config tool to do a
little setup.
First up, changing the password.
Then we want to set our hostname just so
that it matches the name that we gave
the device. In this case, it's kmaster.
I'm going to set some localization
options because I'm in the US; I'm
gonna set my locale to en_US.UTF-8.
The last thing I'm gonna set is the
memory split. I don't expect to do many
GPU-intensive things, so I'm just gonna
set the memory split down to 16 MB
for the GPU. Then we'll just save, exit,
and reboot.
So Raspbian is installed and set up. Now
it's time to install k3s. The first thing
we want to do is ssh to our
kmaster Pi.
Once we do that, we're just going to
follow the instructions at the k3s site.
Rancher Labs has made this really, really
simple. We'll run this curl command,
which gets the installer and pipes it
through the shell, and then we just wait.
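For reference, the install command from the k3s site is a one-liner (run on the master Pi; it needs internet access and root):

```shell
# Download and run the k3s installer from Rancher Labs
curl -sfL https://get.k3s.io | sh -
```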
And it's done. So at this point if you're
just setting up a single-node Kubernetes
cluster you're finished.
Let's take a look, though, and see what we
have here. We're gonna run kubectl
on the master. We'll get the nodes,
and you can see there is our master node.
We're ready to go with a single node. Of
course we want to add a couple of worker
nodes. So let's go ahead and do that. So
in order for those nodes to join the
master as part of this cluster we need
to extract the join token that exists on
the master. All we need to do is cat it
out and then we'll save this token for
later use.
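On the master, the join token lives at a well-known path:

```shell
# Print the join token; copy this somewhere for the worker installs
sudo cat /var/lib/rancher/k3s/server/node-token
```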
Now we want to install the worker nodes.
The first step, again, is to install
Raspbian. We want to enable the SSH
server, set a static IP, and do our raspi-config
tasks. Installing Raspbian on the
worker nodes is exactly the same as
installing it on the master. The
only things you'll need to change are
the hostnames and IPs. So go ahead and install
Raspbian on your worker nodes, however
many you have, and then come back and
we'll install k3s on each node. OK,
let's install k3s on our first worker
node.
We'll ssh to knode1. We're gonna
run a very similar curl command. Again,
this comes from the Rancher Lab site.
Before we pass the command to our shell,
we're gonna set a couple of environment
variables. One is the k3s URL. We're going
to point that at our master node. Now the
worker node doesn't know the master by
name so we'll use the IP address here.
Also we're gonna pass in the k3s token
that we copied earlier. This is the
special token that allows the worker
node to join our cluster.
And we'll let that run and it's done.
It's that simple.
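For reference, the worker install is the same one-liner with two environment variables set. The token placeholder below stands in for the value we copied from the master:

```shell
# Join this Pi to the cluster as a worker. K3S_URL points at the
# master (by IP -- the worker doesn't know it by name), and
# K3S_TOKEN is the join token from the master's node-token file.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.50:6443 K3S_TOKEN=<token-from-master> sh -
```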
Let's go ahead and do the same on knode2.
We'll do that, if I can type the
right password.
And the third time's the charm!
We're gonna run exactly the same command
as we did on the first node.
And that's it. It's installed. We'll drop
back to our PC and check things out.
We'd like to be able to control our
cluster from our PC and not have to log
into the master node. So in order to do
that, we're going to put kubectl on
our computer. In order for kubectl
to access the cluster it needs some
configuration. That configuration exists
on the master node. So let's ssh
to the master.
We'll copy out that configuration file
from /etc/rancher/k3s/k3s.yaml. Just copy
that.
Back on our PC, we'll make a
directory for kubectl
called .kube. Then we'll edit a file
called .kube/config and paste in that
config. The only thing we need to change
is the server name,
which is going to be kmaster:6443.
And we can use kmaster here, remember,
because we set that hostname in our
/etc/hosts earlier. Let's add a little bit
of security to this file by chmod-ing
it to 600.
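One way to script those edits (the sed below assumes the copied k3s.yaml points at 127.0.0.1, which is the k3s default):

```shell
# Create the kubectl config directory on the PC
mkdir -p ~/.kube

# After pasting the master's /etc/rancher/k3s/k3s.yaml into
# ~/.kube/config, point the server entry at the master by name
sed -i 's/127.0.0.1/kmaster/' ~/.kube/config

# Restrict permissions so only our user can read the credentials
chmod 600 ~/.kube/config
```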
OK, now on this PC I don't currently
have the kubectl utility installed.
So I'm just going to follow the
directions at the Kubernetes site. I'll
link those in the description. I'm
running Linux Mint which is an Ubuntu
derivative so I will follow the
instructions to add a Debian
repository from which to install kubectl.
Once that's done I can use the standard
Ubuntu mechanism for installing. I'll do
an apt update and an apt install of kubectl.
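For reference, the commands from the kubectl install docs looked something like this at the time; the repository details may well have changed since, so check the current docs rather than copying these:

```shell
# Add the Kubernetes apt repository (details per the kubectl
# install docs at the time -- verify against the current docs)
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubectl itself
sudo apt update
sudo apt install -y kubectl
```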
OK, I should have kubectl on the PC.
Let's check out our cluster. We'll run
kubectl get nodes,
and we'll see we have our three nodes: our
master and two worker nodes.
Perfect. Now let's look at some of the
pods running within the Kubernetes
system itself. The way we do that is to
call kubectl get pods with the
--all-namespaces flag. You can see
there are several things running here in
the kube-system namespace. If we wanted to
look at our user pods, we would just run kubectl get pods.
And, as you can see, we don't currently
have any user pods running.
You may have noticed some of the pods
running in the system namespace are for
Traefik. Traefik is a reverse proxy and
load balancer. It's not part of
Kubernetes. Kubernetes kind of leaves
that part up to you to define. Rancher
Labs have done us a huge favor by
including Traefik already installed and
configured. If you're not familiar with
Traefik, or reverse proxies in general,
let's look at why something like this is
needed. Let's create a quick pod running
the popular nginx webserver. There's
no need to follow along here unless you
want to. I'm just demonstrating the need
for a proxy.
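Creating and watching the demo pod looks something like this (the pod name nginx is just for this demo; note that older kubectl versions create a Deployment rather than a bare pod with this command):

```shell
# Create a demo pod running the nginx image
kubectl run nginx --image=nginx

# Check its status; -o wide also shows which node it's scheduled on
kubectl get pods -o wide
```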
You see the pod is in ContainerCreating,
so it's not quite up yet. Let's give it a
little time to come up.
OK, now it's running.
It's running here, but it's running
inside the Kubernetes cluster. There's no
real way to get to it. We can see here, if
we do a get pod -o wide, that it is
running on knode1, but there's no good
way to get to the web server itself.
Now Kubernetes does have built-in
methods to allow us to get to the web
server. One of them is the expose method.
So let's expose this as a NodePort
service.
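The expose step and the service lookup, for reference (the pod name nginx comes from the demo pod we created):

```shell
# Expose the pod on a NodePort -- Kubernetes assigns a high port
kubectl expose pod nginx --type=NodePort --port=80

# Find the assigned port (32417 in my case; yours will differ)
kubectl get service nginx
```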
Once we've done that, we can look at the
service and see that it's exposed on
port 32417 of knode1.
So now we can go there directly in the
web browser.
And there we see the web server. But who
wants to have to type in a port number
to go to a web server? We would like to
use the standard port 80. That's what
our proxy will provide us. Traefik will
give us a single entry point for our
cluster. It'll let us use the standard port
80 and route our traffic to the
right place, in this case our web server,
no matter where it's running in the
system and no matter what port it's
really exposed on internally. Traefik
will also let us multiplex that port; we
can use hostnames and that kind of
thing to split traffic off. We'll
see how to use this feature in a future
video. That's all for this video. Thanks
for watching!
If you like this video and would like to see more,
please consider subscribing to my channel.
