Take me to the step-by-step guide, please. Part 1
So, Covid-19 hit the world, and what was supposed to be our yearly vacation to Spain got ruined. We are a couple of guys who go to the Canary Islands every year: rent a cabriolet and a hotel room high up with a sea view, hang out, eat good food, and enjoy weather like the best of a Swedish summer. We are all good friends and all developers in different fields, so we usually talk a lot about code and technical stuff when we hang out. This year we hit quarantine and were put inside a hotel closed to the public, not allowed to leave.
Not a good vacation. Instead we spent a lot of time together discussing different projects that would be fun to play with, in between discussions about whether we would get back home or not.
Job-wise, I had gotten a bit tired of being a consultant, thrown around and pushed to the limit on most projects I had worked on over the last couple of years. I felt it was time for a change and started a new position at one of the largest travel agencies in the world. I have not written anything new for a while, but I have still been busy. I worked even more with micro-services and performance, which I enjoyed, and I spent a lot of time creating an architecture for moving everything to AWS so we could utilize it fully. After six months I got a good offer, so I took it and am now starting a new journey at a big financial advisor, working on a new product quite like Benify.
So, what does that have to do with Raspberry Pi, Traefik, and Docker, you might ask? With Covid-19 hitting the world, I have spent quite some time at home after returning from quarantine. I have family in the risk group, so what started as a crazy idea in Spain did not seem too crazy when you have to stay at home for a couple of weeks; even when I can go out, I do not, since I want to be able to see my family. I got bored quite fast and wanted something fun to spend time on. So what was the idea?
I have worked a lot with micro-services and Docker. I also like IoT and have some Raspberry Pis lying around. The Pis are great, but when you try to do too much you run into the Raspberry Pi's low amount of RAM. It is fine if you just use one to host a sensor and publish readings over a queue with techniques like MQTT; I have a Bosch BME680 sensor doing exactly that to keep track of indoor air quality. But when the Raspberry Pi 4 hit the market with up to 4 GB of RAM, things started to get interesting. With 4 GB of RAM we are in the ballpark for a mini cluster. It is cheap and a fun project if you have the time and want something other than the normal cloud setup. The Raspberry Pi Foundation has even hosted its own website, serving 10k+ users, on a cluster of Pis.
Talking to one of my best friends, Mattias, in quarantine in Spain, I mentioned I wanted to buy some Raspberry Pi 4s and hook them up with Docker in swarm mode. Pi clusters are not new; they have been a thing since the first Raspberry Pi hit the market, but now, with more RAM and some serious compute power, I find them more interesting. After discussing the cluster we got to availability zones (AZs) to make it more robust. With a cluster, especially at home, a big vulnerability is a power outage or your network going down. An AZ deals with this: another location where you can spin up your environment if one location goes down. What about storage? Persistence?
In a Docker swarm cluster you have manager and worker nodes; the managers handle the cluster, and I will also use them for workloads that need persistence or are not stateless by nature. For example, I have an EdgeRouter X and a Ubiquiti UniFi AP AC Lite that require a controller, which you can host in Docker; but since all its config and settings are persisted and require state, you cannot run multiple instances of it. In this case we will just put one instance on a manager node. Stateless applications, services, and functions that use external data sources can be put on worker nodes and be replicated across multiple swarm nodes.
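To make that split concrete, here is a minimal sketch of how it could look in a swarm stack file. The service names, the `jacobalberty/unifi` image choice, and the volume name are my assumptions for illustration, not settled choices for this series:

```yaml
version: "3.8"

services:
  # Stateful: pin the UniFi controller to a manager node, single instance
  unifi-controller:
    image: jacobalberty/unifi   # assumed community image for the UniFi controller
    volumes:
      - unifi-data:/unifi
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager

  # Stateless: a demo service replicated across worker nodes
  whoami:
    image: traefik/whoami
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker

volumes:
  unifi-data:
```

Deployed with `docker stack deploy -c stack.yml home`, the placement constraints are what keep the stateful controller on a manager while the stateless service spreads over the workers.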
If the manager node becomes unavailable, through hardware failure or the container crashing, the container will be spun up on another node with no access to the data, since all data is stored inside the container, and the controller would lose all its settings. To handle this we are going to use GlusterFS to replicate the filesystem so that every node can reach the data. That way we can spin up a container on another node and still have access to the same state the manager node had before it died. Hence the 5 Kingston USB sticks: GlusterFS uses them as the disks we replicate data on and attach to the containers.
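As a rough setup sketch of what the GlusterFS side involves (hostnames `node1`..`node3`, the `/dev/sda1` device, and the `swarm-data` volume name are placeholders; the real walkthrough comes later in the series):

```shell
# On every node: format the USB stick and mount it as a Gluster brick
sudo mkfs.xfs /dev/sda1              # device name assumed; check with lsblk
sudo mkdir -p /gluster/brick
sudo mount /dev/sda1 /gluster/brick

# On one node: probe the peers and create a 3-way replicated volume
sudo gluster peer probe node2        # hostnames are placeholders
sudo gluster peer probe node3
sudo gluster volume create swarm-data replica 3 \
  node1:/gluster/brick node2:/gluster/brick node3:/gluster/brick
sudo gluster volume start swarm-data

# On every node: mount the replicated volume where containers bind their data
sudo mkdir -p /mnt/swarm-data
sudo mount -t glusterfs localhost:/swarm-data /mnt/swarm-data
```

With the volume mounted at the same path on every node, a container rescheduled to another node can bind-mount `/mnt/swarm-data` and find the same state there.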
So, what was a crazy idea in Spain is now in the making! What are we going to be looking at? We are going to build a Raspberry Pi cluster from the following:
- 3 Raspberry Pi 4 (4 GB RAM)
- 1 Raspberry Pi 3 (1 GB RAM)
- 1 Raspberry Pi 2 (1 GB RAM)
- 1 Kingston DataTraveler G4 64 GB (USB 3.0)
- 4 Kingston DataTraveler G3 64 GB (USB 3.0) (they only had 4 in stock)
- 2 Raspberry Pi cluster racks
- 3 RPi chargers (USB-C)
- 2 RPi chargers (micro USB)
- 5 micro SD cards, 32 GB
- 1 Netgear GS108GE v4 ProSafe 8-port gigabit switch (unmanaged)
- 6 CAT 7 Ethernet cables
I am currently running:
- EdgeRouter X connected to fiber (250/250 Mbit), running gigabit full duplex on its internal network
- Ubiquiti UniFi AP AC Lite for wireless, with a controller to manage the hardware running in the cluster
What kind of workloads would I like to host on this swarm? Some examples that I am going to host:
- UniFi controller
- Home Assistant
- This blog, on the Ghost platform
- Traefik for load balancing
- Portainer for managing stacks in the swarm
- Databases for other projects: MongoDB and Redis in cluster mode*
- MQTT for internal sensors sending data
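As a small taste of how Traefik will route to one of these workloads, here is a hedged sketch of swarm service labels for the blog. The domain, router name, and image tag are placeholder assumptions; only Ghost's default port 2368 is a known fact:

```yaml
services:
  blog:
    image: ghost:4-alpine          # image tag assumed for illustration
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.blog.rule=Host(`blog.example.com`)        # placeholder domain
        - traefik.http.services.blog.loadbalancer.server.port=2368       # Ghost's default port
```

In swarm mode the labels go under `deploy.labels` (not the top-level `labels` key), since Traefik reads them from the service definition rather than the individual containers.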
If cooling is not a problem with the fans bundled with the racks, we will also take a look at overclocking the Pis to get that extra cream. :)
This will be the first post in a series on how to make all of this work. I have already tried it out on a small scale, but not fully.