You can implement this either with Rancher (assuming you've set up Rancher somewhere, e.g. on DigitalOcean) or with MicroK8s; the notes below use MicroK8s.
Stop and disable the multipath service
sudo systemctl stop multipathd.socket
sudo systemctl stop multipathd.service
sudo systemctl disable multipathd.socket
sudo systemctl disable multipathd.service
Prevent multipath from loading at boot
sudo nano /etc/multipath.conf
Add the following minimal config to explicitly blacklist all devices:
defaults {
user_friendly_names yes
find_multipaths no
}
blacklist {
devnode ".*"
}
Then rebuild the initramfs so it doesn't load multipath on boot:
echo "blacklist dm_multipath" | sudo tee /etc/modprobe.d/blacklist-multipath.conf
sudo update-initramfs -u
unblock in ufw: sudo ufw allow from 127.0.0.1 to any port 19001
in microk8s, enable:
- dashboard
- dns
- metallb:10.100.0.200-10.100.0.254
- ingress
- cert-manager
Apply metallb address pool: kubectl apply -f addresspool.yaml
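For reference, a minimal addresspool.yaml sketch, assuming MetalLB's metallb.io/v1beta1 CRDs (the pool and advertisement names are arbitrary):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool          # arbitrary name
  namespace: metallb-system
spec:
  addresses:
    - 10.100.0.200-10.100.0.254
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2            # announces the pool via ARP/NDP on the local L2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

Without an L2Advertisement, MetalLB allocates addresses but never announces them on the network.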
install cluster issuer for cert-manager
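A typical Let's Encrypt ClusterIssuer sketch — the name and email are placeholders, and "public" is the ingress class MicroK8s' ingress addon registers:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod              # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: public             # MicroK8s ingress class
```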
store kubectl config:
cd $HOME
mkdir .kube
cd .kube
microk8s config > config
alias kubectl and helm - add to ~/.bash_aliases or ~/.bashrc
alias kubectl='microk8s kubectl'
alias helm='microk8s helm'
if implementing Longhorn on MicroK8s, remember that you need to provide the kubelet path, because MicroK8s keeps it in a non-standard location; I've also mounted a separate drive at /mnt/longhorn for storage
follow main instructions from https://longhorn.io/docs/1.8.1/deploy/install/install-with-helm/ but basically:
microk8s helm3 repo add longhorn https://charts.longhorn.io
microk8s helm3 repo update
microk8s kubectl create namespace longhorn-system
# for testing use "/tmp/longhorn" as storage location
microk8s helm3 install longhorn longhorn/longhorn --namespace longhorn-system \
--set defaultSettings.defaultDataPath="/mnt/longhorn" \
--set csi.kubeletRootDir="/var/snap/microk8s/common/var/lib/kubelet"
read longhorn-aws-secret.yml to create the AWS backup secret and store it - this includes the URL for the target
install longhorn-ingress.yml - remember to create the secret
USER=<USERNAME_HERE>; PASSWORD=<PASSWORD_HERE>; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth
kubectl -n longhorn-system create secret generic basic-auth-longhorn --from-file=auth
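The openssl one-liner above produces the same apr1 (MD5-crypt) entry format as Apache's htpasswd -m, so it works without installing apache2-utils. A self-contained sketch (user and password are placeholders):

```shell
#!/bin/sh
# Build an htpasswd-style line without apache2-utils.
# The apr1 scheme is what htpasswd -m emits, so nginx basic auth accepts it.
USER=demo          # placeholder
PASSWORD=changeme  # placeholder
ENTRY="${USER}:$(printf '%s\n' "${PASSWORD}" | openssl passwd -stdin -apr1)"
echo "${ENTRY}"    # e.g. demo:$apr1$<salt>$<hash>
```

Append the output to the auth file and feed it to the create secret command above.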
longhorn backup config:
- backup target: s3://<bucket>@<endpoint>/ e.g. s3://longhorn-backup@nl-ams-1.linodeobjects.com/
- backup target credential secret: aws-secret or linode-secret
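longhorn-aws-secret.yml itself isn't reproduced here, but a Longhorn S3-compatible credential secret typically carries these keys (values are placeholders; AWS_ENDPOINTS is how Longhorn learns the Linode object storage URL):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: linode-secret                      # referenced as the backup target credential secret
  namespace: longhorn-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<access-key>"        # placeholder
  AWS_SECRET_ACCESS_KEY: "<secret-key>"    # placeholder
  AWS_ENDPOINTS: "https://nl-ams-1.linodeobjects.com"
```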
install CSI driver for NFS: https://microk8s.io/docs/nfs
microk8s enable helm3
microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update
microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
--namespace kube-system \
--set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
microk8s kubectl wait pod --selector app.kubernetes.io/name=csi-driver-nfs --for condition=ready --namespace kube-system
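To actually consume the NFS driver you'll also want a StorageClass pointing at your export; a sketch, with a hypothetical server address and share path:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.100.0.10        # hypothetical NFS server
  share: /srv/nfs            # hypothetical export path
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```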
Because of CVE-2021-25742 you need to enable annotation snippets:
kubectl patch configmap nginx-load-balancer-microk8s-conf \
-n ingress \
--type merge \
-p '{"data":{"allow-snippet-annotations":"true"}}'
then restart the deployment
kubectl -n ingress rollout restart daemonset nginx-ingress-microk8s-controller
check for pods with
kubectl get pods -n ingress
- Create admin user (you might need: sudo apt-get install apache2-utils):
htpasswd -c auth rob
- Create secret:
kubectl create secret generic basic-auth-dashboard --from-file=auth -n kube-system
- Ignore the default token, which is shown by:
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token
- Retrieve the dashboard token with:
microk8s kubectl create token default --duration 87600h
- Install dashboard-ingress.yml
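dashboard-ingress.yml isn't shown here; a hedged sketch of what it likely contains (host and issuer name are placeholders; the annotations assume the nginx ingress and the basic-auth-dashboard secret created above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth-dashboard
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS   # the dashboard serves TLS itself
    cert-manager.io/cluster-issuer: letsencrypt-prod      # placeholder issuer name
spec:
  tls:
    - hosts:
        - dashboard.example.com                           # placeholder host
      secretName: dashboard-tls
  rules:
    - host: dashboard.example.com                         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
```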
create downloads namespace:
kubectl create namespace downloads
## Install keel to manage automatic updates
- Add the Helm repo and install Keel
helm repo add keel https://charts.keel.sh
helm repo update
helm upgrade --install keel keel/keel \
--namespace=keel \
--create-namespace \
--set helmProvider.enabled="false"
helmProvider.enabled=false is correct since I'm using plain Kubernetes manifests, not Helm-managed releases.
- Annotate each deployment (if not already done)
metadata:
annotations:
keel.sh/policy: force
keel.sh/match-tag: "true"
keel.sh/trigger: poll
keel.sh/pollSchedule: "@every 6h"
force is the policy I want. It detects SHA digest changes on the same tag, which is exactly what happens when linuxserver or wherever pushes a new build behind :latest.
match-tag: "true" is required when tracking :latest. Without it, keel's force policy watches the image name (across all tags), finds the newest digest, then rewrites the deployment's tag to whatever is in the image's org.opencontainers.image.version label. For linuxserver images that label is the upstream app version (e.g. 3.17.0 for nzbhydra2's core jar), which happens to also exist as an ancient Docker Hub tag — so keel will silently downgrade you to a years-old image. With match-tag: "true", keel watches the specific :latest tag and keeps the tag name intact on update.
- Confirm imagePullPolicy: Always on every container spec using :latest.
Without it, the kubelet may use its local cache even after Keel triggers the rollout.
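Putting the Keel annotations and the pull policy together, a deployment fragment might look like this (the app name and image are hypothetical examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr                        # hypothetical app
  namespace: downloads
  annotations:
    keel.sh/policy: force
    keel.sh/match-tag: "true"
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@every 6h"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarr
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
        - name: sonarr
          image: lscr.io/linuxserver/sonarr:latest
          imagePullPolicy: Always     # force a registry check instead of the kubelet's cache
```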