Commit b14ef11d authored by Your Name

ha cluster script update

parent 9bb107af
......@@ -71,3 +71,110 @@ ExecStart=/usr/bin/etcd \
[Install]
WantedBy=multi-user.target
-------------------------------------
setup steps that don't work (yet):
apparently we can run:
docker pull ghcr.io/kube-vip/kube-vip:main
then:
sudo docker run --network host --rm ghcr.io/kube-vip/kube-vip:main manifest pod --interface enp8s0 --vip 192.168.1.240 --arp --leaderElection | sudo tee /etc/kubernetes/manifests/vip.yaml
to get an image that automatically populates the manifests directory for us each time we init with kubeadm... or so they say
--------------------------------
Steps to get the load balancer to work, from https://blog.scottlowe.org/2019/08/12/converting-kubernetes-to-ha-control-plane/: apparently we need to start up a cluster and load the kube-vip pod first, THEN move the server address over to the load balancer. When adding additional control plane nodes we can use the kubeadm config file, but for the first one we may need to follow this process.
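Rough sketch of the "move the address afterwards" part (untested here; the configmap names are the standard kubeadm ones, and the VIP would be ours):
kubectl -n kube-system edit configmap kubeadm-config   # set controlPlaneEndpoint: to the load balancer VIP
kubectl -n kube-public edit configmap cluster-info     # point the server: field at the VIP as well
# then update the server: line in /etc/kubernetes/*.conf and ~/.kube/config on each control plane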
------------------------------
ETCD annoyances. OK, it looks like we might need to get fancy with the certs. There's also some really annoying stuff to do with the original etcd.service being re-made on restart.
how to make the certs and where to put them: https://thenewstack.io/tutorial-set-up-a-secure-and-highly-available-etcd-cluster/
how to make etcdctl work w certs: https://pkg.go.dev/go.etcd.io/etcd/etcdctl#section-readme
NEW new plan: make a basic local Kubernetes cluster first, then load up TWO (count 'em) etcd clusters, one local to the first control plane and one linked between all control-planes-to-be.
Also something to keep in mind: when we use certs, the etcdctl command will fail non-obviously (making it look like a networking problem) if the certs are not provided on the CLI. Easy to miss and makes for weird debugging.
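For example, a member list with the certs supplied explicitly (cert paths as used in /etc/etcd.env elsewhere in this repo, endpoint address one of ours):
export ETCDCTL_API=3
etcdctl --endpoints=https://192.168.111.200:2379 \
  --cacert=/etc/etcd/etcd-ca.crt \
  --cert=/etc/etcd/server.crt \
  --key=/etc/etcd/server.key \
  member list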
FROM ETCD:
Disk
An etcd cluster is very sensitive to disk latencies. Since etcd must persist proposals to its log, disk activity from other processes may cause long fsync latencies. The upshot is etcd may miss heartbeats, causing request timeouts and temporary leader loss. An etcd server can sometimes stably run alongside these processes when given a high disk priority.
On Linux, etcd’s disk priority can be configured with ionice:
# best effort, highest priority
$ sudo ionice -c2 -n0 -p `pgrep etcd`
--------------
now getting some sort of error on the load balancer (was using an outdated version of etcd, since updated; was using old kube-vip syntax, also updated):
time="2022-03-12T22:54:03Z" level=info msg="server started"
time="2022-03-12T22:54:03Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
time="2022-03-12T22:54:03Z" level=info msg="Namespace [kube-system], Hybrid mode [false]"
time="2022-03-12T22:54:03Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [pop-os]"
I0312 22:54:03.376835 1 leaderelection.go:248] attempting to acquire leader lease kube-system/plndr-cp-lock...
E0312 22:54:03.377316 1 leaderelection.go:330] error retrieving resource lock kube-system/plndr-cp-lock: Get "https://kubernetes:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock": dial tcp 127.0.0.1:6443: connect: connection refused
potential solution: https://forums.rancher.com/t/127-0-0-1-6443-was-refused/36353/2
---> from the Kubernetes kubeadm documentation (specifying the apiserver address worked for the load balancer):
kubeadm init
It is usually sufficient to run kubeadm init without any flags, but in some cases you might like to override the default behaviour. Here we specify all the flags that can be used to customise the Kubernetes installation.
--apiserver-advertise-address
This is the address the API Server will advertise to other members of the cluster. This is also the address used to construct the suggested kubeadm join line at the end of the init process. If not set (or set to 0.0.0.0) then IP for the default interface will be used.
This address is also added to the certificate that the API Server uses.
--apiserver-bind-port
The port that the API server will bind on. This defaults to 6443.
--apiserver-cert-extra-sans
Additional hostnames or IP addresses that should be added to the Subject Alternate Name section for the certificate that the API Server will use. If you expose the API Server through a load balancer and public DNS you could specify this with
--apiserver-cert-extra-sans=kubernetes.example.com,kube.example.com,10.100.245.1
--cert-dir
The path where to save and store the certificates. The default is “/etc/kubernetes/pki”.
--config
A kubeadm specific config file. This can be used to specify an extended set of options including passing arbitrary command line flags to the control plane components.
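Putting a few of those flags together, a sketch of an init that advertises the node's own address but also bakes the load balancer VIP into the API server cert (addresses are the ones used elsewhere in these notes, not verified):
sudo kubeadm init \
  --apiserver-advertise-address=192.168.111.200 \
  --apiserver-bind-port=6443 \
  --apiserver-cert-extra-sans=192.168.111.10,kubernetes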
sudo rm -r /var/lib/etcdCluster/
sudo systemctl stop etcd3.service
sudo systemctl daemon-reload
sudo systemctl disable etcd3.service
----------------------------------------------------------------
----------------------------------------------------------------
OK, so instead of tearing every last chunk of hair from my skull, let's try a stacked etcd topology (the default behaviour of the beast). Now we don't specify any etcd config and let the etcd members live as pods, unbothered. That seems to work OK: I can see the pods starting up and there are no API server cert issues that I can see. Now, for the load balancer there is some weird password/username error.
we generate the manifest with:
sudo docker run --network host --rm ghcr.io/kube-vip/kube-vip:main manifest pod \
--vip 192.168.0.75 \
--arp \
--controlplane \
--leaderElection | sudo tee /etc/kubernetes/manifests/vip.yaml
which both prints the manifest and puts it into the dir for us. Still using the kubeadm config, we can bring everything up with
sudo kubeadm init -v 9 --config kubeadm-config.yaml --upload-certs
I literally forgot the basic home folder setup steps last time, so don't forget the fundamentals (sketched below). Further, the kubeadm configuration file does need to be on each control-plane-to-be.
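The fundamentals, for the record (standard kubeadm post-init kubeconfig setup, same as in the clusterStartup script):
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config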
...
OK, so the load balancer manifest likely needs to be put onto the remote nodes as well before the join command fires: I reset the initial node and subsequently kubectl stopped working on the secondary control plane, and there also wasn't a kube-vip pod running on the second-added plane.
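Something like this before running join on the new control plane (hostname is one of ours, staging through /tmp since the manifests directory is root-owned):
scp /etc/kubernetes/manifests/vip.yaml rossetti:/tmp/vip.yaml
ssh rossetti 'sudo mv /tmp/vip.yaml /etc/kubernetes/manifests/vip.yaml'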
ETCD_UNSUPPORTED_ARCH=arm64
# grab new token tbd
TOKEN="token-01"
CLUSTER_STATE=new
NAME_1=pop-os
NAME_2=rossetti
NAME_3=neruda
HOST_1=192.168.1.230
HOST_2=192.168.1.24
HOST_3=192.168.1.191
CLUSTER=pop-os=http://192.168.1.230:2380,rossetti=http://192.168.1.24:2380,neruda=http://192.168.1.191:2380
THIS_NAME=neruda
THIS_IP=192.168.1.191
......@@ -15,10 +15,10 @@ TimeoutStartSec=0
ExecStart=/usr/bin/etcd \
--name ${THIS_NAME} \
--data-dir /var/lib/etcd \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://${THIS_IP}:2379 \
--initial-advertise-peer-urls http://${THIS_IP}:4680 \
--listen-peer-urls http://${THIS_IP}:4680 \
--advertise-client-urls http://${THIS_IP}:4679 \
--listen-client-urls http://${THIS_IP}:4679 \
--initial-cluster '${CLUSTER}' \
--initial-cluster-token ${TOKEN} \
--initial-cluster-state ${CLUSTER_STATE}
......
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network-online.target local-fs.target remote-fs.target time-sync.target
Wants=network-online.target local-fs.target remote-fs.target time-sync.target
[Service]
EnvironmentFile=/etc/etcd.env
User=etcd
Type=notify
ExecStart=/usr/bin/etcd \
--name ${THIS_NAME} \
--data-dir /var/lib/etcdSingle \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster '${THIS_NAME}=http://${THIS_IP}:2380' \
--initial-cluster-token ${TOKEN} \
--initial-cluster-state ${CLUSTER_STATE}
Restart=always
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
......@@ -15,7 +15,7 @@ spec:
- name: port
value: "6443"
- name: vip_interface
value: wlp0s20f3
value: "enp8s0"
- name: vip_cidr
value: "32"
- name: cp_enable
......@@ -23,17 +23,17 @@ spec:
- name: cp_namespace
value: kube-system
- name: vip_ddns
value: "false"
value: "true"
- name: vip_leaderelection
value: "true"
- name: vip_leaseduration
value: "5"
value: "10"
- name: vip_renewdeadline
value: "3"
value: "5"
- name: vip_retryperiod
value: "1"
value: "2"
- name: vip_address
value: 192.168.1.240
value: "192.168.111.240"
image: ghcr.io/kube-vip/kube-vip:v0.4.0
imagePullPolicy: IfNotPresent
name: kube-vip
......@@ -47,11 +47,11 @@ spec:
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostNetwork: true
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/admin.conf
......
# ROS-Kubernetes
Contains scripts and files to create a fielded ROS Kubernetes cluster.
[[_TOC_]]
Contains scripts and files to create a fielded ROS Kubernetes cluster.
# Table of Contents:
- [ROS-Kubernetes](#ros-kubernetes)
* [Image building](#image-building)
+ [NFS Setup](#nfs-setup)
* [Cluster Setup](#cluster-setup)
+ [Without Setup Script](#without-setup-script)
* [Highly Available Cluster Setup](#highly-available-cluster-setup)
+ [ETCD Cluster Setup](#etcd-cluster-setup)
+ [Virtual IP Load Balancer](#virtual-ip-load-balancer)
+ [Starting HA Cluster](#starting-ha-cluster)
* [ROS Commands](#ros-commands)
+ [Bash Shell in Pods](#bash-shell-in-pods)
+ [SSH Server Bash shell](#ssh-server-bash-shell)
* [Imaging ROS Test](#imaging-ros-test)
* [Handy Troubleshooting Commands](#handy-troubleshooting-commands)
* [Links With More Information](#links-with-more-information)
## Image building
......@@ -121,11 +137,37 @@ We can create an external etcd cluster using files from this repo with the follo
- Make sure etcd is installed on the machine, steps [here][etcdInstall], though we will be using a different `etcd.conf` file so stop after step 1. Confirm installation with `etcd --version` (expect a warning about an unsupported ARM arch if using a Jetson; `export ETCD_UNSUPPORTED_ARCH=arm64` fixes it). This also installs the `etcdctl` tool, which we'll need later.
- Create the file `/etc/systemd/system/etcd3.service`, making it a copy of the `etcd3.service` template in the [HA cluster][ha_folder] folder. Then we can remove any other `etcdx.service` files to reduce confusion. Do this for all nodes to be added to the cluster. No changes need to be made to the service file as we will be using an environment file to specify addresses.
- Create the file `/etc/systemd/system/etcd3.service`, making it a copy of the `etcd3.service` template in the [HA cluster][ha_folder] folder. Then we can remove any other `etcdx.service` files to reduce confusion (alternatively, edit the existing `etcd.service` file in place). Do this for all nodes to be added to the cluster. No changes need to be made to the service file as we will be using an environment file to specify addresses.
- Create the environment file specified in `etcd3.service` as `/etc/etcd.env`, with the contents of the template `etcd.env` file in the [HA cluster][ha_folder] folder. In this environment file we specify each host's IP address and hostname, plus the current host's IP address and hostname. Note that only the last two lines (the `THIS_*` lines) need to be changed per host; this way each host knows the others' addresses. On ARM architecture nodes, the line `ETCD_UNSUPPORTED_ARCH=arm64` should be added to the top of the environment file.
- Bringing up the etcd cluster can be tricky because each host needs to be able to detect the others during start-up, otherwise errors get thrown. So we first stop the etcd service (`sudo systemctl stop etcd3.service` on each host beforehand) and then reload the service definition on all hosts, one after another:
`sudo rm -r /var/lib/etcdCluster/
sudo systemctl stop etcd3.service
sudo systemctl daemon-reload
sudo systemctl enable etcd3.service
sudo systemctl start etcd3.service` <!-- rm will be /etcdCluster btw -->
Note: if a cluster has already been created in this way, we need to remove that cluster's data directory with `rm -r /var/lib/etcd/` while `etcd3.service` is stopped (may require a super-user shell). `journalctl -xe` and `systemctl status etcd3.service` can be helpful for troubleshooting. Also, disabling the existing etcd service can make for less work on reboot.
- If all is well, we should be able to confirm that all nodes are present in the cluster by first specifying the addresses on the command line and then requesting a member list:
`export ENDPOINTS=<HOST_1>:2379,<HOST_2>:2379,<HOST_3>:2379
etcdctl --endpoints=$ENDPOINTS member list`
Where `HOST_x` is the IP address of each host, the same values as in `etcd.env`. If all members are accounted for, we can move on to the load balancer steps (the health check sketched below is another quick sanity test).
<!-- in our case it'll be: export ENDPOINTS=192.168.111.200:2379,192.168.111.202:2379,192.168.111.201:2379 -->
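As an additional sanity check we can ask each endpoint for its health and status (same `$ENDPOINTS` as above; if client certs are in play, add the `--cacert`/`--cert`/`--key` flags too):
`etcdctl --endpoints=$ENDPOINTS endpoint health
etcdctl --endpoints=$ENDPOINTS endpoint status --write-out=table`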
[etcdInstall]: https://docs.portworx.com/reference/knowledge-base/etcd-quick-setup/
### Virtual IP Load Balancer
In order to use multiple nodes as control planes, we need to create a single point of access for the Kubernetes API server (there are about a million ways to do this: keepalived, Google, AWS, but we'll be using the fancy new [kube-vip][load-balancer] option). We'll be running a `kube-vip` pod on each node by placing the `kube-vip.yaml` file in the `/etc/kubernetes/manifests/` directory of each node (a `kube-vip.yaml` template can be found in the [HA cluster][ha_folder] folder, as per usual). In `kube-vip.yaml` there are some networking options; in our case we will likely need to change the interface option depending on the node's configuration. There are other options as well and the default values are usually fine, but we should take note of the load balancer IP address. Note that the pod definition needs to be in the manifests directory before starting the cluster.
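One way to (re)generate the template is the kube-vip container itself, as used elsewhere in these notes (interface and VIP are examples and should match the node and network):
`sudo docker run --network host --rm ghcr.io/kube-vip/kube-vip:main manifest pod \
  --interface enp8s0 \
  --vip 192.168.111.10 \
  --arp \
  --controlplane \
  --leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml`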
### Starting HA Cluster
To start the cluster, we can still use the cluster startup script; however, we now need to specify how to reach the etcd cluster and which nodes it uses. In `kubeadm-config.yaml` (in the root directory), change the external etcd nodes' IP addresses to the values used in the etcd cluster setup step. We can now run
`./clusterStartup` on the first node (order doesn't matter, probably).
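Additional control planes can then join with the command printed at the end of `kubeadm init`; roughly (token, hash, and certificate key are placeholders from the init output, the address is the load balancer VIP):
`sudo kubeadm join 192.168.111.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>`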
[load-balancer]: https://kube-vip.io/control-plane/
## ROS Commands
......
{"signing":{"default":{"expiry":"43800h","usages":["signing","key encipherment","server auth","client auth"]}}}
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAuGx1qhljUBUTOssAOLgEkvQjgRM8my7bIdVSbUZi2/NjxK5f
VZ/HuLK0N/t+LeqaQNSrXupQbr9pYDM04PZM/1Ed1F+43c0G/Oo7+xMs5k6fp8jk
hOkZK5oi49ZtZsL3RSNxpcOqDzdRvkMLCnz6xkgHWPnV6s/uk9YK/GluM0zg0D/A
OWo9opqCMwSZ+SP7GYPk6lvZewkkAs0SkbHBHo6IsuE+PmCI+QTfR6dtK3cPygvu
a9svc9oyytAjk8ul8yU34X/xkLk7Y2weM+6pyhlNsDbFmdIlugIYK0UTEQGAzGr7
Q7tW9ojpIvn4kBn08FWZSI3JhB8IQRoEnlVUtQIDAQABAoIBAQCCHwRmH8OSnUwU
D2b4nd2cUeU7DPeWBllWZczijObziaa6/s0E/NdN2ciON0Ov4fc0Btli/rABc8xF
s9t7Xky1V+ZUEbW9yQtFJ39qhv0HAjJjj7qsjErWGMrFNmW6O5V7kqZ87rDuS3nB
ZExF+ih1/hwxCxWDt3H9nOfjb//w8Ph2tzJ56UDhXK+iKu++JYRoQyf8/eL2gadq
pwMXpk+FHwILfjZ1Fh49wWkxtoZDagCAFdFOwdLUB4zXgM4deSYsw+zjly3po+ya
9SHY5nZJZufhsSGBV6GeBBH3zh90NgCbcLuUEriupFpH1mcEKnAEKOvIYYkXewcc
CSp+XeQBAoGBAPQ9DdDsYt3WWW1+RgZNdoUVAJwZ23N3S9FcpeQlY/XX5sSsk5+0
rtn50bE9pRl0Hq92e9M05yBM1Bl9f9CbfH2ElhLJnZskg/5vk2jnwqLdRV4EzAP6
OXglHUQ1M7E19V2utNS6EOeo7Ib3mGICNWSAzBAja57oMbNINC++LXm9AoGBAMFO
A9JUI5JoWRS2XnSd6xl2YvNr1zVb8ON/+7NNbO9amCK/nSeODbN+OjpVqLomVg0u
aZDTSDPhm3EuDr4psoamTzls5lC/ut9qtoiTKw6h51QSvdPyMTLdf8G8fVKbU8XU
+pXE76J+4OVhJXKezcaRD2HldVYNdywY38WZESpZAoGADkE1+jihuJLXG1XgXmPN
BA1qwLGdpkqTKUAACqXIBMQ6GsZ7wzl3bw9ulqqjZS3q0JDYv0X6K19wjaBOgm1g
wa6oV6ZexXxHG+WFM/061eiWMNuU0LKdAg8geyejwbcFgBc/RJ8rd2nbjDENOsMo
PJprzpFSqa6hn/YZ3aN64f0CgYEAiWztSpqGr49/xTnh7QZYHcIMlwIT/dtfZl2W
k+J3j7LYddvD3lsfYnxa6R381lpq0vQsGMocisXZvJ0B3i/Gu/OAX1MMalvkfvFe
07nM4po32413ZzbHw2G1cgaPEitbY0oG3HMl6mBJgsmN1e8QXBrE1NRMluD72F3W
uKQZkAECgYEAzpV1KXK64xT9Pah5O7UlJ6T1xaR4bL1Wf3hSS1PAAgl/kbE3MMMX
yLaetlyUB4pgHrIJyFMgJc1pEzAr9+zAv0BkKDD4VlcPfy9/IE9AduaMiBWLYI1F
Fo2eoFqZo2xNmZAxJ26a5hOfU5mkyuVsnGMYK9PxO4CDNw/UCTfL9eI=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE REQUEST-----
MIICUjCCAToCAQAwDTELMAkGA1UEAxMCQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IB
DwAwggEKAoIBAQC4bHWqGWNQFRM6ywA4uASS9COBEzybLtsh1VJtRmLb82PErl9V
n8e4srQ3+34t6ppA1Kte6lBuv2lgMzTg9kz/UR3UX7jdzQb86jv7EyzmTp+nyOSE
6RkrmiLj1m1mwvdFI3Glw6oPN1G+QwsKfPrGSAdY+dXqz+6T1gr8aW4zTODQP8A5
aj2imoIzBJn5I/sZg+TqW9l7CSQCzRKRscEejoiy4T4+YIj5BN9Hp20rdw/KC+5r
2y9z2jLK0COTy6XzJTfhf/GQuTtjbB4z7qnKGU2wNsWZ0iW6AhgrRRMRAYDMavtD
u1b2iOki+fiQGfTwVZlIjcmEHwhBGgSeVVS1AgMBAAGgADANBgkqhkiG9w0BAQsF
AAOCAQEAlQI2Njydp26VReJ78/p+6I4mh4XNZ/1vzlVJNQepBYm9pyzB49RkFkEy
kDybNm2syBZ8MbO2DlKRNG2YxY0AbDrWsU2OC0kBngQR9r+9976UkeXFP3dhMP2X
un433Zi40mayCh73ChGkxA626WSktGOEPEfQWKu6mphBB6MtSSQrngzC2FgyS+8n
doQOx3IFpKzf6Na8i9onvLNON5APohF5+/3dCCzKKy3OgW/9gF0wfysDdYqFXN60
L6g4zaoYdnGR1Fed0hQZ3azCRziNqmhb7hulYyZqarbBxKu61tyRgwU25UsN0KW3
dXuqJcJIZ443V9WtBQqz0UNS2RfVIg==
-----END CERTIFICATE REQUEST-----
-----BEGIN CERTIFICATE-----
MIIDDjCCAfagAwIBAgIURlF4rWbJFncaq0HxiGoWxpOMkNAwDQYJKoZIhvcNAQEL
BQAwDTELMAkGA1UEAxMCQ0EwHhcNMjIwMzAzMjEwMzAwWhcNMjcwMzAyMjEwMzAw
WjANMQswCQYDVQQDEwJDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALhsdaoZY1AVEzrLADi4BJL0I4ETPJsu2yHVUm1GYtvzY8SuX1Wfx7iytDf7fi3q
mkDUq17qUG6/aWAzNOD2TP9RHdRfuN3NBvzqO/sTLOZOn6fI5ITpGSuaIuPWbWbC
90UjcaXDqg83Ub5DCwp8+sZIB1j51erP7pPWCvxpbjNM4NA/wDlqPaKagjMEmfkj
+xmD5Opb2XsJJALNEpGxwR6OiLLhPj5giPkE30enbSt3D8oL7mvbL3PaMsrQI5PL
pfMlN+F/8ZC5O2NsHjPuqcoZTbA2xZnSJboCGCtFExEBgMxq+0O7VvaI6SL5+JAZ
9PBVmUiNyYQfCEEaBJ5VVLUCAwEAAaNmMGQwDgYDVR0PAQH/BAQDAgEGMBIGA1Ud
EwEB/wQIMAYBAf8CAQIwHQYDVR0OBBYEFNXEWPWoSBHxHvvH5vvKtbBglbXgMB8G
A1UdIwQYMBaAFNXEWPWoSBHxHvvH5vvKtbBglbXgMA0GCSqGSIb3DQEBCwUAA4IB
AQBOFhXrDCjlOlKa+fMK+Y3s5/t448rv53nADlRMAqOkgdkPsTNoqpNSG+XaqBYC
kZMRCEmXB/B1EZJ+6z1c2kwzxdL89OGz6jCqqZE+SB34Ifd3c4SORkuEs+MK6ZVS
FXrdCvAFXzdlDzFXcHfzrh9kZvQ59VR88nqFYqik+kSfKQSpwYVpRrei3H6/JnY+
Jc2T5caGvIuVUnu0LxcvQplsKprJ2PlIUsbVkufdaMUxtXmUrvd4tKMx2oGwj4bb
QY2cXP5tl5X9TkTPFS5kL4szW43UbmW2vH8amFngga30ZgSjxy13dn/lGpwPFd5U
rrs3Ri8kAdOWN3CnfVgmVjAb
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAoIkXmSPlOgr5j8STXrXV3sNOs461w8VYFwN27RrcWQgbW7RG
OpQjMzKmBGOKp6D2KAuBmbheYKJgzAzPwbksqVtfeYwdmZmZ6INpFi2Z0HpaZXco
KP2HpyGSlCfV3nK2aWh9Qf49qV1IgRv2j5H+TQCgePewKxavF6sSDE3yo0yFlHB/
GAa8cToTxA/Eo4/2w/QEwgi8px0+uMMP9x22bQnSlIl48AX68i5KfNqBz2jFZCwN
XGokq/XRwwbiJBwXfY5kKylAJaVOVLdkw7qbaVluVX12xnJvYs2XBPHMzDTGMbc6
bMQ44jsWDNNxef19Kyq99wS4IGy4eMEW9DPQbQIDAQABAoIBAHPBZpquH2Oy9jCM
dhcc1pJCEkW26am4asRtYxuk+q3YAqIlY15p4tfP9ZXEkP4+OkC1y0Wkg6j6bQG1
Wzk85M9Za4ahWdafwzK9TtEHIJoLH5N6UCL+bQo+Uwsxji1QWee3yFoFkUDGWrl+
CFON2eh8Qzn2n7jyKl7Oo7zrl8HXDtYmSg3kA9H8jBRmf5AV4rjWVSWcegop483i
0vOFkLgR3nOljOTYxlGwTMTspkniG3XFEw9mc0Lbc04LZnoDflV2YDgCsTaDwbgR
xnw24W89abyb4vn3AN1vRyaZ+xM9cMwrWcjQ9ZyxP755pRPowrBkWZTyZ8/3ZiXr
h/pvWqECgYEA1cf1/oiCxw3aEYr5lsnzuxG71197rAtaFJzOlDiqWHKqnSxU1FTm
INJ9Cwm06wGwgHZ5gjrkUPh1hfbsGJQ1Tgwipo9+T5nu4kG1BmOibEFNd79vtRW/
mFa4mkPADOHwWJoAva9bUxyDkbXhDBTQ3rRHweu/zBhnZa6epyFAiNMCgYEAwD03
oUIqtypAxayRzVfTWC0qMJ0t2i7JzmgFd63ZZ/Vq1NG2hnkHm9VJrWoU19G0bTDI
S3xlLcBWGCxSq16HRGOSBWKeXWfCFDThhqXMAg0jy0e19Imu7JsN/CjPcvYNc/98
YayUX96OUkkHF7PjeKrEl/f2R8QSCCHMjSzWeb8CgYAevbJR969WkFdbTnC1jjTO
Ia6xObm+86Lwc9wA1GUqctK15zoLjmnJLntsquipIoUO8/plD7LlMdU0fl1U63r+
zh/tc5TmPWxsfKZbVNh2WK2bGpwlngr/DPletX9YWuUE2KBipmSrft4shcrmwdeH
LsVizVO3NYsoxANsZQuyoQKBgE5673IK9CtQuZ65o9Bj8WkHDzlHgceX4FU+jDTe
qWnSfCmj79MYJ+4Ldgewzg+JkhIdnzeJ8jhqU/uMZLeHYMufpqZCK4rQaCAdspBo
sU+JE7rSbMsHRn1bk6sE8iPppXZcr+ekL/KvhgS7wYSAsPW7KYUs+sMznXTqb3qW
+nw7AoGBALhl0q0IAz0dYsPBVjiZm/G7h642OZ1iEmObcC96hACnXd3YGeIkZupg
Z23mRVdS9ULmtHqjosfLXxHXZWUulhXHRcDH2Gqtc2R6OybGUv/4mO2KiLEZEhSv
CvPJcsv/hhmNMf+JzX9H8eaSfgLYK+9KW3RhmMfXJZB2+ecSgYJH
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE REQUEST-----
MIICdDCCAVwCAQAwETEPMA0GA1UEAxMGbmVydWRhMIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAoIkXmSPlOgr5j8STXrXV3sNOs461w8VYFwN27RrcWQgb
W7RGOpQjMzKmBGOKp6D2KAuBmbheYKJgzAzPwbksqVtfeYwdmZmZ6INpFi2Z0Hpa
ZXcoKP2HpyGSlCfV3nK2aWh9Qf49qV1IgRv2j5H+TQCgePewKxavF6sSDE3yo0yF
lHB/GAa8cToTxA/Eo4/2w/QEwgi8px0+uMMP9x22bQnSlIl48AX68i5KfNqBz2jF
ZCwNXGokq/XRwwbiJBwXfY5kKylAJaVOVLdkw7qbaVluVX12xnJvYs2XBPHMzDTG
Mbc6bMQ44jsWDNNxef19Kyq99wS4IGy4eMEW9DPQbQIDAQABoB4wHAYJKoZIhvcN
AQkOMQ8wDTALBgNVHREEBDACggAwDQYJKoZIhvcNAQELBQADggEBABbCYf9b8sy0
rUr4ax3xk/BI2ajMaTzfKTxASasFCysKX24GtWw8ki25lM66F5JXJnvI5olU0Rn3
Dvbr9ibezZNxphaQoqxZFl36jTgXBBflOSagQiXxKh2tFoWibCrG+iEl0evwPwFX
Y6oq6h1q+95bpIt6Sl/tf3lgClu+P7tKQqjC7etSDHHr+uLL96w8SrS7DH2sMsvb
DcqHsYS7hy+ZuacnHhqkGHj2tmBVE+QQ3Z9QkQJzeuKxuYJrWuej6Q+Z0Z0/j86R
veUf9K7bgcPtpf7M8+2YqgF2S9rWvKWW3iCr1ZcXpAtlgfwslt9BKP3XLba+BvHo
QcgwVUfGQaQ=
-----END CERTIFICATE REQUEST-----
-----BEGIN CERTIFICATE-----
MIIDRjCCAi6gAwIBAgIUMqdb3p6sRokpcPYq22dQTbZ4v8EwDQYJKoZIhvcNAQEL
BQAwDTELMAkGA1UEAxMCQ0EwHhcNMjIwMzAzMjEwODAwWhcNMjcwMzAyMjEwODAw
WjARMQ8wDQYDVQQDEwZuZXJ1ZGEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQCgiReZI+U6CvmPxJNetdXew06zjrXDxVgXA3btGtxZCBtbtEY6lCMzMqYE
Y4qnoPYoC4GZuF5gomDMDM/BuSypW195jB2ZmZnog2kWLZnQelpldygo/YenIZKU
J9XecrZpaH1B/j2pXUiBG/aPkf5NAKB497ArFq8XqxIMTfKjTIWUcH8YBrxxOhPE
D8Sjj/bD9ATCCLynHT64ww/3HbZtCdKUiXjwBfryLkp82oHPaMVkLA1caiSr9dHD
BuIkHBd9jmQrKUAlpU5Ut2TDuptpWW5VfXbGcm9izZcE8czMNMYxtzpsxDjiOxYM
03F5/X0rKr33BLggbLh4wRb0M9BtAgMBAAGjgZkwgZYwDgYDVR0PAQH/BAQDAgWg
MB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0G
A1UdDgQWBBQitB6w5AexIId5tP1zwRO7dmK6uzAfBgNVHSMEGDAWgBTVxFj1qEgR
8R77x+b7yrWwYJW14DAXBgNVHREEEDAOggZuZXJ1ZGGHBMCob8kwDQYJKoZIhvcN
AQELBQADggEBAAz1gvIfXCK0qhfk25a9Q7CkBFoWgz+fH7U36I96Kg969MUKw4uK
eoJlzDfRZyruwDUT3Jvr6gv0qmMoDDea/HHIqAuH9TaONY3cq4ZT7mZ/X0KV5ucr
+U1CAGo/Fp35agAyQuc8MrNcyhSUxffkuO0RgTc3DwRWKNqiByYbe3hBFNwa7d7v
a7+PGJQWZhRvUYE9+j4Lkl4JqwxmsOo4QLRk6b0Ww4B2d4CyxIkzLGV4Fasx+2ru
axQlbsjT2KMXMTD2AOTpgFEyOsioNa0w70Ic3svf7srQKWWYHUpQmTyffDKcQFBS
EvXxWt2wr9tAqX7tlyqw+UKCQ9UjnayPgvs=
-----END CERTIFICATE-----
# grab new token tbd
TOKEN="token-01"
CLUSTER_STATE=new
NAME_1=pop-os
NAME_2=rossetti
NAME_3=neruda
HOST_1=192.168.111.200
HOST_2=192.168.111.202
HOST_3=192.168.111.201
CLUSTER=pop-os=http://192.168.111.200:2380,rossetti=http://192.168.111.202:2380,neruda=http://192.168.111.201:2380
THIS_NAME=pop-os
THIS_IP=192.168.111.200
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAwYBPc9XFvHVm3ivhltXqpdYk/u38thNk7rVa6QrnnjmU65OR
hgdQBDHZTwrflglIGGlvtlUbUKxwpcDLQEoeSNvaoD0UdyP3cD+QbtihNko3N2id
VHt3FRx3KJd+u4Cmw0VN5Y1jsC1BxVzUVTY2e4ZtzjeexnYVh6X06pekmz+iwHck
QgauEybsSuDZ1MLkq05IN7gl/2vXkyQRAd+jW5fgi2IhKPlHvXSEbJHB3D+cLuEF
/AGYjLtuQ3yEB4rQNLrxC7Mo5rzbEhFYXdpG3tar25YUHN6Q+UoTKhJ060bv016n
EdgWHCELDk3DDCcTEHgs8pAgxYiJqA00bWJNzQIDAQABAoIBAQC2tOnQjSQVK6GC
3Fo4qynRhp8OGzbH0Q43mwQJEaPbsbEcswzwSc1S+KKg0LqHF6J8cmnp9vhAt2Hy
EFWaEaIA157aOHIvgMVttocxMtkdwvvaFKyhjabGR0d4C3u9nd9YeALyYguj2UfG
DD3ta7AL+MWLttbzu6HeoGPecmsZrldpHyyTpZrsinywjNZIWG6Dr5h0C6aO0qm0
ZrqCBfT0UxP2+n2VWWdqfAD4kAZoPQecrY4pojCsDIgtBtWPMo8EavEVW0IK5c36
0D/eABzSHPpzLozfJGnyl+QQsJoVThk0Z/hbhpF5hrIIV+ku16R31MdH0KQjlzxp
c8oPR4RdAoGBAP6Uw2g+pMBO1ZI1uxyYUVb/G3tZ80AjFzM7S49OhZWl0n4x67G3
IHRbLct8autffi9DnR/hZQAggiKHwDLkNXJyUpJgh5xZ86U7LS6MKblvExAz+qvq
QHkVU8iqaN+pF9QpyLT46j+Wl7StjmuFkz2KwU4UjiXCUfFQFTM9JcQLAoGBAMKU
Ze5/V48lOpKgaHxeHIbbyzn3z5oU75gfTL9nkZvq1OrVA/zgRWbK/3LiZ5G1hXPP
+iEGrJZxvNA64IzeFH5VgqVRkOYpBxRJOD2I7b0ybh+ZQYKouQl8Skssu+tGXJI/
JiL/nHBb06oBtHFkHPLX3YWX0OhutPE+i9ZVT0SHAoGBAMV+cIz5M+QABEy1pB9N
xqQfqZkqsbtKvZ+/2yEkQBtUlZPSdE7ciq/ZYBBgx60q2oAeCtGFkE3l6i6RczfH
s/LB326wteNEIPBIlQdsMp4pHffO6yLcygFk2ydrk4oW6mZrt8k05LxJvgyrKzYc
XJo8fzNsXM3MDreOcbPbNuX7AoGAFYBkiyhTOqQ4hr9nDGcx680Z9fvmWvZ5S2wa
BQSi2IHoqVKEsErwIF3KQJ24KCfQ6W1QDJo6NB25aaE38xkPVq6IU1BiHKzn9Vvp
9RLOgpuyA5fh31hZqiyr4Qa+dU8/J4IG6tMoLFpZV238zJLtiABGoF6YXTbuk8H/
nd2rdr0CgYAzxFvtb55xazVZEwsWe2QEV7KsU67Gd2OMtOb6MW2cdmOX2RVZMX+g
79+ktsLAMogHTaBXe+u3kZH/11m8bAF/vf745Tv8VByl+40AZHIBzJm64Y3+Xh0Q
uPmDJ1T5IyFUQKgHo3nxs57It+pNtdPy+QPz69y3MoYLgOoYuFNnFg==
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE REQUEST-----
MIICdDCCAVwCAQAwETEPMA0GA1UEAxMGcG9wLW9zMIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAwYBPc9XFvHVm3ivhltXqpdYk/u38thNk7rVa6QrnnjmU
65ORhgdQBDHZTwrflglIGGlvtlUbUKxwpcDLQEoeSNvaoD0UdyP3cD+QbtihNko3
N2idVHt3FRx3KJd+u4Cmw0VN5Y1jsC1BxVzUVTY2e4ZtzjeexnYVh6X06pekmz+i
wHckQgauEybsSuDZ1MLkq05IN7gl/2vXkyQRAd+jW5fgi2IhKPlHvXSEbJHB3D+c
LuEF/AGYjLtuQ3yEB4rQNLrxC7Mo5rzbEhFYXdpG3tar25YUHN6Q+UoTKhJ060bv
016nEdgWHCELDk3DDCcTEHgs8pAgxYiJqA00bWJNzQIDAQABoB4wHAYJKoZIhvcN
AQkOMQ8wDTALBgNVHREEBDACggAwDQYJKoZIhvcNAQELBQADggEBAKwdN4Pdj8KA
wFIx11kSEv9zmUDJd8f6rrT/amKvbvMsS/2RzUUvXWhLrsUBlukRQvLw7b7VuGli
tQV/Lk0hlIXJkZr0om9Q5Lo72G2z3vLwZx9GqVTsSoMLmgUPS5201dxA9lVxbHyN
nnV7WRFAQtNHdRxvTAsEI5LB4SdTjFfMBd55ePImFLfg5VsojEnezcOP3e+9ekYp
4vkTlDFcipozisPqk3SLht/h8iNjcB+7vm63LHKEbB2w6ojTL4vFp/zbwwBcDJVG
WCbh/BWMt3EVtpQxBpKTZPySGGUgl2OIDs+w05EH+w3imryZPyCelOb2f2Mdh1ds
Pnjivg+AFmg=
-----END CERTIFICATE REQUEST-----
-----BEGIN CERTIFICATE-----
MIIDRjCCAi6gAwIBAgIUFg7kx0hVQudYalU/LxqWTEiyIlowDQYJKoZIhvcNAQEL
BQAwDTELMAkGA1UEAxMCQ0EwHhcNMjIwMzAzMjEwNzAwWhcNMjcwMzAyMjEwNzAw
WjARMQ8wDQYDVQQDEwZwb3Atb3MwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQDBgE9z1cW8dWbeK+GW1eql1iT+7fy2E2TutVrpCueeOZTrk5GGB1AEMdlP
Ct+WCUgYaW+2VRtQrHClwMtASh5I29qgPRR3I/dwP5Bu2KE2Sjc3aJ1Ue3cVHHco
l367gKbDRU3ljWOwLUHFXNRVNjZ7hm3ON57GdhWHpfTql6SbP6LAdyRCBq4TJuxK
4NnUwuSrTkg3uCX/a9eTJBEB36Nbl+CLYiEo+Ue9dIRskcHcP5wu4QX8AZiMu25D
fIQHitA0uvELsyjmvNsSEVhd2kbe1qvblhQc3pD5ShMqEnTrRu/TXqcR2BYcIQsO
TcMMJxMQeCzykCDFiImoDTRtYk3NAgMBAAGjgZkwgZYwDgYDVR0PAQH/BAQDAgWg
MB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0G
A1UdDgQWBBTCyLZKPXbxkxkbRBrfv6Jpa8OZMDAfBgNVHSMEGDAWgBTVxFj1qEgR
8R77x+b7yrWwYJW14DAXBgNVHREEEDAOggZwb3Atb3OHBMCob8gwDQYJKoZIhvcN
AQELBQADggEBAGAoLtCr+vrQoNhP7Dtt16YANlA5ZYSsKhFH/8PDOTLtK9TDfUPv
814tORLzlJ0h1tcXgC3PJ9XAVsM/6tokanZaet2QSZ4izVHhA0ILjXZ8PtktC/nH
oSYBi/kLm1s2JJsSI8o/BuRadcYcKx05P8QGFg3l8/TFvm+JPhRogQzIZUiobvy8
t4JM5btK/TxZPnPCa04FE20/D52W7LE5nv3KZNz7rfu5eaH0gwtsl2bFJiPdGrqz
xOf3pzvXZ5KYSaL/KuFVZSko91ur+ROwntnzyaKr23Fk9DARQLzpLE2jedYnQjC0
ZkxV9nk0Imt8ckYT6gK6VD+4JO/sfftpzv0=
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA2Twh6ws2pCo0FxbA0r7SDtx7/kvL/9nb6OYp/Zl2QJAhskgv
XCnJposSTjp9UXgwDS9Fq17ywxuYyHiBnCjeEKwfHK+cFoyaWn6GBVMh99VY/flM
GymrdL22+Vj03Ry28TIrgQ0AU8NFlKr+QA/5P/CnF1wPNQcMnTktIfBy3E6sviEv
d7+0/YlMJHng3rUlVDrF8sX+1BHMD9WMnABK6cXXSGrZTOhQ8z0w4O7shf237mZG
wBuAwC0WB1plM/kUjT0JcdZ7882Q6775V0ACTDSHWzCgsoeg3POLkvUnr4FMtby0
hSbYherXu9zY//crIuMxp+CGvLyQQ69rJ5JkTwIDAQABAoIBAAmUV0KQKgavPcDO
5g3lEEpLesRJ/2L27nWkwLFINSi/sly0RjJgPV40v8fnWGNhU20haocWFsp3yxL8
DWsfejtt+6k+LTnpVV0sOyi888CEDfqVJcAE5GSvgQQZ4iJmA8M7HSQBuMP41nap
27Bjg/BH4nZrPthtySadwNnASrBgSv1Kaj/K0w5ARkAPA/csMJ9ESBcei7rOtxFS
VEwbSXwQjUQEUUTd6dA1qvQ+xVXD8tCKcpYC+O0RmDNQaeEbnAkmjrB3Y6a7EL5f
VRfuDzXu8dqpt81gvg1gf4QEwC5Dt2QwPVQXEXHS3ZQVhIHGcEqcFt4h6iGCf17i
yzuiRwECgYEA6mptby1SYUvmFXp9R3em+7UlzVqf3VIr5pBtJD7FLTX5g83VC5ju
gs1135eCzgLmYBJQSMVzbGrsZa8cgVlq7LY1/zhjzp2J8ykCPelvUxv9IKGbYjhI
PTLLv0MuIAf70OuU+qKkvaahzj0TSvX2D/9SXCr9rtd8kQ9J2okb+48CgYEA7Ty5
SLAF90wBa23RA9xSviG/dV6xINsmClghk42iFW7g76+X/iQD1+FwlPh7vyas/obC
RYMcZGgpuPH261ZClUJV5DiOIHtnoHfKa8Volr2UE35Fesr8S49a60TyasaIs/Vq
Yc5Gy/SxywzGUEWf1nvvqYgnvHx9v6hsdrchq0ECgYAaf00/c/AL73hilSX0HiJR
8XgEbmoDqnYr6cdsgWvoYGGD9JBQb0kGoBLi40112/4OfgN1NlyFtNBj7hdax7C+
cRpJbyZZBJXDVq9aMDjVPCSwu5PE1nfT8xn01LMyC7T7OKXubtQQW/WOSnkT0Bmw
VTwKDxH94X3DJ+dBPJ5dIQKBgC7ySlQ5CSUz9D/3HlqeOf2IHiQy1eiDlZaMdDCH
4aBOLdMgs3pGVEBfS3EfbxWXqLpBnqY02OSBvGft8ggGLOzukKK2EmIZKZuWuQb0
rMrPv0LQRR2Ul7K4LqzKGxLIMPszwJaURGxOAvUElSYDcSr3oaix2fMxy4ym3rfr
a41BAoGAXyNn9ydw3nqFB22oOVVxzQx6nkrNUwVkqrbOEUlhQU1FDYkxdQ3T7hiX
HToinEUNLzgMDko2QmfMgetPtzJ1BN58y6T1eMHnYVEDU3Fq3SdUXJBpeJ8nyxHl
wPFvjVlIysYgvNDvPLgWIjNn91xQAEnivhbeq9oQ9hxMPQMTueA=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE REQUEST-----
MIICdjCCAV4CAQAwEzERMA8GA1UEAxMIcm9zc2V0dGkwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQDZPCHrCzakKjQXFsDSvtIO3Hv+S8v/2dvo5in9mXZA
kCGySC9cKcmmixJOOn1ReDANL0WrXvLDG5jIeIGcKN4QrB8cr5wWjJpafoYFUyH3
1Vj9+UwbKat0vbb5WPTdHLbxMiuBDQBTw0WUqv5AD/k/8KcXXA81BwydOS0h8HLc
Tqy+IS93v7T9iUwkeeDetSVUOsXyxf7UEcwP1YycAErpxddIatlM6FDzPTDg7uyF
/bfuZkbAG4DALRYHWmUz+RSNPQlx1nvzzZDrvvlXQAJMNIdbMKCyh6Dc84uS9Sev
gUy1vLSFJtiF6te73Nj/9ysi4zGn4Ia8vJBDr2snkmRPAgMBAAGgHjAcBgkqhkiG
9w0BCQ4xDzANMAsGA1UdEQQEMAKCADANBgkqhkiG9w0BAQsFAAOCAQEAjdHOgVES
l7YIJdeFlpa0I7PphLLCX/Eo5qk0d4FFhK0Ia39VaOFfSVuNaP9lyfR2c6qs+zsk
b1216objZovtH36PpZ2fvZ+GbKNg6l8Ds6lvAFo53NIRJeQ9xnkFB9a57ynYiJFt
rKdoZu1Dg01QvzYPYeAlCDavISqkCAiEDD+xL14mxCM5u2fubdwvGPDKjdd3xwOr
W4yiyhXoOVqCSrI+/jCEfXA9/9X6rKaYnGorbzC2kWeJrspuq46Yfj0gvKidhs0P
Cb5lPber+i147KfJ1bUGG60K5aAYCbgRctEwwqyh0e2Ki89EScO3im3zO83JeXWZ
nwsEen6eRrUfpA==
-----END CERTIFICATE REQUEST-----
-----BEGIN CERTIFICATE-----
MIIDSjCCAjKgAwIBAgIUULY1jIxW9JoaJrXOdzqcK2d1QcswDQYJKoZIhvcNAQEL
BQAwDTELMAkGA1UEAxMCQ0EwHhcNMjIwMzAzMjEwODAwWhcNMjcwMzAyMjEwODAw
WjATMREwDwYDVQQDEwhyb3NzZXR0aTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC
AQoCggEBANk8IesLNqQqNBcWwNK+0g7ce/5Ly//Z2+jmKf2ZdkCQIbJIL1wpyaaL
Ek46fVF4MA0vRate8sMbmMh4gZwo3hCsHxyvnBaMmlp+hgVTIffVWP35TBspq3S9
tvlY9N0ctvEyK4ENAFPDRZSq/kAP+T/wpxdcDzUHDJ05LSHwctxOrL4hL3e/tP2J
TCR54N61JVQ6xfLF/tQRzA/VjJwASunF10hq2UzoUPM9MODu7IX9t+5mRsAbgMAt
FgdaZTP5FI09CXHWe/PNkOu++VdAAkw0h1swoLKHoNzzi5L1J6+BTLW8tIUm2IXq
17vc2P/3KyLjMafghry8kEOvayeSZE8CAwEAAaOBmzCBmDAOBgNVHQ8BAf8EBAMC
BaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAw
HQYDVR0OBBYEFPpFXmYmXBtaq7xslb+ToYMAUOheMB8GA1UdIwQYMBaAFNXEWPWo
SBHxHvvH5vvKtbBglbXgMBkGA1UdEQQSMBCCCHJvc3NldHRphwTAqG/KMA0GCSqG
SIb3DQEBCwUAA4IBAQBjUSr6vx57M3AFPSmTjuf++RnbX+U26nMFTKeZFoiakDYJ
EnjvoFG6xhA0IBhxpkrpyZfApiAgyMaIoea/pc1fZAYgSuyXzfK/HgLJFntI7HxV
XIrvZFoPjB/x1niGe0DSpww8mdYngd95v5iaQiuA4joRDMFIagbsbxKiCBoZE/Rv
E2+ucd0So2ZnG5yN71W4NcOdTO1V0y935w61y7qfPwKVhALPUuvEqEC3ad4jaZWY
/4rsx72ZUrCOuPtncg2Q4gPjwiqTsjOCY8mopduBWzcX4OTJujI42zmzcjjUBFoW
zOWJc40pTOlUWm8fX94HqkOhhzNrLQddg8zu7Ivb
-----END CERTIFICATE-----
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
ETCD_NAME=pop-os
ETCD_LISTEN_PEER_URLS="https://192.168.111.200:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.111.200:2379"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER="pop-os=https://192.168.111.200:2380,neruda=https://192.168.111.201:2380,rossetti=https://192.168.111.202:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.111.200:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.111.200:2379"
ETCD_TRUSTED_CA_FILE="/etc/etcd/etcd-ca.crt"
ETCD_CERT_FILE="/etc/etcd/server.crt"
ETCD_KEY_FILE="/etc/etcd/server.key"
ETCD_PEER_CLIENT_CERT_AUTH=true
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/etcd-ca.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/server.key"
ETCD_PEER_CERT_FILE="/etc/etcd/server.crt"
ETCD_DATA_DIR="/var/lib/etcd"
......@@ -4,7 +4,18 @@
# script that starts up kubernetes cluster, and does some basic qol
sudo swapoff -a # turn off swap memory
sudo kubeadm init --config kubeadm-config.yaml --upload-certs
# make cert key for join command later (gets used in init automatically)
VIP="192.168.111.10"
# put load balancer pod spec in the manifests directory
sudo docker run --network host --rm ghcr.io/kube-vip/kube-vip:main manifest pod \
--vip $VIP \
--arp \
--controlplane \
--leaderElection | sudo tee /etc/kubernetes/manifests/vip.yaml
sudo kubeadm init -v 6 --config kubeadm-config.yaml --upload-certs
if [[ $? -ne 0 ]] ; then
# some error
......@@ -14,23 +25,11 @@ fi
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# export KUBECONFIG=/etc/kubernetes/admin.conf
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#
kubectl apply -f flannel.yml
kubectl taint nodes --all node-role.kubernetes.io/master-
# kubectl label nodes $HOSTNAME name=base
# print join command again to clipboard
# kubeadm token create --print-join-command > ./tmpJoinCommand.txt
# now add config command to file
kubeadm token create --print-join-command | xclip -selection clipboard
# save that file to clipboard
# cat ./tmpJoinCommand.txt | xclip -selection clipboard
# rm ./tmpJoinCommand.txt
echo "set up done, join command copied"
exit 0
# dang pod cidr decided not to be assigned correctly, so follow links to assign it manually:
# https://stackoverflow.com/questions/52633215/kubernetes-worker-nodes-not-automatically-being-assigned-podcidr-on-kubeadm-join
# print join command again to clipboard and add bling (COPY NOT WORKING)
# echo "$(kubeadm token create --print-join-command)--control-plane --certificate-key $KEEY" | xclip -sel clip
exit 0
......@@ -2,17 +2,25 @@
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.4
controlPlaneEndpoint: "192.168.1.240:6443"
controlPlaneEndpoint: "192.168.111.10" # load balancer ip, in cluster_setup script
networking:
podSubnet: "10.244.0.0/16"
etcd:
# can be external or local
external:
endpoints:
- http://192.168.1.230:2379
- http://192.168.1.24:2379
- http://192.168.1.191:2379
# certs?!
apiServerCertSANs:
- 192.168.111.10
api:
advertiseAddress: 192.168.111.10
bindPort: 6443
# etcd:
# # can be external or local
# external:
# endpoints:
# - https://192.168.111.200:4679
# - https://192.168.111.202:4679
# - https://192.168.111.201:4679
# caFile: /etc/etcd/etcd-ca.crt
# certFile: /etc/etcd/server.crt
# keyFile: /etc/etcd/server.key
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
......
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.4
networking:
podSubnet: "10.244.0.0/16"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.4
etcd:
external:
endpoints:
- https://192.168.111.200:4679
- https://192.168.111.202:4679
- https://192.168.111.201:4679
caFile: /etc/etcd/etcd-ca.crt
certFile: /etc/etcd/server.crt
keyFile: /etc/etcd/server.key