Migrating a Kubernetes master node from one server to another
I have a bare metal server that hosts a Kubernetes master node. I need to move the master node to a new bare metal server. How can we move or migrate it?
I have done my research, but most of it relates to GCP clusters, where 4 directories are moved from the old node to the new node and the IP is changed as well; that question was also asked 5 years ago and is outdated by now.
/var/etcd
/srv/kubernetes
/srv/sshproxy
/srv/salt-overlay
What's the proper way to move it, assuming we are using the most recent k8s version, 1.17?
Answer
Following the GitHub issue mentioned in the comments and IP address changes in Kubernetes Master Node:
1. Verify your etcd data directory by looking into the etcd pod in the kube-system namespace:
(default values for k8s v1.17.0 created with kubeadm)
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
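A quick way to print that section directly is to grep the pod manifest from the API (a sketch; the pod name etcd-master1 is an assumption, adjust it to etcd-<your node name>):
kubectl get pod etcd-master1 -n kube-system -o yaml | grep -A 4 volumeMounts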
2. Preparation:
- copy /etc/kubernetes/pki from Master1 to the new Master2:
# create backup directory on Master2
mkdir ~/backup
# copy all key and crt files from Master1 into Master2
sudo scp -r /etc/kubernetes/pki user@master2_ip:~/backup
- On Master2, remove the certificates and keys that contain the old IP address (the apiserver and etcd peer certs):
./etcd/peer.crt
./apiserver.crt
rm ~/backup/pki/{apiserver.*,etcd/peer.*}
- copy the pki directory to /etc/kubernetes:
cp -r ~/backup/pki /etc/kubernetes/
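As an extra sanity check (not part of the original steps), list what is now under /etc/kubernetes/pki on Master2; the CA files should be present, while apiserver.* and etcd/peer.* should be gone so kubeadm can regenerate them with the new IP:
find /etc/kubernetes/pki -type f | sort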
3. On Master1 create etcd snapshot:
Verify your API version:
kubectl exec -it etcd-master1 -n kube-system -- etcdctl version
etcdctl version: 3.4.3
API version: 3.4
- using current etcd pod:
kubectl exec -it etcd-master1 -n kube-system -- etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
- or using the etcdctl binaries:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
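Optionally, verify the snapshot before copying it; etcdctl snapshot status prints its hash, revision and key count (a sketch using the file path from the command above):
ETCDCTL_API=3 etcdctl snapshot status /var/lib/etcd/snapshot1.db --write-out=table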
4. Copy the created snapshot from Master1 to the Master2 backup directory:
scp ./snapshot1.db user@master2_ip:~/backup
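To be sure the file was not corrupted in transit, you can compare checksums on both nodes (a simple sketch; run the first command on Master1 and the second on Master2, the hashes should match):
sha256sum ./snapshot1.db
sha256sum ~/backup/snapshot1.db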
5. Prepare the kubeadm config on Master2 so that it reflects the Master1 configuration:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: x.x.x.x
  bindPort: 6443
nodeRegistration:
  name: master2
  taints: []   # Removing all taints from Master2 node.
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
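If the cluster was created with kubeadm, most of the ClusterConfiguration values above can be cross-checked against the kubeadm-config ConfigMap on Master1 (an optional sketch, not part of the original steps):
kubectl get configmap kubeadm-config -n kube-system -o yaml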
6. Restore snapshot:
- using the etcd:3.4.3-0 docker image:
docker run --rm \
-v $(pwd):/backup \
-v /var/lib/etcd:/var/lib/etcd \
--env ETCDCTL_API=3 \
k8s.gcr.io/etcd:3.4.3-0 \
/bin/sh -c "etcdctl snapshot restore '/backup/snapshot1.db' ; mv /default.etcd/member/ /var/lib/etcd/"
- or using the etcdctl binaries:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot restore './snapshot1.db' ; mv ./default.etcd/member/ /var/lib/etcd/
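Either way, the restored data should end up under the etcd data directory from step 1; a quick check on Master2 (a sketch):
ls -l /var/lib/etcd/member/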
7. Initialize Master2:
sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd --config kubeadm-config.yaml
# kubeadm-config.yaml prepared in step 5.
- notice:
[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 master2_IP]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
.
.
.
Your Kubernetes control-plane has initialized successfully!
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Afterwards, verify the k8s objects (short example):
kubectl get nodes
kubectl get pods -o wide
kubectl get pods -n kube-system -o wide
systemctl status kubelet
- If all deployed k8s objects (pods, deployments, etc.) were moved to your new Master2 node:
kubectl drain Master1
kubectl delete node Master1
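Note that kubectl drain usually needs extra flags on a control-plane node running DaemonSets or pods with emptyDir volumes; a hedged example for kubectl 1.17 (Master1 is a placeholder for your actual node name):
kubectl drain Master1 --ignore-daemonsets --delete-local-data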
Note:
In addition, please consider Creating Highly Available clusters. With that setup you would have more than one master node, and you could add or remove additional control plane nodes in a safer way.
- Docs
- Tutorial
- Documentation: Operating etcd clusters for Kubernetes