```
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
        server master01 183.131.145.85:6443 check
        server master02 183.131.145.84:6443 check
        server master03 61.153.100.147:6443 check
    # [...]
```
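The `balance roundrobin` directive hands each new connection to the next server in the list, cycling through the three masters in turn. A minimal sketch of that selection logic (illustrative only, not HAProxy's actual implementation):

```python
from itertools import cycle

# Backend servers from the haproxy config above
servers = [
    "183.131.145.85:6443",
    "183.131.145.84:6443",
    "61.153.100.147:6443",
]

rr = cycle(servers)

# Each new connection is handed to the next server in turn,
# wrapping back to the first after the last.
picks = [next(rr) for _ in range(6)]
print(picks)
```

In the real config, `check` plus `option httpchk GET /healthz` means an unhealthy master is skipped in this rotation until its health check passes again.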
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
Run 'kubectl get nodes' to see this node join the cluster.
For the master node that just joined the cluster, don't forget to create the kube-vip static pod; only then does the cluster actually gain HA. Run the kube-vip deployment steps above.
Deploy the kube-vip static pod on the other nodes
```
sudo docker run --network host --rm plndr/kube-vip:v0.3.7 manifest pod \
    --interface bond0.101 \
    --vip 183.131.145.82 \
    --controlplane \
    --services \
    --arp \
    --leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml
```
ERROR 1
```
Oct 21 10:44:01 master01-183-131-145-85 kubelet[23679]: E1021 10:44:01.664374   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
Oct 21 10:44:01 master01-183-131-145-85 kubelet[23679]: E1021 10:44:01.764481   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
Oct 21 10:44:01 master01-183-131-145-85 kubelet[23679]: E1021 10:44:01.864582   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
Oct 21 10:44:01 master01-183-131-145-85 kubelet[23679]: E1021 10:44:01.964685   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
Oct 21 10:44:02 master01-183-131-145-85 kubelet[23679]: E1021 10:44:02.064796   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
Oct 21 10:44:02 master01-183-131-145-85 kubelet[23679]: E1021 10:44:02.164907   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
Oct 21 10:44:02 master01-183-131-145-85 kubelet[23679]: E1021 10:44:02.265034   23679 kubelet.go:2292] node "master01-183-131-145-85" not found
```
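These repeated lines are klog-format output from the kubelet embedded in a systemd journal line. When triaging them in bulk it can help to split each line into its severity, source location, and message; a small sketch of one way to do that (the regex is an assumption based on the standard klog header layout, not a kubelet-provided parser):

```python
import re

# One of the repeating kubelet journal lines from the error above
line = ('Oct 21 10:44:01 master01-183-131-145-85 kubelet[23679]: '
        'E1021 10:44:01.664374   23679 kubelet.go:2292] '
        'node "master01-183-131-145-85" not found')

# klog header: <severity><MMDD> <HH:MM:SS.ffffff> <pid> <file:line>] <message>
pattern = re.compile(
    r'(?P<sev>[IWEF])(?P<date>\d{4}) (?P<time>[\d:.]+)\s+(?P<pid>\d+) '
    r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)')

m = pattern.search(line)
print(m.group('sev'), m.group('src'), m.group('msg'))
```

Here the severity is `E` (error) and the message comes from `kubelet.go:2292`: the kubelet cannot find its own Node object in the apiserver yet, which is expected briefly during join but a problem if it repeats indefinitely.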
ERROR 2
```
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```