1. Environment Setup
For a production deployment of a highly available Kubernetes cluster, configure at least three master nodes. More can be added if requirements demand it, but the number of master nodes should always be odd so that etcd can maintain quorum.
+-------------+---------------+---------------------------+
| Hostname    | IP Address    | Role                      |
+-------------+---------------+---------------------------+
| k8s         | 192.168.1.100 | VIP                       |
+-------------+---------------+---------------------------+
| k8s-master1 | 192.168.1.101 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-master2 | 192.168.1.102 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-master3 | 192.168.1.103 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-worker1 | 192.168.1.104 | Worker                    |
+-------------+---------------+---------------------------+
| k8s-worker2 | 192.168.1.105 | Worker                    |
+-------------+---------------+---------------------------+
| k8s-worker3 | 192.168.1.106 | Worker                    |
+-------------+---------------+---------------------------+
Add the host entries on all nodes (not needed if DNS records already exist):
cat <<EOF >> /etc/hosts
192.168.1.100 k8s k8s.test.local
192.168.1.101 k8s-master1 k8s-master1.test.local
192.168.1.102 k8s-master2 k8s-master2.test.local
192.168.1.103 k8s-master3 k8s-master3.test.local
192.168.1.104 k8s-worker1 k8s-worker1.test.local
192.168.1.105 k8s-worker2 k8s-worker2.test.local
192.168.1.106 k8s-worker3 k8s-worker3.test.local
EOF
Install Docker on all nodes; the detailed steps are omitted here, see https://www.ebanban.com/?p=496 for reference.
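As a rough sketch (an assumption, the linked article may use a different method), Docker can be installed from the CentOS 7 extras repository; check the Kubernetes release notes for the Docker version validated against v1.11:

yum install -y docker
systemctl enable docker && systemctl start docker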
Set the following environment variables on all master nodes, adjusting the hostnames and IP addresses to match your actual environment:
export KUBECONFIG=/etc/kubernetes/admin.conf
export LOAD_BALANCER_DNS=k8s.test.local
export LOAD_BALANCER_PORT=8443
export CP1_HOSTNAME=k8s-master1.test.local
export CP2_HOSTNAME=k8s-master2.test.local
export CP3_HOSTNAME=k8s-master3.test.local
export VIP_IP=192.168.1.100
export CP1_IP=192.168.1.101
export CP2_IP=192.168.1.102
export CP3_IP=192.168.1.103
Disable the firewall, disable swap, disable SELinux, and adjust kernel parameters:
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
# keep SELinux permissive across reboots (setenforce 0 alone does not persist)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
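On some systems the net.bridge.* keys only exist after the br_netfilter kernel module is loaded; if sysctl --system complains that the keys are missing, loading the module first should help. A quick sketch, including checks that the settings took effect:

modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables   # should print 1
free -m                                     # the Swap line should show 0 total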
Pull the Kubernetes images from the mirrorgooglecontainers registry (useful where k8s.gcr.io is not directly reachable):
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker pull quay.io/coreos/flannel:v0.10.0-amd64
Tag the images with their k8s.gcr.io names:
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
Remove the now-redundant image tags:
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.1.3
Install and configure kubelet:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1
systemctl enable kubelet
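To confirm the pinned versions were installed, a quick check (sketch):

kubeadm version -o short   # expect v1.11.1
kubelet --version          # expect Kubernetes v1.11.1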
2. Prepare SSH Keys
Generate an SSH key (usually done on the first master, from a terminal):
ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""
Copy the SSH key to the other hosts:
for host in $CP1_HOSTNAME $CP2_HOSTNAME $CP3_HOSTNAME; do
  ssh-copy-id $host
done
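Before proceeding, it is worth verifying that passwordless login works (a sketch; BatchMode makes ssh fail instead of prompting for a password):

for host in $CP1_HOSTNAME $CP2_HOSTNAME $CP3_HOSTNAME; do
  ssh -o BatchMode=yes $host hostname
done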
3. Deploy Keepalived (Master Nodes)
Keepalived provides the floating virtual IP (VIP) and assigns it to the highest-priority node on which HAProxy is running properly.
Configure and start Keepalived on the first master; if your network interface is not named eth0 as in the example, change it accordingly:
yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 102
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP1_IP
    unicast_peer {
        $CP2_IP
        $CP3_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived
Configure and start Keepalived on the second master:
yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 101
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP2_IP
    unicast_peer {
        $CP1_IP
        $CP3_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived
Configure and start Keepalived on the third master:
yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 100
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP3_IP
    unicast_peer {
        $CP1_IP
        $CP2_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived
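With all three instances running, the VIP should land on the first master, which has the highest priority (102). To check which node currently holds it (eth0 is an assumption, substitute your interface name):

ip addr show eth0 | grep "$VIP_IP"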
4. Deploy HAProxy (Master Nodes)
HAProxy health-checks the kube-apiserver instances in the cluster and load-balances traffic across them.
Run the following commands on all three master nodes to install and enable HAProxy:
yum install -y haproxy && systemctl enable haproxy
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option dontlognull
    timeout connect 5000ms
    timeout client 600000ms
    timeout server 600000ms

listen stats
    bind :9090
    mode http
    balance
    stats uri /haproxy_stats
    stats auth admin:admin
    stats admin if TRUE

frontend kube-apiserver-https
    mode tcp
    bind :8443
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server k8s-master1 192.168.1.101:6443 check
    server k8s-master2 192.168.1.102:6443 check
    server k8s-master3 192.168.1.103:6443 check
EOF
systemctl start haproxy
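To confirm HAProxy came up and is listening on the frontend and stats ports (a quick sketch):

ss -lnt | grep -E ':8443|:9090'

The stats page should also be reachable at http://<master-ip>:9090/haproxy_stats (credentials admin:admin as configured above). Until the API servers are running, the backend servers will show as DOWN there.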
5. Initialize the k8s Cluster (First Master)
Run the following on the first master. 10.244.0.0/16 is the flannel CIDR; if you use a different CNI, change it to the corresponding CIDR.
cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP1_IP:2379"
      advertise-client-urls: "https://$CP1_IP:2379"
      listen-peer-urls: "https://$CP1_IP:2380"
      initial-advertise-peer-urls: "https://$CP1_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380"
      name: $CP1_HOSTNAME
    serverCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
    peerCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
networking:
  podSubnet: "10.244.0.0/16"
EOF
Initialize the first master. When it completes, record the generated kubeadm join command (which contains the token); it will be used later when the worker nodes join.
kubeadm init --config ~/kubeadm-config.yaml
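Before copying certificates to the other masters, it may be worth confirming that the first control plane came up (note the node will stay NotReady until the network plugin is deployed in step 8):

kubectl get pods -n kube-system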
Copy the relevant certificate files to the other master nodes:
CONTROL_PLANE_HOSTS="$CP2_HOSTNAME $CP3_HOSTNAME"
for host in $CONTROL_PLANE_HOSTS; do
  scp /etc/kubernetes/pki/ca.crt $host:
  scp /etc/kubernetes/pki/ca.key $host:
  scp /etc/kubernetes/pki/sa.key $host:
  scp /etc/kubernetes/pki/sa.pub $host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt $host:
  scp /etc/kubernetes/pki/front-proxy-ca.key $host:
  scp /etc/kubernetes/pki/etcd/ca.crt $host:etcd-ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $host:etcd-ca.key
  scp /etc/kubernetes/admin.conf $host:
done
6. Join the k8s Cluster (Second Master)
Run the following commands on the second master:
cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP2_IP:2379"
      advertise-client-urls: "https://$CP2_IP:2379"
      listen-peer-urls: "https://$CP2_IP:2380"
      initial-advertise-peer-urls: "https://$CP2_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380"
      initial-cluster-state: existing
      name: $CP2_HOSTNAME
    serverCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
    peerCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
networking:
  podSubnet: "10.244.0.0/16"
EOF
Move the certificates into the appropriate directories:
mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/
mv ~/ca.key /etc/kubernetes/pki/
mv ~/sa.pub /etc/kubernetes/pki/
mv ~/sa.key /etc/kubernetes/pki/
mv ~/front-proxy-ca.crt /etc/kubernetes/pki/
mv ~/front-proxy-ca.key /etc/kubernetes/pki/
mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ~/admin.conf /etc/kubernetes/admin.conf
Configure and start kubelet:
kubeadm alpha phase certs all --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config ~/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config ~/kubeadm-config.yaml
systemctl start kubelet
Join the etcd cluster:
kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --endpoints=https://${CP1_IP}:2379 \
  member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
kubeadm alpha phase etcd local --config ~/kubeadm-config.yaml
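To confirm the new member was registered, the member list can be queried the same way (a sketch mirroring the command above):

kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --endpoints=https://${CP1_IP}:2379 \
  member list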
Configure the node as a master:
kubeadm alpha phase kubeconfig all --config ~/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config ~/kubeadm-config.yaml
kubeadm alpha phase mark-master --config ~/kubeadm-config.yaml
7. Join the k8s Cluster (Third Master)
Run the following commands on the third master:
cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP3_IP:2379"
      advertise-client-urls: "https://$CP3_IP:2379"
      listen-peer-urls: "https://$CP3_IP:2380"
      initial-advertise-peer-urls: "https://$CP3_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380,$CP3_HOSTNAME=https://$CP3_IP:2380"
      initial-cluster-state: existing
      name: $CP3_HOSTNAME
    serverCertSANs:
      - $CP3_HOSTNAME
      - $CP3_IP
    peerCertSANs:
      - $CP3_HOSTNAME
      - $CP3_IP
networking:
  podSubnet: "10.244.0.0/16"
EOF
Move the certificates into the appropriate directories:
mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/
mv ~/ca.key /etc/kubernetes/pki/
mv ~/sa.pub /etc/kubernetes/pki/
mv ~/sa.key /etc/kubernetes/pki/
mv ~/front-proxy-ca.crt /etc/kubernetes/pki/
mv ~/front-proxy-ca.key /etc/kubernetes/pki/
mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ~/admin.conf /etc/kubernetes/admin.conf
Configure and start kubelet:
kubeadm alpha phase certs all --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config ~/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config ~/kubeadm-config.yaml
systemctl start kubelet
Join the etcd cluster:
kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --endpoints=https://${CP1_IP}:2379 \
  member add ${CP3_HOSTNAME} https://${CP3_IP}:2380
kubeadm alpha phase etcd local --config ~/kubeadm-config.yaml
Configure the node as a master:
kubeadm alpha phase kubeconfig all --config ~/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config ~/kubeadm-config.yaml
kubeadm alpha phase mark-master --config ~/kubeadm-config.yaml
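At this point all three control planes should be up. A quick check from any master (each of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler should appear three times):

kubectl get pods -n kube-system | grep -E 'etcd|kube-apiserver|kube-controller|kube-scheduler'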
8. Deploy the Network Plugin (flannel as an Example)
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
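Once the DaemonSet is created, flannel pods should start on every node and the masters should transition to Ready. A quick check (the app=flannel label is an assumption taken from the manifest above and may differ between flannel versions):

kubectl -n kube-system get pods -l app=flannel
kubectl get nodes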
9. Join the Worker Nodes to the Cluster
Pull the Kubernetes images:
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
Install kubelet and kubeadm:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1
systemctl enable kubelet
Join each worker to the k8s cluster using the command generated when the first master was initialized, for example:
kubeadm join k8s.test.local:8443 --token bqnani.kwxe3y34vy22xnhm --discovery-token-ca-cert-hash sha256:b6146fea7a63d3a66e406c12f55f8d99537db99880409939e4aba206300e06cc
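The token embedded in that command expires after 24 hours by default. If it has expired, a fresh join command can be generated on any master (a sketch):

kubeadm token create --print-join-command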
10. Verify the Cluster Status
Check the etcd cluster health:
docker run --rm -it \
  --net host \
  -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd-amd64:3.2.18 etcdctl \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --endpoints https://$CP1_IP:2379 cluster-health
If healthy, it should return something like:
member 4fea45cc5063c213 is healthy: got healthy result from https://192.168.1.101:2379
member 963074f50ce23d9a is healthy: got healthy result from https://192.168.1.102:2379
member 9a186be7d1ea4bbe is healthy: got healthy result from https://192.168.1.103:2379
Check the cluster nodes:
kubectl get nodes
If healthy, it should return something like:
NAME                     STATUS    ROLES     AGE       VERSION
k8s-master1.test.local   Ready     master    1d        v1.11.1
k8s-master2.test.local   Ready     master    1d        v1.11.1
k8s-master3.test.local   Ready     master    1d        v1.11.1
k8s-worker1.test.local   Ready     <none>    1d        v1.11.1
k8s-worker2.test.local   Ready     <none>    1d        v1.11.1
k8s-worker3.test.local   Ready     <none>    1d        v1.11.1
Check the kube-system pods:
kubectl get pods -n kube-system
If healthy, the output should look like the following: there should be three instances each of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler; coredns defaults to two; and the number of kube-proxy and kube-flannel pods should match the number of nodes (six in this example).
NAME                                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-j6cpl                         1/1       Running   0          1d
coredns-78fcdf6894-kgqp7                         1/1       Running   0          1d
etcd-k8s-master1.test.local                      1/1       Running   0          1d
etcd-k8s-master2.test.local                      1/1       Running   0          1d
etcd-k8s-master3.test.local                      1/1       Running   0          1d
kube-apiserver-k8s-master1.test.local            1/1       Running   0          1d
kube-apiserver-k8s-master2.test.local            1/1       Running   0          1d
kube-apiserver-k8s-master3.test.local            1/1       Running   0          1d
kube-controller-manager-k8s-master1.test.local   1/1       Running   0          1d
kube-controller-manager-k8s-master2.test.local   1/1       Running   0          1d
kube-controller-manager-k8s-master3.test.local   1/1       Running   0          1d
kube-flannel-ds-amd64-2r7jp                      1/1       Running   0          1d
kube-flannel-ds-amd64-d5vlw                      1/1       Running   0          1d
kube-flannel-ds-amd64-qd5x6                      1/1       Running   0          1d
kube-flannel-ds-amd64-wzl26                      1/1       Running   0          1d
kube-flannel-ds-amd64-xklr6                      1/1       Running   0          1d
kube-flannel-ds-amd64-4jr5v                      1/1       Running   0          1d
kube-proxy-8gmdd                                 1/1       Running   0          1d
kube-proxy-8rs8m                                 1/1       Running   0          1d
kube-proxy-pm6tq                                 1/1       Running   0          1d
kube-proxy-shsjv                                 1/1       Running   0          1d
kube-proxy-vj5gk                                 1/1       Running   0          1d
kube-proxy-wd8xj                                 1/1       Running   0          1d
kube-scheduler-k8s-master1.test.local            1/1       Running   0          1d
kube-scheduler-k8s-master2.test.local            1/1       Running   0          1d
kube-scheduler-k8s-master3.test.local            1/1       Running   0          1d
Comments

Q: I ran into problems joining the etcd cluster. Where is the etcd hostname configured?

A: If the hosts use FQDNs as their hostnames, the FQDN names should also be listed in /etc/hosts.
Q: Regarding the config lines

apiServerCertSANs:
- "$LOAD_BALANCER_DNS"

can $LOAD_BALANCER_DNS be a custom domain name?

A: The variable is set in an earlier command ("export LOAD_BALANCER_DNS=k8s.test.local"); k8s.test.local is the domain name that maps to the floating IP. It can be any custom name, as long as DNS or /etc/hosts can resolve it.