Category Archives: Kubernetes

Building a Highly Available Kubernetes Cluster with kube-vip

I previously wrote about building a highly available Kubernetes cluster with keepalived. Now that kube-vip is available, here is an updated installation guide:

The environment used is as follows:
Ubuntu Server 20.04 LTS (I switched to Debian/Ubuntu after CentOS became CentOS Stream)
containerd 1.5.5 (Docker support via dockershim is removed in Kubernetes 1.24, so containerd is used instead)
Kubernetes v1.23.5
kube-vip v0.4.3 (L2 ARP mode is used here to keep the deployment simple)
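
As a quick preview of the approach, the kube-vip static Pod manifest for L2 ARP mode is typically generated with the kube-vip image itself, roughly as sketched below; the interface name and VIP address are placeholders for illustration, not values from this post:

export VIP=192.168.0.100                # placeholder: your floating IP
export INTERFACE=eth0                   # placeholder: the host's network interface
ctr image pull ghcr.io/kube-vip/kube-vip:v0.4.3
ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.4.3 vip /kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml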

Continue reading

Installing MicroK8s on Ubuntu

What is MicroK8s?

MicroK8s is a powerful, lightweight, and reliable production-grade Kubernetes derivative. It is an enterprise-grade Kubernetes distribution with a small disk and memory footprint that ships production-ready add-ons such as Istio, Knative, Grafana, and Cilium out of the box. MicroK8s is the smallest and fastest multi-node Kubernetes.
It is well suited to small VPS instances and IoT devices such as the Raspberry Pi. It is not recommended as a way to learn Kubernetes itself, though.
Official site: https://microk8s.io
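
For reference, installing MicroK8s on Ubuntu is a single snap command; checking status and enabling add-ons looks like the sketch below (the add-on names are just examples):

sudo snap install microk8s --classic
sudo microk8s status --wait-ready
sudo microk8s enable dns dashboard
sudo microk8s kubectl get nodes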

Continue reading

Deploying the Kubernetes Community Ingress Controller

This NGINX Ingress Controller is the one maintained by the Kubernetes community (https://github.com/kubernetes/ingress-nginx); its configuration differs from the NGINX Inc. Ingress Controller (https://github.com/nginxinc/kubernetes-ingress) covered in an earlier post.

Installation is very simple; just run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

If the cluster is not running on a cloud provider, you can expose the controller via NodePort with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
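
Once the NodePort Service exists, the ports that were assigned can be looked up; the Service and namespace names below assume the defaults from the manifest above, so adjust them if yours differ:

kubectl get svc ingress-nginx -n ingress-nginx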

After installation, watch the status of the ingress controller pods with:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

The installed version can be checked with the following commands:

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

A typical Ingress manifest looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
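
To verify host-based routing once the s1 and s2 Services exist, requests can be sent through the controller with an explicit Host header; the node IP and NodePort below are example values to substitute with your own:

NODE_IP=192.168.1.104      # any cluster node (example value)
HTTP_PORT=30080            # the HTTP NodePort reported by kubectl get svc (example value)
curl -H "Host: foo.bar.com" http://$NODE_IP:$HTTP_PORT/
curl -H "Host: bar.foo.com" http://$NODE_IP:$HTTP_PORT/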

Ingress configuration for the Dashboard; the k8s-dashboard-secret must be created first.
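
If the secret does not already exist, it can be created from a certificate and key pair you already have, for example (tls.crt and tls.key are assumed to be present):

kubectl create secret tls k8s-dashboard-secret --key tls.key --cert tls.crt -n kube-system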

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"

spec:
  tls:
   - secretName: k8s-dashboard-secret
  rules:
   - http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

Upgrading a Highly Available Kubernetes Cluster with kubeadm

This tutorial demonstrates upgrading a three-master Kubernetes cluster from v1.11.1 to v1.12.1 with kubeadm.
Pull the new Kubernetes images (skip this step if you have direct access to k8s.gcr.io). To upgrade to a later release, change the version numbers accordingly. Worker nodes only need kube-proxy.

export VERSION=v1.12.1
docker pull mirrorgooglecontainers/kube-apiserver:${VERSION}
docker pull mirrorgooglecontainers/kube-scheduler:${VERSION}
docker pull mirrorgooglecontainers/kube-proxy:${VERSION}
docker pull mirrorgooglecontainers/kube-controller-manager:${VERSION}
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.2

docker tag mirrorgooglecontainers/kube-apiserver:${VERSION} k8s.gcr.io/kube-apiserver:${VERSION}
docker tag mirrorgooglecontainers/kube-scheduler:${VERSION} k8s.gcr.io/kube-scheduler:${VERSION}
docker tag mirrorgooglecontainers/kube-proxy:${VERSION} k8s.gcr.io/kube-proxy:${VERSION}
docker tag mirrorgooglecontainers/kube-controller-manager:${VERSION} k8s.gcr.io/kube-controller-manager:${VERSION}
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.2.2

Install the new version of kubeadm:

export VERSION=1.12.1
yum install -y kubeadm-${VERSION}

On the first master node, run the following command:

kubeadm upgrade plan

You should get output similar to the following:

[upgrade/versions] Latest version in the v1.11 series: v1.12.1

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.11.1   v1.12.1

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.1
Controller Manager   v1.11.1   v1.12.1
Scheduler            v1.11.1   v1.12.1
Kube Proxy           v1.11.1   v1.12.1
CoreDNS              1.1.3     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.1

Because the master nodes in this HA cluster were configured against the floating IP, the kubeadm configuration must be changed to each node's actual IP before upgrading:

kubectl get configmap -n kube-system kubeadm-config -o yaml >kubeadm-config-cm.yaml

Edit kubeadm-config-cm.yaml and change the following fields to the current node's IP address:
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
Add a parameter under etcd.local.extraArgs: initial-cluster-state: existing
Change etcd.local.extraArgs.initial-cluster to the etcd cluster member list, for example:

initial-cluster: k8s1=https://192.168.1.101:2380,k8s2=https://192.168.1.102:2380,k8s3=https://192.168.1.103:2380
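
Put together, the edited portion of the ConfigMap might look like the excerpt below; the addresses and member names follow the example above and should be replaced with your own values:

api:
  advertiseAddress: 192.168.1.101
etcd:
  local:
    extraArgs:
      advertise-client-urls: https://192.168.1.101:2379
      initial-advertise-peer-urls: https://192.168.1.101:2380
      listen-client-urls: https://127.0.0.1:2379,https://192.168.1.101:2379
      listen-peer-urls: https://192.168.1.101:2380
      initial-cluster-state: existing
      initial-cluster: k8s1=https://192.168.1.101:2380,k8s2=https://192.168.1.102:2380,k8s3=https://192.168.1.103:2380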

After editing, apply the updated ConfigMap:

kubectl apply -f kubeadm-config-cm.yaml --force

Run the following command to start the upgrade:

kubeadm upgrade apply v$VERSION

The following message indicates the upgrade completed successfully:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.1". Enjoy!

After the first master has been upgraded, continue with the remaining master nodes:

kubectl get configmap -n kube-system kubeadm-config -o yaml >kubeadm-config-cm.yaml

Edit kubeadm-config-cm.yaml and change the following fields to the current node's IP address:
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
Change etcd.local.extraArgs.name to the current node's hostname.
Add the current node to ClusterStatus.apiEndpoints, for example:

  ClusterStatus: |
    apiEndpoints:
      k8s1.test.local:
        advertiseAddress: 192.168.1.101
        bindPort: 6443
      k8s2.test.local:
        advertiseAddress: 192.168.1.102
        bindPort: 6443

After editing, apply the updated ConfigMap:

kubectl apply -f kubeadm-config-cm.yaml --force

Then add the cri-socket annotation for the current node:

kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock

Run the following command to start the upgrade:

kubeadm upgrade apply v$VERSION

After all master nodes have been upgraded, manually install the new kubelet and kubectl:

export VERSION=1.12.1
yum install -y kubelet-${VERSION} kubectl-${VERSION}
systemctl daemon-reload
systemctl restart kubelet


Deploying the NGINX Inc. Ingress Controller

Ingress is an API object that manages external access to the services in a cluster. It can provide load balancing, SSL termination, and name-based virtual hosting.
There are two NGINX-based Ingress Controllers: one made by NGINX Inc. and one made by the Kubernetes community; the differences between them are described here. This post covers the NGINX Inc. controller.
Create the Namespace and Service Account:

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/common/ns-and-sa.yaml

Create a TLS certificate and private key. The manifest below uses a sample certificate and key; generating your own is recommended.

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/common/default-server-secret.yaml

Create the ConfigMap:

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/common/nginx-config.yaml

Create the RBAC resources:

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/rbac/rbac.yaml

Deploy the Ingress Controller. Pull the image first:

docker pull nginx/nginx-ingress:alpine

There are two ways to deploy the Ingress Controller:

  • Deployment: the number of Ingress Controller replicas can be adjusted dynamically
  • DaemonSet: the Ingress Controller runs on every node or on a selected group of nodes

1. Deploying with a Deployment

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:alpine
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
EOF

2. Deploying with a DaemonSet

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:alpine
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
EOF

Confirm the Ingress Controller is running:

kubectl get pods --namespace=nginx-ingress

If the DaemonSet approach is used, ports 80 and 443 of the Ingress Controller are mapped to the same ports on each node, so the controller can be reached via any node's IP on those ports.
If the Deployment approach is used, a NodePort-based Service (or a LoadBalancer) must be created to reach it, as follows:

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/service/nodeport.yaml
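
After the Service is created, the NodePorts that were assigned can be looked up; the Service name nginx-ingress in the nginx-ingress namespace matches the upstream manifest, so adjust it if yours differs:

kubectl get svc nginx-ingress --namespace=nginx-ingress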

To uninstall the Ingress Controller, simply delete the whole namespace:

kubectl delete namespace nginx-ingress

Installing Kubernetes Dashboard v1.10.0

Pull the Kubernetes Dashboard image and deploy the Dashboard:

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The Dashboard can be accessed in several ways:

  • kubectl proxy: only accepts 127.0.0.1 and localhost as source addresses, so an SSH tunnel is required; cumbersome and not recommended
  • NodePort: only recommended for development environments
  • Ingress: exposes the Dashboard through an Ingress Controller; flexible and the most recommended option, though more involved, explained in detail below
  • API Server: the API server is reachable from outside the cluster, so this is also a recommended option, explained in detail below

Method 1: Accessing the Dashboard through an Ingress

First deploy an Ingress Controller as described in the earlier tutorial: Deploying the NGINX Ingress Controller.
In this example the NGINX Ingress Controller is deployed as a DaemonSet, which saves you from having to look up a NodePort number. The example Ingress host name is dashboard.test.local; create a DNS A record pointing it at any worker node's address.
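
If no DNS server is available while testing, a hosts entry pointing the name at any worker node also works; the address below is the example cluster's k8s-worker1 and should be replaced with one of your own nodes:

echo "192.168.1.104 dashboard.test.local" >> /etc/hosts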
 
Create a TLS certificate for the Dashboard; skip this step if you have a CA-issued certificate.
The openssl command below generates a 10-year (3650-day) self-signed certificate for dashboard.test.local:

openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=dashboard.test.local/O=dashboard.test.local"

Create a secret from the certificate generated above:

kubectl create secret tls kubernetes-dashboard-certs --key tls.key --cert tls.crt -n kube-system

Create the Ingress:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-dashboard
  namespace: kube-system
  annotations:
    nginx.org/ssl-services: "kubernetes-dashboard" 
spec:
  rules:
  - host: dashboard.test.local
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
  tls:
  - hosts:
    - dashboard.test.local
    secretName: kubernetes-dashboard-certs
EOF

Then open https://dashboard.test.local in a browser to reach the Kubernetes Dashboard.

Method 2: Accessing the Dashboard through the API Server

The Dashboard URL via the API Server is:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
If Kubernetes is deployed as an HA cluster, use the master VIP address and the corresponding port.
Following the earlier deployment tutorial, the example URL is https://k8s.test.local:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/. Opening this URL directly returns an Anonymous Forbidden error, because the default identity RBAC assigns to unauthenticated users has no access rights. A client certificate can be generated from the cluster's admin.conf as follows:

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Import the generated certificate into your browser, restart it, and open the Dashboard URL again. The browser will prompt you to choose a certificate; select it to log in.

After selecting the certificate, the login screen appears. Choose Token there; the steps below show how to obtain one.

User login

Kubernetes uses tokens for user authentication, so the corresponding resources need to be created to access the Dashboard properly.
Create the admin-user ServiceAccount and ClusterRoleBinding:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Then retrieve the admin-user token (for convenience, the command below prints only the token itself, with no extra output):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | awk '$1=="token:"{print $2}'


Paste the token into the token field and click Sign in to get to the Dashboard.

Deploying a Highly Available Kubernetes v1.11.1 Cluster with kubeadm

1. Environment
A production HA Kubernetes deployment needs at least three master nodes; more can be added if required, but the number of master nodes should be odd.

+-------------+---------------+---------------------------+
|   Hostname  |   IP Address  |           Role            |
+-------------+---------------+---------------------------+
| k8s         | 192.168.1.100 | VIP                       |
+-------------+---------------+---------------------------+
| k8s-master1 | 192.168.1.101 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-master2 | 192.168.1.102 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-master3 | 192.168.1.103 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-worker1 | 192.168.1.104 | Worker                    |
+-------------+---------------+---------------------------+
| k8s-worker2 | 192.168.1.105 | Worker                    |
+-------------+---------------+---------------------------+
| k8s-worker3 | 192.168.1.106 | Worker                    |
+-------------+---------------+---------------------------+

Add the host entries on all nodes (not needed if DNS records exist):

cat <<EOF >> /etc/hosts
192.168.1.100 k8s k8s.test.local
192.168.1.101 k8s-master1 k8s-master1.test.local
192.168.1.102 k8s-master2 k8s-master2.test.local
192.168.1.103 k8s-master3 k8s-master3.test.local
192.168.1.104 k8s-worker1 k8s-worker1.test.local
192.168.1.105 k8s-worker2 k8s-worker2.test.local
192.168.1.106 k8s-worker3 k8s-worker3.test.local
EOF

Install Docker; the steps are omitted here, see https://www.ebanban.com/?p=496.
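
For completeness, a minimal sketch of installing Docker on CentOS 7 from the official Docker CE repository (the linked post has the full, tested steps):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker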

Set the following environment variables on all master nodes, adjusting the host names and IPs to your own environment:

export KUBECONFIG=/etc/kubernetes/admin.conf
export LOAD_BALANCER_DNS=k8s.test.local
export LOAD_BALANCER_PORT=8443
export CP1_HOSTNAME=k8s-master1.test.local
export CP2_HOSTNAME=k8s-master2.test.local
export CP3_HOSTNAME=k8s-master3.test.local
export VIP_IP=192.168.1.100
export CP1_IP=192.168.1.101
export CP2_IP=192.168.1.102
export CP3_IP=192.168.1.103

Disable the firewall, swap, and SELinux, and adjust the kernel parameters:

sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo setenforce 0
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Pull the Kubernetes images from the mirrorgooglecontainers mirror:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker pull quay.io/coreos/flannel:v0.10.0-amd64

Retag the images with their k8s.gcr.io names:

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

Remove the no-longer-needed image tags:

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.1.3

Install and configure kubelet:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1
systemctl enable kubelet

2. Prepare SSH keys
Generate an SSH key (usually done on the first master, from a terminal):

ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""

Copy the SSH key to the other hosts:

for host in {$CP1_HOSTNAME,$CP2_HOSTNAME,$CP3_HOSTNAME}; do ssh-copy-id $host; done

3. Deploy keepalived (master nodes)
keepalived provides the floating virtual IP and assigns it to the highest-priority node whose haproxy is running.
Configure and start keepalived on the first master; if the network interface is not eth0 as in the example, change it to the right name:

yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
} 
vrrp_instance haproxy-vip {
    state BACKUP
    priority 102
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP1_IP
    unicast_peer {
        $CP2_IP
        $CP3_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived

Configure and start keepalived on the second master:

yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}
vrrp_instance haproxy-vip {
    state BACKUP
    priority 101
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP2_IP
    unicast_peer {
        $CP1_IP
        $CP3_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived

Configure and start keepalived on the third master:

yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}
vrrp_instance haproxy-vip {
    state BACKUP
    priority 100
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP3_IP
    unicast_peer {
        $CP1_IP
        $CP2_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived

4. Deploy HAProxy (master nodes)
HAProxy checks the health of the cluster's API servers and load-balances across them.
Run the following on all three master nodes to install and enable HAProxy:

yum install -y haproxy && systemctl enable haproxy
cat << EOF > /etc/haproxy/haproxy.cfg
global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  tune.ssl.default-dh-param 2048

defaults
  log global
  mode http
  option dontlognull
  timeout connect 5000ms
  timeout client  600000ms
  timeout server  600000ms

listen stats
    bind :9090
    mode http
    balance
    stats uri /haproxy_stats
    stats auth admin:admin
    stats admin if TRUE

frontend kube-apiserver-https
   mode tcp
   bind :8443
   default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server k8s-master1 192.168.1.101:6443 check
    server k8s-master2 192.168.1.102:6443 check
    server k8s-master3 192.168.1.103:6443 check
EOF
systemctl start haproxy

5. Initialize the Kubernetes cluster (first master)
Run the following on the first master. 10.244.0.0/16 is flannel's CIDR; if you use another CNI, change it to the corresponding CIDR.

cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP1_IP:2379"
      advertise-client-urls: "https://$CP1_IP:2379"
      listen-peer-urls: "https://$CP1_IP:2380"
      initial-advertise-peer-urls: "https://$CP1_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380"
      name: $CP1_HOSTNAME
    serverCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
    peerCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
networking:
    podSubnet: "10.244.0.0/16"
EOF

Initialize the first master. When it finishes, record the generated kubeadm join command (including the token); it will be needed later when joining the worker nodes.

kubeadm init --config ~/kubeadm-config.yaml

Copy the relevant certificate files to the other master nodes:

CONTROL_PLANE_HOSTS="$CP2_HOSTNAME $CP3_HOSTNAME"
for host in $CONTROL_PLANE_HOSTS; do
    scp /etc/kubernetes/pki/ca.crt $host:
    scp /etc/kubernetes/pki/ca.key $host:
    scp /etc/kubernetes/pki/sa.key $host:
    scp /etc/kubernetes/pki/sa.pub $host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt $host:
    scp /etc/kubernetes/pki/front-proxy-ca.key $host:
    scp /etc/kubernetes/pki/etcd/ca.crt $host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key $host:etcd-ca.key
    scp /etc/kubernetes/admin.conf $host:
done

6. Join the cluster (second master)
Run the following on the second master:

cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
    controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP2_IP:2379"
      advertise-client-urls: "https://$CP2_IP:2379"
      listen-peer-urls: "https://$CP2_IP:2380"
      initial-advertise-peer-urls: "https://$CP2_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380"
      initial-cluster-state: existing
      name: $CP2_HOSTNAME
    serverCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
    peerCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
networking:
    podSubnet: "10.244.0.0/16"
EOF

Move the certificates into the proper directories:

mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/
mv ~/ca.key /etc/kubernetes/pki/
mv ~/sa.pub /etc/kubernetes/pki/
mv ~/sa.key /etc/kubernetes/pki/
mv ~/front-proxy-ca.crt /etc/kubernetes/pki/
mv ~/front-proxy-ca.key /etc/kubernetes/pki/
mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ~/admin.conf /etc/kubernetes/admin.conf

Configure and start kubelet:

kubeadm alpha phase certs all --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config ~/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config ~/kubeadm-config.yaml
systemctl start kubelet

Join the etcd cluster:

kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://${CP1_IP}:2379 \
member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
kubeadm alpha phase etcd local --config ~/kubeadm-config.yaml

Configure the node as a master:

kubeadm alpha phase kubeconfig all --config ~/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config ~/kubeadm-config.yaml
kubeadm alpha phase mark-master --config ~/kubeadm-config.yaml

7. Join the cluster (third master)
Run the following on the third master:

cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
    controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP3_IP:2379"
      advertise-client-urls: "https://$CP3_IP:2379"
      listen-peer-urls: "https://$CP3_IP:2380"
      initial-advertise-peer-urls: "https://$CP3_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380,$CP3_HOSTNAME=https://$CP3_IP:2380"
      initial-cluster-state: existing
      name: $CP3_HOSTNAME
    serverCertSANs:
      - $CP3_HOSTNAME
      - $CP3_IP
    peerCertSANs:
      - $CP3_HOSTNAME
      - $CP3_IP
networking:
    podSubnet: "10.244.0.0/16"
EOF

Move the certificates into the proper directories:

mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/
mv ~/ca.key /etc/kubernetes/pki/
mv ~/sa.pub /etc/kubernetes/pki/
mv ~/sa.key /etc/kubernetes/pki/
mv ~/front-proxy-ca.crt /etc/kubernetes/pki/
mv ~/front-proxy-ca.key /etc/kubernetes/pki/
mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ~/admin.conf /etc/kubernetes/admin.conf

Configure and start kubelet:

kubeadm alpha phase certs all --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config ~/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config ~/kubeadm-config.yaml
systemctl start kubelet

Join the etcd cluster:

kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://${CP1_IP}:2379 \
member add ${CP3_HOSTNAME} https://${CP3_IP}:2380
kubeadm alpha phase etcd local --config ~/kubeadm-config.yaml

Configure the node as a master:

kubeadm alpha phase kubeconfig all --config ~/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config ~/kubeadm-config.yaml
kubeadm alpha phase mark-master --config ~/kubeadm-config.yaml

8. Configure the network plugin (flannel in this example)

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

9. Join the worker nodes to the cluster
Pull the Kubernetes images:

docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh

Install kubelet and kubeadm:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1
systemctl enable kubelet

Join the worker to the cluster using the command generated when the first master was initialized, for example:

kubeadm join k8s.test.local:8443 --token bqnani.kwxe3y34vy22xnhm --discovery-token-ca-cert-hash sha256:b6146fea7a63d3a66e406c12f55f8d99537db99880409939e4aba206300e06cc
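
If the original join command was not recorded, a new token together with the matching join command can usually be generated again on a master node:

kubeadm token create --print-join-command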

10. Verify the cluster
Check the health of the etcd cluster:

docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd-amd64:3.2.18 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://$CP1_IP:2379 cluster-health

If everything is healthy, it should return something like:

member 4fea45cc5063c213 is healthy: got healthy result from https://192.168.1.101:2379
member 963074f50ce23d9a is healthy: got healthy result from https://192.168.1.102:2379
member 9a186be7d1ea4bbe is healthy: got healthy result from https://192.168.1.103:2379

Check the status of the cluster nodes:

kubectl get nodes

If everything is healthy, it should return something like:

NAME                     STATUS    ROLES     AGE       VERSION
k8s-master1.test.local   Ready     master    1d        v1.11.1
k8s-master2.test.local   Ready     master    1d        v1.11.1
k8s-master3.test.local   Ready     master    1d        v1.11.1
k8s-worker1.test.local   Ready     <none>    1d        v1.11.1
k8s-worker2.test.local   Ready     <none>    1d        v1.11.1
k8s-worker3.test.local   Ready     <none>    1d        v1.11.1

Check the status of the cluster pods:

kubectl get pods --all-namespaces

If everything is healthy, the output should look like the following: there should be three instances each of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, two coredns pods by default, and one kube-proxy and one kube-flannel pod per node (six each in this example).

NAMESPACE       NAME                                            READY     STATUS    RESTARTS   AGE
kube-system     coredns-78fcdf6894-j6cpl                        1/1       Running   0          1d
kube-system     coredns-78fcdf6894-kgqp7                        1/1       Running   0          1d
kube-system     etcd-k8s-master1.test.local                     1/1       Running   0          1d
kube-system     etcd-k8s-master2.test.local                     1/1       Running   0          1d
kube-system     etcd-k8s-master3.test.local                     1/1       Running   0          1d
kube-system     kube-apiserver-k8s-master1.test.local           1/1       Running   0          1d
kube-system     kube-apiserver-k8s-master2.test.local           1/1       Running   0          1d
kube-system     kube-apiserver-k8s-master3.test.local           1/1       Running   0          1d
kube-system     kube-controller-manager-k8s-master1.test.local  1/1       Running   0          1d
kube-system     kube-controller-manager-k8s-master2.test.local  1/1       Running   0          1d
kube-system     kube-controller-manager-k8s-master3.test.local  1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-2r7jp                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-d5vlw                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-qd5x6                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-wzl26                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-xklr6                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-4jr5v                     1/1       Running   0          1d
kube-system     kube-proxy-8gmdd                                1/1       Running   0          1d
kube-system     kube-proxy-8rs8m                                1/1       Running   0          1d
kube-system     kube-proxy-pm6tq                                1/1       Running   0          1d
kube-system     kube-proxy-shsjv                                1/1       Running   0          1d
kube-system     kube-proxy-vj5gk                                1/1       Running   0          1d
kube-system     kube-proxy-wd8xj                                1/1       Running   0          1d
kube-system     kube-scheduler-k8s-master1.test.local           1/1       Running   0          1d
kube-system     kube-scheduler-k8s-master2.test.local           1/1       Running   0          1d
kube-system     kube-scheduler-k8s-master3.test.local           1/1       Running   0          1d

Commands to run before reinstalling Kubernetes

Run the following commands to clean up the Kubernetes configuration, bridge interfaces, and related state:

kubeadm reset -f
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker