Deploying NGINX Inc.'s Ingress Controller

An Ingress is an API object that manages external access to the services in a cluster. It can provide load balancing, SSL termination, and name-based virtual hosting.
There are two NGINX-based Ingress Controllers: one maintained by NGINX, Inc. and one maintained by the Kubernetes community; see the project documentation for a comparison of the two. This article covers the NGINX Ingress Controller maintained by NGINX, Inc.
Create the namespace and service account

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/common/ns-and-sa.yaml

Create a TLS certificate and private key for the default server. The following uses the sample certificate and key from the repository; generating your own is recommended.

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/common/default-server-secret.yaml

Create the ConfigMap

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/common/nginx-config.yaml

Create the RBAC resources

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/rbac/rbac.yaml

Deploy the Ingress Controller. First pull the image:

docker pull nginx/nginx-ingress:alpine

The Ingress Controller can be deployed in two ways:

  • Deployment: lets you dynamically adjust the number of Ingress Controller replicas
  • DaemonSet: runs the Ingress Controller on every node, or on a selected set of nodes

1. Deploying with a Deployment

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:alpine
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
EOF

2. Deploying with a DaemonSet

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:alpine
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
EOF

Confirm that the Ingress Controller is running

kubectl get pods --namespace=nginx-ingress

If you deployed with the DaemonSet, ports 80 and 443 of the Ingress Controller are mapped to the same ports on each node, so the controller can be reached through any node's IP address on those ports.
If you deployed with the Deployment, create a NodePort Service to reach the controller (a LoadBalancer Service also works):

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/install/service/nodeport.yaml
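To verify the controller is reachable through the Service, you can look up the assigned NodePorts and probe any node; this is only a quick sanity check, and &lt;node-ip&gt; and &lt;http-nodeport&gt; below are placeholders for a node address and the HTTP NodePort shown by the first command. With no Ingress resources defined yet, the default server typically answers with a 404.

kubectl get svc --namespace=nginx-ingress
curl -I http://&lt;node-ip&gt;:&lt;http-nodeport&gt;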

To uninstall the Ingress Controller, simply delete the whole namespace:

kubectl delete namespace nginx-ingress

Installing Kubernetes Dashboard v1.10.0

Pull the Kubernetes Dashboard image, retag it, and deploy the Dashboard:

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
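A quick check that the Dashboard pod started, assuming the default labels from the recommended manifest (it is deployed into kube-system with the k8s-app=kubernetes-dashboard label):

kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard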

The Dashboard can be accessed in several ways:

  • kubectl proxy: only accepts requests originating from 127.0.0.1/localhost, so remote access requires an SSH tunnel; cumbersome and not recommended
  • NodePort: only recommended for development environments
  • Ingress: exposes the Dashboard through an Ingress Controller; flexible and the most recommended option, though more complex. Described in detail below.
  • API server: since the API server is already exposed and reachable from outside the cluster, this is also a recommended option. Described in detail below.

Method 1: Accessing the Dashboard through an Ingress

First deploy an Ingress Controller as described in the section above (Deploying NGINX Inc.'s Ingress Controller).
In this example the NGINX Ingress Controller is deployed as a DaemonSet, which saves you from having to look up a NodePort number. The example Ingress host name is dashboard.test.local; create an A record in DNS pointing that name at any worker node's address.
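If DNS is not available, for a quick test you can instead add the name to /etc/hosts on the client machine, pointing it at any worker node (the address below is only illustrative):

echo "192.168.1.104 dashboard.test.local" >> /etc/hosts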
 
Create a TLS certificate for the Dashboard; skip this step if you already have a CA-issued certificate.
The following openssl command generates a 10-year (3650-day) self-signed certificate for dashboard.test.local:

openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=dashboard.test.local/O=dashboard.test.local"

Create a secret from the certificate generated above:

kubectl create secret tls kubernetes-dashboard-certs --key tls.key --cert tls.crt -n kube-system

Create the Ingress

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-dashboard
  namespace: kube-system
  annotations:
    nginx.org/ssl-services: "kubernetes-dashboard" 
spec:
  rules:
  - host: dashboard.test.local
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
  tls:
  - hosts:
    - dashboard.test.local
    secretName: kubernetes-dashboard-certs
EOF

Then open https://dashboard.test.local in a browser to reach the Kubernetes Dashboard.
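Before using a browser, you can also verify the routing from the command line; this is an illustrative check where --resolve pins the host name to one worker node's address (192.168.1.104 here is just an example IP) and -k accepts the self-signed certificate:

curl -k --resolve dashboard.test.local:443:192.168.1.104 https://dashboard.test.local/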

Method 2: Accessing the Dashboard through the API server

The Dashboard URL through the API server is:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
If Kubernetes is deployed as a highly available cluster, use the master VIP address and the corresponding port.
Following the earlier deployment guide, the example URL is https://k8s.test.local:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/. Opening that URL directly returns an Anonymous Forbidden error, because the default identity that RBAC assigns to unauthenticated users has no access rights. You can generate a client certificate from the cluster's admin.conf as follows:

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Import the generated certificate into your browser, restart the browser, and open the Dashboard URL again. The browser will prompt you to select a certificate; choose the one you just imported to log in.

After selecting the certificate, the login page appears. Choose the Token option; the steps below show how to obtain a token.

User login

Kubernetes authenticates users with tokens, so to access the Dashboard you need to create the corresponding account and binding.
Create the admin-user service account and cluster role binding:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Then retrieve admin-user's token, which is a base64-encoded string. For convenience, the command below prints only the token, with no extra output:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | awk '$1=="token:"{print $2}'


Paste the token into the token field from the previous step and click Sign in; the Dashboard UI should now appear.

Deploying a Highly Available Kubernetes v1.11.1 Cluster with kubeadm

1. Environment
For a production HA deployment, configure at least three master nodes; add more if needed, but keep the number of master nodes odd.

+-------------+---------------+---------------------------+
|   Hostname  |   IP Address  |           Role            |
+-------------+---------------+---------------------------+
| k8s         | 192.168.1.100 | VIP                       |
+-------------+---------------+---------------------------+
| k8s-master1 | 192.168.1.101 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-master2 | 192.168.1.102 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-master3 | 192.168.1.103 | Master,Keepalived,HAProxy |
+-------------+---------------+---------------------------+
| k8s-worker1 | 192.168.1.104 | Worker                    |
+-------------+---------------+---------------------------+
| k8s-worker2 | 192.168.1.105 | Worker                    |
+-------------+---------------+---------------------------+
| k8s-worker3 | 192.168.1.106 | Worker                    |
+-------------+---------------+---------------------------+

Add the host entries on all nodes (not needed if DNS records exist):

cat <<EOF >> /etc/hosts
192.168.1.100 k8s k8s.test.local
192.168.1.101 k8s-master1 k8s-master1.test.local
192.168.1.102 k8s-master2 k8s-master2.test.local
192.168.1.103 k8s-master3 k8s-master3.test.local
192.168.1.104 k8s-worker1 k8s-worker1.test.local
192.168.1.105 k8s-worker2 k8s-worker2.test.local
192.168.1.106 k8s-worker3 k8s-worker3.test.local
EOF

Install Docker; the detailed steps are omitted here, see https://www.ebanban.com/?p=496 for reference.
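A minimal installation sketch, assuming CentOS 7 and the upstream Docker CE yum repository (see the link above for the full steps and version pinning):

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl enable docker && sudo systemctl start docker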

Set the following environment variables on all master nodes, changing the hostnames and IP addresses to match your environment:

export KUBECONFIG=/etc/kubernetes/admin.conf
export LOAD_BALANCER_DNS=k8s.test.local
export LOAD_BALANCER_PORT=8443
export CP1_HOSTNAME=k8s-master1.test.local
export CP2_HOSTNAME=k8s-master2.test.local
export CP3_HOSTNAME=k8s-master3.test.local
export VIP_IP=192.168.1.100
export CP1_IP=192.168.1.101
export CP2_IP=192.168.1.102
export CP3_IP=192.168.1.103

Disable the firewall, swap, and SELinux, and set the required kernel parameters:

sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo setenforce 0
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Pull the Kubernetes images from the mirrorgooglecontainers mirror:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker pull quay.io/coreos/flannel:v0.10.0-amd64

Retag the images with their k8s.gcr.io names:

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

Remove the now-unneeded image tags:

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.1.3

Install and configure kubelet:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1
systemctl enable kubelet

2. Prepare SSH keys
Generate an SSH key (typically done on the first master, from a terminal):

ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""

Copy the SSH key to the other masters:

for host in {$CP1_HOSTNAME,$CP2_HOSTNAME,$CP3_HOSTNAME}; do ssh-copy-id $host; done
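To confirm passwordless SSH works before continuing, a quick check such as the following should print each hostname without prompting for a password:

for host in $CP1_HOSTNAME $CP2_HOSTNAME $CP3_HOSTNAME; do ssh $host hostname; done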

3. Deploy Keepalived (master nodes)
Keepalived provides the floating virtual IP and assigns it to the highest-priority node on which HAProxy is running.
Configure and start Keepalived on the first master; if your network interface is not eth0 as in the example, change it to the correct name.

yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
} 
vrrp_instance haproxy-vip {
    state BACKUP
    priority 102
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP1_IP
    unicast_peer {
        $CP2_IP
        $CP3_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived

Configure and start Keepalived on the second master:

yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}
vrrp_instance haproxy-vip {
    state BACKUP
    priority 101
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP2_IP
    unicast_peer {
        $CP1_IP
        $CP3_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived

Configure and start Keepalived on the third master:

yum install -y keepalived curl psmisc && systemctl enable keepalived
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}
vrrp_instance haproxy-vip {
    state BACKUP
    priority 100
    interface eth0
    virtual_router_id 51
    advert_int 3
    unicast_src_ip $CP3_IP
    unicast_peer {
        $CP1_IP
        $CP2_IP
    }
    virtual_ipaddress {
        $VIP_IP
    }
    track_script {
        haproxy-check weight 20
    }
}
EOF
systemctl start keepalived
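At this point the VIP should be held by the highest-priority master (the first one). A quick way to check which node currently owns it, assuming the eth0 interface used in the configurations above:

ip addr show eth0 | grep -w "$VIP_IP"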

4. Deploy HAProxy (master nodes)
HAProxy health-checks the cluster's API servers and load-balances requests across them.
Run the following on all three master nodes to install and enable HAProxy:

yum install -y haproxy && systemctl enable haproxy
cat << EOF > /etc/haproxy/haproxy.cfg
global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  tune.ssl.default-dh-param 2048

defaults
  log global
  mode http
  option dontlognull
  timeout connect 5000ms
  timeout client  600000ms
  timeout server  600000ms

listen stats
    bind :9090
    mode http
    balance
    stats uri /haproxy_stats
    stats auth admin:admin
    stats admin if TRUE

frontend kube-apiserver-https
   mode tcp
   bind :8443
   default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server k8s-master1 192.168.1.101:6443 check
    server k8s-master2 192.168.1.102:6443 check
    server k8s-master3 192.168.1.103:6443 check
EOF
systemctl start haproxy
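To confirm HAProxy is listening on the API frontend and stats ports, something like the following works; with the configuration above, the stats page is served at http://&lt;master-ip&gt;:9090/haproxy_stats (admin/admin):

ss -lnt | grep -E ':8443|:9090'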

5. Initialize the cluster (first master)
Run the following on the first master. 10.244.0.0/16 is flannel's CIDR; if you use a different CNI, change it to the corresponding CIDR.

cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP1_IP:2379"
      advertise-client-urls: "https://$CP1_IP:2379"
      listen-peer-urls: "https://$CP1_IP:2380"
      initial-advertise-peer-urls: "https://$CP1_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380"
      name: $CP1_HOSTNAME
    serverCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
    peerCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
networking:
    podSubnet: "10.244.0.0/16"
EOF

Initialize the first master. When initialization finishes, record the generated kubeadm join command (it contains the token); it is needed later when the worker nodes join.

kubeadm init --config ~/kubeadm-config.yaml
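If the join command is not recorded, it can be regenerated on this master later (the same command mentioned in the single-master guide further below):

kubeadm token create --print-join-command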

Copy the relevant certificate files to the other master nodes:

CONTROL_PLANE_HOSTS="$CP2_HOSTNAME $CP3_HOSTNAME"
for host in $CONTROL_PLANE_HOSTS; do
    scp /etc/kubernetes/pki/ca.crt $host:
    scp /etc/kubernetes/pki/ca.key $host:
    scp /etc/kubernetes/pki/sa.key $host:
    scp /etc/kubernetes/pki/sa.pub $host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt $host:
    scp /etc/kubernetes/pki/front-proxy-ca.key $host:
    scp /etc/kubernetes/pki/etcd/ca.crt $host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key $host:etcd-ca.key
    scp /etc/kubernetes/admin.conf $host:
done

6. Join the cluster (second master)
Run the following on the second master:

cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
    controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP2_IP:2379"
      advertise-client-urls: "https://$CP2_IP:2379"
      listen-peer-urls: "https://$CP2_IP:2380"
      initial-advertise-peer-urls: "https://$CP2_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380"
      initial-cluster-state: existing
      name: $CP2_HOSTNAME
    serverCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
    peerCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
networking:
    podSubnet: "10.244.0.0/16"
EOF

Move the certificates into place:

mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/
mv ~/ca.key /etc/kubernetes/pki/
mv ~/sa.pub /etc/kubernetes/pki/
mv ~/sa.key /etc/kubernetes/pki/
mv ~/front-proxy-ca.crt /etc/kubernetes/pki/
mv ~/front-proxy-ca.key /etc/kubernetes/pki/
mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ~/admin.conf /etc/kubernetes/admin.conf

Configure and start kubelet:

kubeadm alpha phase certs all --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config ~/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config ~/kubeadm-config.yaml
systemctl start kubelet

Join the etcd cluster:

kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://${CP1_IP}:2379 \
member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
kubeadm alpha phase etcd local --config ~/kubeadm-config.yaml

Configure the node as a master:

kubeadm alpha phase kubeconfig all --config ~/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config ~/kubeadm-config.yaml
kubeadm alpha phase mark-master --config ~/kubeadm-config.yaml

7. Join the cluster (third master)
Run the following on the third master:

cat << EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- "$LOAD_BALANCER_DNS"
api:
    controlPlaneEndpoint: "$LOAD_BALANCER_DNS:$LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP3_IP:2379"
      advertise-client-urls: "https://$CP3_IP:2379"
      listen-peer-urls: "https://$CP3_IP:2380"
      initial-advertise-peer-urls: "https://$CP3_IP:2380"
      initial-cluster: "$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380,$CP3_HOSTNAME=https://$CP3_IP:2380"
      initial-cluster-state: existing
      name: $CP3_HOSTNAME
    serverCertSANs:
      - $CP3_HOSTNAME
      - $CP3_IP
    peerCertSANs:
      - $CP3_HOSTNAME
      - $CP3_IP
networking:
    podSubnet: "10.244.0.0/16"
EOF

Move the certificates into place:

mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/
mv ~/ca.key /etc/kubernetes/pki/
mv ~/sa.pub /etc/kubernetes/pki/
mv ~/sa.key /etc/kubernetes/pki/
mv ~/front-proxy-ca.crt /etc/kubernetes/pki/
mv ~/front-proxy-ca.key /etc/kubernetes/pki/
mv ~/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ~/admin.conf /etc/kubernetes/admin.conf

Configure and start kubelet:

kubeadm alpha phase certs all --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config ~/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config ~/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config ~/kubeadm-config.yaml
systemctl start kubelet

Join the etcd cluster:

kubectl exec -n kube-system etcd-${CP1_HOSTNAME} -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://${CP1_IP}:2379 \
member add ${CP3_HOSTNAME} https://${CP3_IP}:2380
kubeadm alpha phase etcd local --config ~/kubeadm-config.yaml

Configure the node as a master:

kubeadm alpha phase kubeconfig all --config ~/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config ~/kubeadm-config.yaml
kubeadm alpha phase mark-master --config ~/kubeadm-config.yaml

8. Install the network plugin (flannel in this example)

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

9. Join the worker nodes to the cluster
Pull the Kubernetes images:

docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh

Install kubelet and kubeadm:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1
systemctl enable kubelet

Join the worker to the cluster using the command generated when the first master was initialized, for example:

kubeadm join k8s.test.local:8443 --token bqnani.kwxe3y34vy22xnhm --discovery-token-ca-cert-hash sha256:b6146fea7a63d3a66e406c12f55f8d99537db99880409939e4aba206300e06cc

10. Verify the cluster
Check the etcd cluster health:

docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd-amd64:3.2.18 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://$CP1_IP:2379 cluster-health

If everything is healthy, the output should look similar to:

member 4fea45cc5063c213 is healthy: got healthy result from https://192.168.1.101:2379
member 963074f50ce23d9a is healthy: got healthy result from https://192.168.1.102:2379
member 9a186be7d1ea4bbe is healthy: got healthy result from https://192.168.1.103:2379

Check the status of the cluster nodes:

kubectl get nodes

If everything is healthy, the output should look similar to:

NAME                     STATUS    ROLES     AGE       VERSION
k8s-master1.test.local   Ready     master    1d        v1.11.1
k8s-master2.test.local   Ready     master    1d        v1.11.1
k8s-master3.test.local   Ready     master    1d        v1.11.1
k8s-worker1.test.local   Ready     <none>    1d        v1.11.1
k8s-worker2.test.local   Ready     <none>    1d        v1.11.1
k8s-worker3.test.local   Ready     <none>    1d        v1.11.1

Check the pods in the kube-system namespace:

kubectl get pods -n kube-system

If everything is healthy, the output should look similar to the following: there should be three each of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, two coredns pods by default, and as many kube-proxy and kube-flannel pods as there are nodes (six in this example).

NAMESPACE       NAME                                            READY     STATUS    RESTARTS   AGE
kube-system     coredns-78fcdf6894-j6cpl                        1/1       Running   0          1d
kube-system     coredns-78fcdf6894-kgqp7                        1/1       Running   0          1d
kube-system     etcd-k8s-master1.test.local                     1/1       Running   0          1d
kube-system     etcd-k8s-master2.test.local                     1/1       Running   0          1d
kube-system     etcd-k8s-master3.test.local                     1/1       Running   0          1d
kube-system     kube-apiserver-k8s-master1.test.local           1/1       Running   0          1d
kube-system     kube-apiserver-k8s-master2.test.local           1/1       Running   0          1d
kube-system     kube-apiserver-k8s-master3.test.local           1/1       Running   0          1d
kube-system     kube-controller-manager-k8s-master1.test.local  1/1       Running   0          1d
kube-system     kube-controller-manager-k8s-master2.test.local  1/1       Running   0          1d
kube-system     kube-controller-manager-k8s-master3.test.local  1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-2r7jp                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-d5vlw                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-qd5x6                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-wzl26                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-xklr6                     1/1       Running   0          1d
kube-system     kube-flannel-ds-amd64-4jr5v                     1/1       Running   0          1d
kube-system     kube-proxy-8gmdd                                1/1       Running   0          1d
kube-system     kube-proxy-8rs8m                                1/1       Running   0          1d
kube-system     kube-proxy-pm6tq                                1/1       Running   0          1d
kube-system     kube-proxy-shsjv                                1/1       Running   0          1d
kube-system     kube-proxy-vj5gk                                1/1       Running   0          1d
kube-system     kube-proxy-wd8xj                                1/1       Running   0          1d
kube-system     kube-scheduler-k8s-master1.test.local           1/1       Running   0          1d
kube-system     kube-scheduler-k8s-master2.test.local           1/1       Running   0          1d
kube-system     kube-scheduler-k8s-master3.test.local           1/1       Running   0          1d

Deploying a 3-Node etcd Cluster with Docker

Set the following variables on every node, adjusting the hostnames and IPs to your environment.
To expose the etcd API to clients outside the Docker host, you would normally have to look up the container IP with docker inspect and configure it; using --net=host host networking here simplifies that step.

ETCD_VERSION=latest
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node1
NAME_2=etcd-node2
NAME_3=etcd-node3
HOST_1=192.168.1.101
HOST_2=192.168.1.102
HOST_3=192.168.1.103
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/var/lib/etcd

Run the following on node 1:

THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
docker run -d \
  --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Run the following on node 2:

THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
docker run -d \
  --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Run the following on node 3:

THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
docker run -d \
  --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Once all three nodes are running, list the cluster members:

docker exec etcd /usr/local/bin/etcdctl --endpoints=http://${HOST_1}:2379 member list
20c4dbd9ca01c9fc: name=etcd-node3 peerURLs=http://192.168.1.103:2380 clientURLs=http://192.168.1.103:2379 isLeader=false
52b6c5eaedead574: name=etcd-node2 peerURLs=http://192.168.1.102:2380 clientURLs=http://192.168.1.102:2379 isLeader=false
7623946005cf410f: name=etcd-node1 peerURLs=http://192.168.1.101:2380 clientURLs=http://192.168.1.101:2379 isLeader=true
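You can also check the overall health of the cluster from any node; this assumes the v2 etcdctl API, which these images use by default:

docker exec etcd /usr/local/bin/etcdctl --endpoints=http://${HOST_1}:2379 cluster-health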

Changing Docker's LVM Mode from loop-lvm to direct-lvm

After installation, Docker by default creates two files, data and metadata, under /var/lib/docker/devicemapper/devicemapper to store its data. This default loop-lvm mode is not suitable for production use, and Docker prints the warning below; production systems should switch to direct-lvm.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

The following describes two ways to change Docker's devicemapper configuration.

Method 1: Let Docker configure direct-lvm mode
Edit /etc/docker/daemon.json (create the file if it does not exist). The available options are:
dm.directlvm_device: path to the block device to use (required)
dm.thinp_percent: percentage of the device to use for data, default 95
dm.thinp_metapercent: percentage of the device to use for metadata, default 1
dm.thinp_autoextend_threshold: usage threshold at which to auto-extend, default 80
dm.thinp_autoextend_percent: amount to auto-extend by, default 20
dm.directlvm_device_force: force-format the device even if a filesystem already exists, default false
An example:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}

Restart Docker for the change to take effect:

sudo systemctl restart docker

Method 2: Configure direct-lvm mode manually
Stop the Docker service:

sudo systemctl stop docker

Install the required packages:

sudo yum install -y device-mapper-persistent-data lvm2

Create the physical volume; replace /dev/sdb in the example with your own device:

sudo pvcreate /dev/sdb

Create a volume group for Docker:

sudo vgcreate vgdocker /dev/sdb

Create two logical volumes, one for data and one for metadata. The -l percentage arguments set how much of the volume group the thin pool is allowed to grow to when auto-extending:

sudo lvcreate --wipesignatures y -n thinpool vgdocker -l 95%VG
sudo lvcreate --wipesignatures y -n thinpoolmeta vgdocker -l 1%VG

Convert the logical volumes into a thin pool:

sudo lvconvert -y --zero n -c 512K \
--thinpool vgdocker/thinpool \
--poolmetadata vgdocker/thinpoolmeta

Configure auto-extension: thin_pool_autoextend_threshold is the usage percentage that triggers an extension, and thin_pool_autoextend_percent is how much to extend by each time:

cat <<EOF | sudo tee /etc/lvm/profile/vgdocker-thinpool.profile
activation {
  thin_pool_autoextend_threshold=80
  thin_pool_autoextend_percent=20
}
EOF

Apply the LVM profile:

sudo lvchange --metadataprofile vgdocker-thinpool vgdocker/thinpool

Verify that monitoring of the logical volume is enabled, which is required for auto-extension to work:

sudo lvs -o+seg_monitor

Move the old Docker data aside so it can be restored if needed:

mkdir /var/lib/docker.bak
mv /var/lib/docker/* /var/lib/docker.bak

Edit /etc/docker/daemon.json (create the file if it does not exist):

{
    "storage-driver": "devicemapper",
    "storage-opts": [
    "dm.thinpooldev=/dev/mapper/vgdocker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
    ]
}

Start the Docker service:

sudo systemctl start docker

Confirm that the thin pool is in use:

docker info | grep Pool
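With the daemon.json above, the output should include the thin pool name, something similar to:

 Pool Name:  vgdocker-thinpool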

Once everything checks out, delete the old Docker data:

rm -rf /var/lib/docker.bak

Extending the logical volume
Extend the volume group; /dev/sdc is the newly added physical volume:

sudo vgextend vgdocker /dev/sdc

Extend the logical volume, then confirm the new size with docker info:

sudo lvextend -l+100%FREE -n vgdocker/thinpool
docker info

Reactivating after a reboot
If Docker fails to start after a system reboot, you may need to reactivate the logical volume with:

sudo lvchange -ay vgdocker/thinpool

Commands to Run Before Reinstalling Kubernetes

Run the following to clean up the Kubernetes configuration, bridge interfaces, and related state:

kubeadm reset -f
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker

Deploying Single-Master Kubernetes v1.11.1 with kubeadm

Last time, installing Kubernetes v1.11.0 ran into problems and kubeadm init kept failing. Now that v1.11.1 has been released, installation works again. The procedure is essentially the same as for v1.10.5; see the earlier article on installing Kubernetes v1.10.5.
1. Host configuration
The steps are condensed into one block: disable the firewall, swap, and SELinux, set the kernel parameters, and enable bash completion.

sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo setenforce 0
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
yum install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc

2. Pull the Kubernetes images
Pull the images from the mirrorgooglecontainers mirror (CoreDNS is now used by default, replacing kube-dns):

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker pull quay.io/coreos/flannel:v0.10.0-amd64

Retag the images with their k8s.gcr.io names:

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

Remove the now-unneeded image tags:

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.1.3

Install and configure kubelet:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1
systemctl enable kubelet
systemctl start kubelet

3. Initialize the cluster
Specify the Kubernetes version explicitly (otherwise kubeadm tries to look up the latest version online, which is blocked by the firewall). token-ttl defaults to 24 hours; setting it to 0 makes the token permanent. Set pod-network-cidr in preparation for flannel.

kubeadm init --kubernetes-version v1.11.1 --token-ttl 0 \
--pod-network-cidr 10.244.0.0/16

Copy the admin kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network plugin:

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

4. Verify that Kubernetes is running

kubectl get pods --all-namespaces

The deployment is complete once all pods are running:

NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-j2xwq                  1/1       Running   0          1m
kube-system   coredns-78fcdf6894-sn28d                  1/1       Running   0          2m
kube-system   etcd-k8s1.test.local                      1/1       Running   0          1m
kube-system   kube-apiserver-k8s1.test.local            1/1       Running   0          1m
kube-system   kube-controller-manager-k8s1.test.local   1/1       Running   0          1m
kube-system   kube-flannel-ds-amd64-zkgkb               1/1       Running   0          1m
kube-system   kube-proxy-7r8zc                          1/1       Running   0          2m
kube-system   kube-scheduler-k8s1.test.local            1/1       Running   0          1m

Deploying Single-Master Kubernetes v1.10.5 with kubeadm

Environment: CentOS 7.5, Docker CE 17.03
(I originally planned to cover the v1.11.0 installation, but ran into problems and fell back to v1.10.5.)

1. Host configuration
Disable the firewall:

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Disable swap:

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux:

sudo setenforce 0

Set kernel parameters:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Add host entries (not needed if DNS records exist):

cat >> /etc/hosts << EOF
192.168.1.101	k8s1 k8s1.test.local
192.168.1.102	k8s2 k8s2.test.local
192.168.1.103	k8s3 k8s3.test.local
EOF

2. Pull the Kubernetes images
Pull the images from the mirrorgooglecontainers mirror:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.10.5
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.10.5
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.10.5
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.10.5
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.8
docker pull mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.8
docker pull quay.io/coreos/etcd:v3.1.12
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker pull coredns/coredns:1.0.6

Retag the images with their k8s.gcr.io names:

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.10.5 k8s.gcr.io/kube-apiserver-amd64:v1.10.5
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.10.5 k8s.gcr.io/kube-scheduler-amd64:v1.10.5
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.10.5 k8s.gcr.io/kube-proxy-amd64:v1.10.5
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.10.5 k8s.gcr.io/kube-controller-manager-amd64:v1.10.5
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag quay.io/coreos/etcd:v3.1.12 k8s.gcr.io/etcd-amd64:3.1.12

Remove the now-unneeded image tags:

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh

Install and configure kubelet:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.10.5 kubeadm-1.10.5 kubectl-1.10.5
systemctl enable kubelet
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl start kubelet

Install bash completion:

yum install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc

3. Initialize the cluster

kubeadm init --kubernetes-version v1.10.5 --token-ttl 0 --pod-network-cidr 10.244.0.0/16

Specify the Kubernetes version explicitly (otherwise kubeadm tries to look up the latest version online, which is blocked by the firewall).
token-ttl defaults to 24 hours; setting it to 0 makes the token permanent.
Set pod-network-cidr in preparation for flannel.

You can also use CoreDNS in place of kube-dns when initializing the cluster:

kubeadm init --kubernetes-version v1.10.5 --token-ttl 0 \
--pod-network-cidr 10.244.0.0/16 --feature-gates CoreDNS=true

Copy the admin kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are root, you can instead use:

export KUBECONFIG=/etc/kubernetes/admin.conf

Install the flannel network plugin:

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

4. Verify that Kubernetes is running

kubectl get pods --all-namespaces

The deployment is complete once all pods are running:

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s1                               1/1       Running   1          6m
kube-system   kube-apiserver-k8s1                     1/1       Running   1          6m
kube-system   kube-controller-manager-k8s1            1/1       Running   1          6m
kube-system   kube-dns-86f4d74b45-lmcqv               3/3       Running   3          6m
kube-system   kube-flannel-ds-amd64-g6g66             1/1       Running   1          6m
kube-system   kube-proxy-rqnhh                        1/1       Running   1          6m
kube-system   kube-scheduler-k8s1                     1/1       Running   1          6m

5. Join worker nodes
Worker node configuration is largely the same as for the master (see sections 1 and 2), but fewer Docker images are needed: only kube-proxy-amd64:v1.10.5, pause-amd64:3.1, and flannel:v0.10.0-amd64 (if flannel is used).

docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.10.5
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.10.5 k8s.gcr.io/kube-proxy-amd64:v1.10.5
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh

Then join the cluster with kubeadm join. The command is printed at the end of kubeadm init on the master; if you did not record it, regenerate it with:

kubeadm token create --print-join-command

Then confirm the nodes have joined with kubectl get nodes:

NAME              STATUS    ROLES     AGE       VERSION
k8s1.test.local   Ready     master    5m        v1.10.5
k8s2.test.local   Ready     <none>    3m        v1.10.5
k8s3.test.local   Ready     <none>    3m        v1.10.5

Installing the 4.17 Kernel and Enabling BBR on CentOS 7

Installing the 4.17 kernel:

First import the ELRepo repository:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Install the kernel-ml (4.17) package:

sudo yum --enablerepo=elrepo-kernel install -y kernel-ml

Verify the installation:

rpm -qa | grep kernel-ml

List all entries in the current GRUB2 boot menu:

awk -F\' '$1=="menuentry " {print i++ ":" $2}' /etc/grub2.cfg


Installing and Configuring DRBD9 on CentOS 7

Installation:
First import the ELRepo repository:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Install the DRBD packages:

yum install -y drbd90-utils kmod-drbd90

Load the DRBD kernel module and configure it to load at boot:

modprobe drbd
echo drbd > /etc/modules-load.d/drbd.conf
