Deploying a Single-Master Kubernetes v1.11.1 Cluster with kubeadm

Last time, installing Kubernetes v1.11.0 ran into problems: kubeadm init would never complete. Now that v1.11.1 has been released I tried again, and the installation works. The process is essentially the same as for v1.10.5, so the earlier article on installing Kubernetes v1.10.5 (below) remains a useful reference.
1. Host configuration
The steps are combined here for brevity: disable firewalld, swap, and SELinux; set kernel parameters; enable bash completion.

sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo setenforce 0
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
yum install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc

2. Download the Kubernetes images
Pull the images from the mirrorgooglecontainers mirror (CoreDNS is now GA and replaces the former kube-dns component):

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker pull quay.io/coreos/flannel:v0.10.0-amd64

Retag the images with their k8s.gcr.io names

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

Remove the now-unneeded image tags

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.1.3

Install and configure kubelet

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1
systemctl enable kubelet
systemctl start kubelet

3. Initialize the k8s cluster
Specify --kubernetes-version explicitly, since the automatic version lookup is blocked by the firewall. --token-ttl defaults to 24 hours; setting it to 0 makes the token permanent. Set --pod-network-cidr in preparation for flannel.

kubeadm init --kubernetes-version v1.11.1 --token-ttl 0 \
--pod-network-cidr 10.244.0.0/16

Copy the admin kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network plugin

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

4. Verify that Kubernetes is running

kubectl get pods --all-namespaces

Deployment is complete once all pods are Running:

NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-j2xwq                  1/1       Running   0          1m
kube-system   coredns-78fcdf6894-sn28d                  1/1       Running   0          2m
kube-system   etcd-k8s1.test.local                      1/1       Running   0          1m
kube-system   kube-apiserver-k8s1.test.local            1/1       Running   0          1m
kube-system   kube-controller-manager-k8s1.test.local   1/1       Running   0          1m
kube-system   kube-flannel-ds-amd64-zkgkb               1/1       Running   0          1m
kube-system   kube-proxy-7r8zc                          1/1       Running   0          2m
kube-system   kube-scheduler-k8s1.test.local            1/1       Running   0          1m

Deploying a Single-Master Kubernetes v1.10.5 Cluster with kubeadm

Environment: CentOS 7.5, Docker CE 17.03
(I had intended to cover v1.11.0, but ran into problems and fell back to v1.10.5.)

1. Host configuration
Disable the firewall

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux

sudo setenforce 0

Set kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Add hosts entries (not needed if DNS records already exist)

cat >> /etc/hosts << EOF
192.168.1.101	k8s1 k8s1.test.local
192.168.1.102	k8s2 k8s2.test.local
192.168.1.103	k8s3 k8s3.test.local
EOF

2. Download the Kubernetes images
Pull the images from the mirrorgooglecontainers mirror

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.10.5
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.10.5
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.10.5
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.10.5
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.8
docker pull mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.8
docker pull quay.io/coreos/etcd:v3.1.12
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker pull coredns/coredns:1.0.6

Retag the images with their k8s.gcr.io names

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.10.5 k8s.gcr.io/kube-apiserver-amd64:v1.10.5
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.10.5 k8s.gcr.io/kube-scheduler-amd64:v1.10.5
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.10.5 k8s.gcr.io/kube-proxy-amd64:v1.10.5
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.10.5 k8s.gcr.io/kube-controller-manager-amd64:v1.10.5
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag quay.io/coreos/etcd:v3.1.12 k8s.gcr.io/etcd-amd64:3.1.12

Remove the now-unneeded image tags

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh

Install and configure kubelet

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.10.5 kubeadm-1.10.5 kubectl-1.10.5
systemctl enable kubelet
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl start kubelet

Install bash completion

yum install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc

3. Initialize the k8s cluster

kubeadm init --kubernetes-version v1.10.5 --token-ttl 0 --pod-network-cidr 10.244.0.0/16

Specify --kubernetes-version explicitly, since the automatic version lookup is blocked by the firewall.
--token-ttl defaults to 24 hours; setting it to 0 makes the token permanent.
Set --pod-network-cidr in preparation for flannel.

CoreDNS can be enabled in place of kube-dns when initializing the cluster:

kubeadm init --kubernetes-version v1.10.5 --token-ttl 0 \
--pod-network-cidr 10.244.0.0/16 --feature-gates CoreDNS=true

Copy the admin kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are root, you can instead simply use:

export KUBECONFIG=/etc/kubernetes/admin.conf

Install the flannel network plugin

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

4. Verify that Kubernetes is running

kubectl get pods --all-namespaces

Deployment is complete once all pods are Running:

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s1                               1/1       Running   1          6m
kube-system   kube-apiserver-k8s1                     1/1       Running   1          6m
kube-system   kube-controller-manager-k8s1            1/1       Running   1          6m
kube-system   kube-dns-86f4d74b45-lmcqv               3/3       Running   3          6m
kube-system   kube-flannel-ds-amd64-g6g66             1/1       Running   1          6m
kube-system   kube-proxy-rqnhh                        1/1       Running   1          6m
kube-system   kube-scheduler-k8s1                     1/1       Running   1          6m

5. Join worker nodes
Node configuration is largely the same as the master's (see sections 1 and 2); nodes just need fewer Docker images: only kube-proxy-amd64:v1.10.5, pause-amd64:3.1, and flannel:v0.10.0-amd64 (if used).

docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.10.5
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.10.5 k8s.gcr.io/kube-proxy-amd64:v1.10.5
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh

Then join the cluster with kubeadm join. The exact command is printed at the end of kubeadm init on the master; if you did not record it, regenerate it with:

kubeadm token create --print-join-command
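
For reference, the printed command has roughly the following shape; the address, token, and hash below are placeholders, not values from a real cluster:

```shell
# Run on each node to be joined; substitute the real values printed by
# kubeadm init (or by kubeadm token create --print-join-command) on the master
kubeadm join 192.168.1.101:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```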

Then confirm the nodes have joined with kubectl get node:

NAME              STATUS    ROLES     AGE       VERSION
k8s1.test.local   Ready     master    5m        v1.10.5
k8s2.test.local   Ready     <none>    3m        v1.10.5
k8s3.test.local   Ready     <none>    3m        v1.10.5

Installing Kernel 4.17 and Enabling BBR on CentOS 7

Install the 4.17 kernel:

First, import the elrepo repository

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Install the 4.17 kernel

sudo yum --enablerepo=elrepo-kernel install -y kernel-ml

Verify the installation

rpm -qa | grep kernel-ml

List all entries in the current grub2 boot menu

awk -F\' '$1=="menuentry " {print i++ ":" $2}' /etc/grub2.cfg
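
The original excerpt stops here. To actually enable BBR, a common way to finish (a sketch, not taken from the original article) is to make the new kernel the default boot entry and then switch the TCP congestion control via sysctl:

```shell
# Boot the newly installed kernel by default (entry 0 from the menu listed above)
grub2-set-default 0
reboot

# After rebooting into 4.17, enable the fq qdisc and BBR congestion control
cat <<EOF > /etc/sysctl.d/bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sysctl --system

# Verify: should print "bbr"
sysctl -n net.ipv4.tcp_congestion_control
```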


Installing and Configuring DRBD9 on CentOS 7

Installation:
First, import the elrepo repository

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Install the DRBD packages

yum install -y drbd90-utils kmod-drbd90

Load the DRBD kernel module

modprobe drbd
echo drbd > /etc/modules-load.d/drbd.conf
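
The original excerpt ends here. The next step would be defining a resource. As an illustrative sketch only (the hostnames, backing disk, and addresses below are made-up placeholders and must match your actual nodes), a minimal two-node resource file and bring-up looks like:

```shell
# /etc/drbd.d/r0.res -- hypothetical example; node1/node2 must be the real hostnames,
# /dev/sdb1 a real backing partition on each node
cat <<EOF > /etc/drbd.d/r0.res
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on node1 {
    address 192.168.1.11:7789;
  }
  on node2 {
    address 192.168.1.12:7789;
  }
}
EOF
# Initialize the metadata and bring the resource up (run on both nodes)
drbdadm create-md r0
drbdadm up r0
```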


Installing and Configuring VMware Photon OS

VMware Photon OS is a container-host operating system built by VMware. Because Photon OS can be deployed directly on the vSphere platform, it is able to drop a large number of unnecessary hardware drivers, making the system very lean and efficient. Below is a brief look at installing and configuring Photon OS on vSphere.

The Photon OS download page is https://github.com/vmware/photon/wiki/Downloading-Photon-OS; the current version is 2.0.
Several media types are available: the Full ISO contains all packages and supports a complete installation; the OVA with virtual hardware v11 targets vSphere 6.0; the OVA with virtual hardware v13 targets vSphere 6.5 and 6.7; the remaining images (omitted here) target VMware Workstation, VMware Fusion, Amazon AWS, Microsoft Azure, Google Compute Engine, and other workstation and cloud platforms.

CentOS Atomic Downloads

CentOS Atomic Host is a lightweight operating system designed for running Docker containers. It is built from standard CentOS 7 components and tracks the component versions of Red Hat Enterprise Linux Atomic Host.
See the Atomic project homepage for more: http://www.projectatomic.io/
The download links below come from the CentOS website: https://wiki.centos.org/SpecialInterestGroup/Atomic/Download
Uncompressed qcow2 image download link
xz-compressed qcow2 image download link
gz-compressed qcow2 image download link
ISO image download link

Offline Deployment of OpenShift Origin 3.9

OpenShift Origin is an open-source container cloud platform; its commercial counterpart is Red Hat OpenShift. OpenShift uses Docker as the container runtime and Kubernetes for container orchestration, plus a set of automation tools that together make up the platform.

Prerequisite for installing OpenShift: Docker
Modify the Docker configuration file

cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["172.30.0.0/16"]
}
EOF
systemctl restart docker

Download the latest release from GitHub: https://github.com/openshift/origin/releases
This article is based on version 3.9, so download the client: openshift-origin-client-tools-v3.9.0-191fece-linux-64bit.tar.gz
Upload the file to the server, then extract it:

tar -xvzf openshift-origin-client-tools-v3.9.0-191fece-linux-64bit.tar.gz
cp openshift-origin-client-tools-v3.9.0-191fece-linux-64bit/oc /usr/local/bin

Run the startup command (192.168.1.41 is the server's IP); the required images are downloaded automatically on startup

oc cluster up --public-hostname=192.168.1.41

Once startup completes, browse to https://192.168.1.41:8443 to access the system; the default username and password are both dev

Deploying a Single-Node etcd with Docker

For systems outside Docker to reach the etcd service, you need to obtain the container's IP with docker inspect, or alternatively pass --net=host so the container uses the host's network.

export NODE1=192.168.1.21
export DATA_DIR=/var/lib/etcd-data    # host directory for etcd data (any writable path)
# Listen on 0.0.0.0 inside the container so the published ports work,
# while advertising the host IP to clients and peers
docker run -d \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name node1 \
  --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster node1=http://${NODE1}:2380
etcdctl --endpoints=http://${NODE1}:2379 member list
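
The --net=host alternative mentioned above can be sketched as follows: no port publishing is needed, and etcd can bind the host IP directly (the data directory path is an assumption):

```shell
export NODE1=192.168.1.21
export DATA_DIR=/var/lib/etcd-data   # assumed host path for etcd data
# The container shares the host's network, so -p mappings are unnecessary
docker run -d --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name node1 \
  --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
  --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
  --initial-cluster node1=http://${NODE1}:2380
```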

Running a Container as a systemd Service

Create /etc/systemd/system/myapp.service with the following content

[Unit]
Description=MyApp
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "trap 'exit 0' INT TERM; while true; do echo Hello World; sleep 1; done"

[Install]
WantedBy=multi-user.target

Enable the service to start automatically

sudo systemctl enable myapp.service
sudo systemctl start myapp.service

Check the service's runtime status

journalctl -f -u myapp.service

Installing CoreOS from the ISO

1. Download the latest CoreOS ISO from the official site
https://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso

2. Modify the SSH configuration
Boot the Live CD from the ISO, then replace the sshd_config file
cd /etc/ssh
sudo mv sshd_config{,.bak}
sudo cp /usr/share/ssh/sshd_config .
sudo vi sshd_config
Add a line: PermitRootLogin yes
sudo systemctl restart sshd
sudo passwd root

3. Copy ignition.json to the server over SSH
The contents of ignition.json are as follows

{
  "ignition": {
    "config": {},
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq......."
        ]
      }
    ]
  },
  "storage": {},
  "systemd": {}
}

4. Install CoreOS to disk
sudo coreos-install -d /dev/sda -C stable -i ~/ignition.json