Author archive: daban

VMware NSX Application Platform Deployment (unfinished)

Starting with VMware NSX Data Center 3.2, some components are deployed as containers; this article walks through that deployment. We will build on Tanzu Community Edition (TCE); if you do not have TCE yet, see this earlier post:
Deploying VMware Tanzu Community Edition on VMware vSphere

Reading the NSX Application Platform (NAPP) installation guide for the first time can feel overwhelming: the architecture is complex, pulling in both Tanzu and AVI, as the typical layout in the figure below shows. To get NSX's add-on features running as simply as possible, this article deploys NAPP with the most minimal architecture.

Continue reading

Creating a Highly Available Kubernetes Cluster with kube-vip

I previously wrote about building a highly available Kubernetes cluster with keepalived; now that kube-vip is available, here is the updated installation procedure:

The environment used is as follows:
Ubuntu Server 20.04 LTS (I switched to Debian/Ubuntu after CentOS became CentOS Stream)
containerd 1.5.5 (dockershim is removed in Kubernetes 1.24, so containerd is used instead of Docker)
Kubernetes v1.23.5
kube-vip v0.4.3 (L2 ARP mode is used here for simplicity; see the manifest sketch below)
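
The kube-vip static pod manifest can be generated from the kube-vip image itself using ctr; a minimal sketch, assuming a virtual IP of 192.168.1.100 on interface ens160 (both are placeholder values, adjust to your environment):

export VIP=192.168.1.100        # control-plane virtual IP (example value)
export INTERFACE=ens160         # NIC that will answer ARP for the VIP (example value)
export KVVERSION=v0.4.3
ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION
ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip \
  /kube-vip manifest pod --interface $INTERFACE --address $VIP \
  --controlplane --services --arp --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml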

Continue reading

Installing MicroK8s on Ubuntu

What is MicroK8s?

MicroK8s is a powerful, lightweight, reliable production-grade Kubernetes derivative. It is an enterprise-grade Kubernetes distribution with a small disk and memory footprint, and it ships with production-grade add-ons out of the box, such as Istio, Knative, Grafana, and Cilium. MicroK8s is billed as the smallest and fastest multi-node Kubernetes.
It is well suited to small VPS instances and IoT devices such as the Raspberry Pi; it is not recommended as an environment for learning Kubernetes itself.
Official site: https://microk8s.io
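
For reference, a typical installation on Ubuntu is a single snap command; a minimal sketch (the add-on selection below is just an example):

sudo snap install microk8s --classic      # install MicroK8s from the Snap store
sudo microk8s status --wait-ready         # block until the node reports ready
sudo microk8s enable dns dashboard        # enable add-ons (example selection)
sudo microk8s kubectl get nodes           # MicroK8s bundles its own kubectl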

Continue reading

Installing Zabbix 5.0

OS: CentOS 8.1.1911
DB: MariaDB 10
Web: Nginx

Disable the firewall and SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld
reboot

List the available MariaDB module streams

dnf module list mariadb

The output lists the available MariaDB versions; at the time of writing this is 10.3:

CentOS-8 - AppStream
Name                Stream               Profiles                               Summary                  
mariadb             10.3 [d]             client, server [d], galera             MariaDB Module           

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

Install MariaDB

sudo dnf install -y mariadb-server

Start and enable MariaDB

sudo systemctl enable --now mariadb

Run the initial MariaDB setup (secure the installation)

mysql_secure_installation

Configure the Zabbix package repository

sudo rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/8/x86_64/zabbix-release-5.0-1.el8.noarch.rpm
sed -i 's#http://repo.zabbix.com#https://mirrors.aliyun.com/zabbix#' /etc/yum.repos.d/zabbix.repo
sudo dnf clean all

Install the Zabbix components

sudo dnf install -y zabbix-server-mysql zabbix-web-mysql zabbix-nginx-conf zabbix-agent

Create the Zabbix database and user (you will be prompted for the MySQL root password)

mysql -uroot -p
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> create user zabbix@localhost identified by 'zabbix';
mysql> grant all privileges on zabbix.* to zabbix@localhost;
mysql> quit;

Import the initial schema

zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -pzabbix zabbix

Edit /etc/zabbix/zabbix_server.conf and set the database password

DBPassword=zabbix
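
This can also be done non-interactively; a one-liner sketch, assuming the stock config still contains the commented-out "# DBPassword=" line:

sudo sed -i 's/^# DBPassword=.*/DBPassword=zabbix/' /etc/zabbix/zabbix_server.conf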

Configure the web frontend: edit /etc/nginx/conf.d/zabbix.conf and uncomment the listen and server_name lines

listen 80;
server_name example.com;

Set the PHP time zone in /etc/php-fpm.d/zabbix.conf

php_value[date.timezone] = Asia/Shanghai

Start and enable the services

systemctl enable zabbix-server zabbix-agent nginx php-fpm --now
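
As a quick sanity check (not part of the original steps), confirm the services are active and listening:

systemctl is-active zabbix-server zabbix-agent nginx php-fpm   # each should report "active"
ss -tlnp | grep -E ':(80|10051)\b'                             # nginx on 80, zabbix-server trapper on 10051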

Installing Ceph on CentOS (with ceph-deploy)

-------------------- Prerequisites --------------------
Install the EPEL repository and required helper packages

sudo yum install -y epel-release yum-plugin-priorities python2-pip

Configure the Ceph repository; mimic is the current Ceph release here, replace it with a newer release name if one is available

cat << EOF > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF

Update the system and install the ceph-deploy tool

sudo yum update -y
sudo yum install -y ceph-deploy

Install and enable the chrony time service

yum install chrony
systemctl enable chronyd
systemctl start chronyd

Create a ceph user, set its password, and grant passwordless sudo

sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph

Set up passwordless SSH login to each node

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
ssh-copy-id ceph@ceph-node1
ssh-copy-id ceph@ceph-node2
ssh-copy-id ceph@ceph-node3
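
To let ceph-deploy log in as the ceph user without passing --username each time, the admin node's ~/.ssh/config can be extended; a sketch for the three example nodes:

cat << EOF >> ~/.ssh/config
Host ceph-node1
    User ceph
Host ceph-node2
    User ceph
Host ceph-node3
    User ceph
EOF
chmod 600 ~/.ssh/config   # ssh refuses configs with loose permissions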

Open the firewall ports

On monitor nodes, allow the ceph-mon service:
sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
On OSD and MDS nodes, allow the ceph service:
sudo firewall-cmd --zone=public --add-service=ceph --permanent
Then reload the firewall configuration:
sudo firewall-cmd --reload

Disable SELinux

sudo setenforce 0
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config

-------------------- Deploying the Cluster --------------------
The nodes used in this example are:

+------------+---------------+---------+
|  Hostname  |   IP Address  |   Role  |
+------------+---------------+---------+
| ceph-node1 | 192.168.1.101 | mon,osd |
+------------+---------------+---------+
| ceph-node2 | 192.168.1.102 | osd     |
+------------+---------------+---------+
| ceph-node3 | 192.168.1.103 | osd     |
+------------+---------------+---------+

Create a working directory for ceph-deploy output

mkdir my-cluster && cd my-cluster

If something goes wrong partway through, the following commands wipe the configuration so you can start over

ceph-deploy purge ceph-node1 ceph-node2 ceph-node3
ceph-deploy purgedata ceph-node1 ceph-node2 ceph-node3
ceph-deploy forgetkeys
rm ceph.*

Create the cluster; this command generates the ceph.conf file

ceph-deploy new ceph-node1

If a node has multiple IP addresses, the public-facing network must be specified in ceph.conf.

Add a line like the following to the [global] section:
public network = 192.168.1.0/24

Install the Ceph packages

ceph-deploy install ceph-node1 ceph-node2 ceph-node3

Initialize the monitor and gather the keys

ceph-deploy mon create-initial

Copy the configuration file and keys to the other admin nodes; here they are copied to all nodes for convenience

ceph-deploy admin ceph-node1 ceph-node2 ceph-node3

Create a manager daemon
ceph-deploy mgr create ceph-node1
Add the OSDs. The command form is ceph-deploy osd create --data {device} {ceph-node}; in this example /dev/sdb is the disk available on each node.
If a host provides an LVM logical volume rather than a raw block device, use vg_name/lv_name instead (see the variant after the commands below).

ceph-deploy osd create --data /dev/sdb ceph-node1
ceph-deploy osd create --data /dev/sdb ceph-node2
ceph-deploy osd create --data /dev/sdb ceph-node3
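
If a node only offers an LVM logical volume, the same command takes vg/lv instead of a device path (data_vg/data_lv is a hypothetical volume name):

ceph-deploy osd create --data data_vg/data_lv ceph-node1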

Check the cluster health

ssh ceph-node1 sudo ceph health
ssh ceph-node1 sudo ceph -s

-------------------- Expanding the Cluster --------------------
In the initial deployment above we used node1 as the only monitor, so if node1 goes down the whole cluster becomes unavailable.
A highly available deployment therefore runs at least three monitors (an odd number is recommended so a quorum can always be formed), so the cluster layout is adjusted as follows:

+------------+---------------+-----------------+
|  Hostname  |   IP Address  |       Role      |
+------------+---------------+-----------------+
| ceph-node1 | 192.168.1.101 | mon,osd,mgr,mds |
+------------+---------------+-----------------+
| ceph-node2 | 192.168.1.102 | mon,osd         |
+------------+---------------+-----------------+
| ceph-node3 | 192.168.1.103 | mon,osd         |
+------------+---------------+-----------------+

To use CephFS, at least one metadata server is required; here node1 is set up as the metadata server

ceph-deploy mds create ceph-node1
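
Creating the CephFS filesystem itself is outside the steps above, but once the MDS is running it would look roughly like this (pool names and PG counts are example values):

ssh ceph-node1 sudo ceph osd pool create cephfs_data 64        # data pool (example PG count)
ssh ceph-node1 sudo ceph osd pool create cephfs_metadata 32    # metadata pool (example PG count)
ssh ceph-node1 sudo ceph fs new cephfs cephfs_metadata cephfs_data
ssh ceph-node1 sudo ceph fs status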

Add the additional monitors

ceph-deploy mon add ceph-node2 ceph-node3

Check the monitor quorum status

ceph quorum_status --format json-pretty

Add more Ceph managers. Managers work in an active/standby model; when the active one fails, a standby takes over

ceph-deploy mgr create ceph-node2 ceph-node3

Confirm the standby manager status

ssh ceph-node1 sudo ceph -s

Deploying the Kubernetes Community Ingress Controller

This NGINX Ingress Controller is the one maintained by the Kubernetes community (https://github.com/kubernetes/ingress-nginx); its configuration differs from the NGINX Inc. Ingress Controller (https://github.com/nginxinc/kubernetes-ingress) covered in an earlier post.

Installation is very simple; just run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

If you are not deploying on a cloud provider, expose the controller through a NodePort with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

After installation, watch the ingress controller pods come up with:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

The installed controller version can be checked with:

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

A typical Ingress manifest looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
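
Once such an Ingress is applied, the host-based routing can be spot-checked through the controller's NodePort; a test sketch (the node IP is a placeholder, and the service/namespace names assume the bare-metal manifest above):

NODE_IP=192.168.1.10   # any cluster node's IP (example value)
HTTP_PORT=$(kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')   # NodePort mapped to port 80
curl -H 'Host: foo.bar.com' http://$NODE_IP:$HTTP_PORT/      # should be routed to service s1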

Ingress configuration for the Dashboard; the k8s-dashboard-secret must be created first (an example follows the manifest)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"

spec:
  tls:
   - secretName: k8s-dashboard-secret
  rules:
   - http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
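
The k8s-dashboard-secret referenced above can be created from an existing certificate and key, for example (dashboard.crt and dashboard.key are assumed to be files you already have):

kubectl -n kube-system create secret tls k8s-dashboard-secret \
  --cert=dashboard.crt --key=dashboard.key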

Upgrading a Highly Available Kubernetes Cluster with kubeadm

This tutorial demonstrates upgrading a Kubernetes cluster with three masters from v1.11.1 to v1.12.1 using kubeadm.
Pull the new Kubernetes images through a mirror (skip this step if you can reach k8s.gcr.io directly); to upgrade to a later release, change the version numbers accordingly. Worker nodes only need kube-proxy.

export VERSION=v1.12.1
docker pull mirrorgooglecontainers/kube-apiserver:${VERSION}
docker pull mirrorgooglecontainers/kube-scheduler:${VERSION}
docker pull mirrorgooglecontainers/kube-proxy:${VERSION}
docker pull mirrorgooglecontainers/kube-controller-manager:${VERSION}
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.2

docker tag mirrorgooglecontainers/kube-apiserver:${VERSION} k8s.gcr.io/kube-apiserver:${VERSION}
docker tag mirrorgooglecontainers/kube-scheduler:${VERSION} k8s.gcr.io/kube-scheduler:${VERSION}
docker tag mirrorgooglecontainers/kube-proxy:${VERSION} k8s.gcr.io/kube-proxy:${VERSION}
docker tag mirrorgooglecontainers/kube-controller-manager:${VERSION} k8s.gcr.io/kube-controller-manager:${VERSION}
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "$1":"$2}' | sh
docker rmi coredns/coredns:1.2.2

Install the new version of kubeadm

export VERSION=1.12.1
yum install -y kubeadm-${VERSION}

On the first master node, run:

kubeadm upgrade plan

You will see output similar to the following:

[upgrade/versions] Latest version in the v1.11 series: v1.12.1

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.11.1   v1.12.1

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.1
Controller Manager   v1.11.1   v1.12.1
Scheduler            v1.11.1   v1.12.1
Kube Proxy           v1.11.1   v1.12.1
CoreDNS              1.1.3     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.1

Because the master entries in this HA cluster's kubeadm configuration use the floating IP, they must be changed to each node's actual IP before upgrading.

kubectl get configmap -n kube-system kubeadm-config -o yaml >kubeadm-config-cm.yaml

Edit kubeadm-config-cm.yaml and change the following fields to the current node's IP address:
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
Add a parameter under etcd.local.extraArgs: initial-cluster-state: existing
Change etcd.local.extraArgs.initial-cluster to the etcd cluster member list, for example:

initial-cluster: k8s1=https://192.168.1.101:2380,k8s2=https://192.168.1.102:2380,k8s3=https://192.168.1.103:2380
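
Pieced together from the field list above, the edited fragment for the first master (k8s1, 192.168.1.101) would look roughly as follows; this is only a sketch based on the listed paths, not a full dump of the ConfigMap:

api:
  advertiseAddress: 192.168.1.101
etcd:
  local:
    extraArgs:
      advertise-client-urls: https://192.168.1.101:2379        # 2379 is the etcd client port
      initial-advertise-peer-urls: https://192.168.1.101:2380   # 2380 is the etcd peer port
      listen-client-urls: https://192.168.1.101:2379
      listen-peer-urls: https://192.168.1.101:2380
      initial-cluster-state: existing
      initial-cluster: k8s1=https://192.168.1.101:2380,k8s2=https://192.168.1.102:2380,k8s3=https://192.168.1.103:2380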

After editing, apply the new configuration:

kubectl apply -f kubeadm-config-cm.yaml --force

Run the following command to start the upgrade:

kubeadm upgrade apply v$VERSION

The following message indicates the upgrade completed successfully:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.1". Enjoy!

After the first master is upgraded, continue with the remaining master nodes:

kubectl get configmap -n kube-system kubeadm-config -o yaml >kubeadm-config-cm.yaml

Edit kubeadm-config-cm.yaml and change the following fields to the current node's IP address:
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
Change etcd.local.extraArgs.name to the current node's hostname.
Add the current node's information to ClusterStatus.apiEndpoints, for example:

  ClusterStatus: |
    apiEndpoints:
      k8s1.test.local:
        advertiseAddress: 192.168.1.101
        bindPort: 6443
      k8s2.test.local:
        advertiseAddress: 192.168.1.102
        bindPort: 6443

After editing, apply the new configuration:

kubectl apply -f kubeadm-config-cm.yaml --force

Then add the cri-socket annotation for the current node:

kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock

Run the following command to start the upgrade:

kubeadm upgrade apply v$VERSION

Once all master nodes have been upgraded, manually install the new kubelet and kubectl on every node and restart kubelet:

export VERSION=1.12.1
yum install -y kubelet-${VERSION} kubectl-${VERSION}
systemctl daemon-reload
systemctl restart kubelet
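
Finally, a quick check that every node now reports the new version:

kubectl get nodes -o wide   # the VERSION column should show v1.12.1 on all nodes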