Category Archives: Storage

Installing Ceph on CentOS (with ceph-deploy)

————————————– Preparation ————————————–
Install the EPEL repository

sudo yum install -y epel-release yum-plugin-priorities python2-pip

Add the Ceph repository. Here mimic is the current Ceph release; replace it with a newer release name if one is available

cat << EOF > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF

Update the system and install the ceph-deploy tool

sudo yum update -y
sudo yum install -y ceph-deploy

Install and configure the time service

sudo yum install -y chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd

Add a ceph user and set its password

sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph

Set up passwordless SSH login

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
ssh-copy-id ceph@ceph-node1
ssh-copy-id ceph@ceph-node2
ssh-copy-id ceph@ceph-node3
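On the admin node it can also help to add an ~/.ssh/config so that ceph-deploy logs in to each node as the ceph user without extra options; this fragment is a sketch using the host names from this example:

```
# ~/.ssh/config on the admin node
Host ceph-node1
    Hostname ceph-node1
    User ceph
Host ceph-node2
    Hostname ceph-node2
    User ceph
Host ceph-node3
    Hostname ceph-node3
    User ceph
```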

Open firewall ports

On monitor nodes, allow the ceph-mon service
sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
On OSD and MDS nodes, allow the ceph service
sudo firewall-cmd --zone=public --add-service=ceph --permanent
Reload the firewall configuration
sudo firewall-cmd --reload

Disable SELinux

sudo setenforce 0
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

————————————– Deploying the Cluster ————————————–
The nodes in this example are as follows:

+------------+---------------+---------+
|  Hostname  |   IP Address  |   Role  |
+------------+---------------+---------+
| ceph-node1 | 192.168.1.101 | mon,osd |
+------------+---------------+---------+
| ceph-node2 | 192.168.1.102 | osd     |
+------------+---------------+---------+
| ceph-node3 | 192.168.1.103 | osd     |
+------------+---------------+---------+

Create a directory to hold the output of the ceph-deploy tool

mkdir my-cluster && cd my-cluster

If anything goes wrong during configuration, the following commands wipe the configuration so you can start over

ceph-deploy purge ceph-node1 ceph-node2 ceph-node3
ceph-deploy purgedata ceph-node1 ceph-node2 ceph-node3
ceph-deploy forgetkeys
rm ceph.*

Create the cluster. This command generates the ceph.conf file

ceph-deploy new ceph-node1

If a node has multiple IPs, the public-facing network must be specified in ceph.conf

Add the following to the [global] section, e.g.:
public network = 192.168.1.0/24
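For reference, after this edit the ceph.conf written by ceph-deploy new typically looks like the sketch below; the fsid is generated (left here as a placeholder), and the addresses come from the node table in this example:

```
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = ceph-node1
mon_host = 192.168.1.101
public network = 192.168.1.0/24
```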

Install the Ceph packages

ceph-deploy install ceph-node1 ceph-node2 ceph-node3

Initialize the monitor(s) and generate the keys

ceph-deploy mon create-initial

Copy the configuration file and keys to the other admin nodes; here they are copied to all nodes for convenience

ceph-deploy admin ceph-node1 ceph-node2 ceph-node3

Create the manager daemon
ceph-deploy mgr create ceph-node1
Add OSDs. The command is ceph-deploy osd create --data {device} {ceph-node}; in this example /dev/sdb is a disk available on each node.
If a host provides an LVM volume rather than a raw block device, replace the device with vg_name/lv_name

ceph-deploy osd create --data /dev/sdb ceph-node1
ceph-deploy osd create --data /dev/sdb ceph-node2
ceph-deploy osd create --data /dev/sdb ceph-node3

Check the cluster health

ssh ceph-node1 sudo ceph health
ssh ceph-node1 sudo ceph -s

————————————– Expanding the Cluster ————————————–
In the initial deployment above we used node1 as the only monitor node, so if node1 goes down the whole cluster becomes unavailable.
A highly available deployment therefore runs at least three monitor nodes (an odd number is recommended), so we adjust the cluster layout as follows:

+------------+---------------+-----------------+
|  Hostname  |   IP Address  |       Role      |
+------------+---------------+-----------------+
| ceph-node1 | 192.168.1.101 | mon,osd,mgr,mds |
+------------+---------------+-----------------+
| ceph-node2 | 192.168.1.102 | mon,osd         |
+------------+---------------+-----------------+
| ceph-node3 | 192.168.1.103 | mon,osd         |
+------------+---------------+-----------------+
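The preference for an odd monitor count comes from majority-quorum arithmetic: a fourth monitor does not let the cluster survive any more failures than three monitors do. A minimal sketch:

```python
# A monitor cluster stays usable while a strict majority is alive.
def tolerated_failures(n_monitors):
    majority = n_monitors // 2 + 1   # smallest strict majority
    return n_monitors - majority     # monitors that may fail

for n in range(1, 6):
    print(f"{n} monitor(s): survives {tolerated_failures(n)} failure(s)")
# 3 and 4 monitors both survive only 1 failure, so an even count adds
# cost without adding fault tolerance.
```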

To use CephFS, at least one metadata server is needed; here node1 is set up as the metadata server

ceph-deploy mds create ceph-node1

Add monitors

ceph-deploy mon add ceph-node2 ceph-node3

Verify the monitor quorum status

ceph quorum_status --format json-pretty

Add more Ceph managers. Managers run in active/standby mode: when the active node fails, a standby node takes over

ceph-deploy mgr create ceph-node2 ceph-node3

Check the standby manager status

ssh ceph-node1 sudo ceph -s

How Big Are EB, ZB, and YB?

1,000 bytes is a kilobyte (KB)

1,000,000 bytes is a megabyte (MB)

1,000,000,000 bytes is a gigabyte (GB)

1,000,000,000,000 bytes is a terabyte (TB)

1,000,000,000,000,000 bytes is a petabyte (PB)

1,000,000,000,000,000,000 bytes is an exabyte (EB)

1,000,000,000,000,000,000,000 bytes is a zettabyte (ZB)

1,000,000,000,000,000,000,000,000 bytes is a yottabyte (YB)
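The list above is just successive powers of 1,000; a small sketch (the function name is illustrative) that renders a byte count with these decimal (SI) units:

```python
# Decimal (SI) units, one step per factor of 1000, as in the list above.
UNITS = ["bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def human_size(n_bytes):
    """Render a byte count using the largest applicable SI unit."""
    n = float(n_bytes)
    unit = 0
    while n >= 1000 and unit < len(UNITS) - 1:
        n /= 1000.0
        unit += 1
    return f"{n:g} {UNITS[unit]}"

print(human_size(10**18))  # 1 EB
print(human_size(10**24))  # 1 YB
```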

How to Help VMware Administrators Better Understand NetApp Technologies

Think of vFiler and MultiStore as a Virtual Machine (VM)
vFiler and MultiStore are NetApp's storage virtualization technologies

Think of Data Motion as VMotion / Storage VMotion
Data Motion is NetApp's seamless data migration technology

Think of a NetApp Cluster as a VMware FT / HA Cluster
For most functions, NetApp clustering, like FT, fails over in real time with no downtime

Think of an Aggregate as a Resource Pool
A NetApp Aggregate is a pool of resources made up of a group of disks

Think of ASIS (Deduplication) as Transparent Page Sharing (TPS)
ASIS and TPS share the same concept of removing duplicate data, except that ASIS works on disk while TPS works on memory

Think of FlexClone as Clone
Clones created by NetApp's FlexClone take up almost no extra space, whereas a VMware Clone is a 1:1 copy

Think of a VIF as a vSwitch
A VIF is a virtual NIC created by bonding some of the NICs on a NetApp storage system; the concept is the same as a vSwitch

Think of Snapshot as Snapshot
This one needs no explanation…

A Brief Explanation of DAS, SAN, and NAS

People keep asking me what DAS, SAN, and NAS are; the differences are actually quite simple

DAS: Direct Attached Storage, which, as the name implies, is storage attached directly to the server. Whenever no switch sits between the server and the storage, it is DAS; local disks and storage connected directly through an HBA (Host Bus Adapter) are both DAS. DAS interface types include IDE/PATA (essentially obsolete), SCSI (nearly obsolete), FC (rarely used for DAS), and SAS (the current mainstream interface).

NetApp VSC 1.0 for vSphere

NetApp has made NetApp® Virtual Storage Console 1.0 for VMware vSphere available for download on its NOW site

The main features of VSC are:

  • Support for ESX 4.0 and ESXi 4.0 hosts.
  • Limited support for ESX 3.5 and ESXi 3.5 hosts.
  • Viewing the status of storage controllers from a SAN (FC and iSCSI) perspective.
  • Viewing the status of storage controllers from a NAS (NFS) perspective.
  • Viewing the status of ESX hosts, including ESX version and overall status.
  • Checking at a glance whether the following are configured correctly, and if not, automatically setting the correct values without needing to access the ESX console. You can select multiple ESX hosts and update settings for all hosts with a single command.
    • Storage adapter timeouts
    • Multipathing settings
    • NFS settings
  • Setting credentials to access storage controllers.
  • Collecting diagnostic information from the ESX hosts, storage controllers, and Fibre Channel switches.
  • Tools to identify and correct misaligned disk partitions.
  • Tools to set guest operating system timeouts.

NetApp SMVI 2.0

NetApp announced SMVI (SnapManager for Virtual Infrastructure) at VMworld 2009

SMVI 2.0 brings the following features:

  • AutoSupport integration (note: AutoSupport is NetApp's automated alerting system)
  • Backup enhancements and a redesigned user interface
  • Changes to snapshot naming
  • Scripting support
  • Restore enhancements
  • Single file restore (extremely useful)
  • Self-service restore
  • Limited self-service restore
  • Administrator-assisted restore
  • Restore agent

There is a video on YouTube introducing SMVI 2.0: http://www.youtube.com/watch?v=VWy1Sc9dtGs (may not be accessible from within China)

Top Reasons to use NetApp for Virtualized Data Centers

1 – NetApp continues to win awards for outstanding storage solutions for virtualization.  In 2009 it was from Microsoft.  NetApp provides outstanding solutions for customers looking to design or deploy Hyper-V R2.

2 – If you want to see the latest demonstrations of NetApp technologies for virtual environments, check out the Virtualization Channel on NetAppTV.