Deploying a Distributed Ceph Cluster with ceph-deploy

Introduction: This post walks through deploying a Ceph cluster with the ceph-deploy tool, covering environment preparation, hosts configuration, passwordless SSH, time synchronization, adding block devices, deploying the mon and mgr components, and initializing the OSD nodes, along with problems you may run into during deployment and how to resolve them.

I. Environment preparation for deploying a Ceph cluster with ceph-deploy

1. Environment preparation, 2 vCPU / 2 GB per node (configure hosts on all cluster nodes)

cat >> /etc/hosts <<EOF
10.0.0.250 harbor250
10.0.0.141 ceph141
10.0.0.142 ceph142
10.0.0.143 ceph143
10.0.0.144 ceph144
EOF
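
Optional sanity check (a minimal sketch, assuming the host names above): confirm that every cluster node resolves and answers ping before continuing.

# Run from harbor250; each line should print "<host> OK"
for host in ceph141 ceph142 ceph143; do
    ping -c 1 -W 1 $host > /dev/null && echo "$host OK" || echo "$host FAILED"
done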

2. Passwordless login from the ceph-deploy node to the Ceph cluster nodes

    1. Install the dependency package
[root@harbor250 ~]# yum -y install expect


    2. Passwordless login from harbor250 to the cluster nodes
[root@harbor250 ~]# cat > password_free_login.sh <<'EOF'
#!/bin/bash
# author: Jason Yin

# Generate the key pair
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q

# Declare the server password; ideally all nodes use the same password, otherwise this script needs further tweaking
export mypasswd=yinzhengjie

# Define the host list
k8s_host_list=(ceph141 ceph142 ceph143)

# Configure passwordless login, using expect to answer the prompts non-interactively
for i in ${k8s_host_list[@]};do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
  expect {
    \"*yes/no*\" {send \"yes\r\"; exp_continue}
    \"*password*\" {send \"$mypasswd\r\"; exp_continue}
  }"
done
EOF
sh password_free_login.sh
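
Optionally, verify that passwordless login really works before moving on; this loop is only a suggested check and should print each hostname without prompting for a password.

# BatchMode makes ssh fail instead of falling back to a password prompt
for i in ceph141 ceph142 ceph143; do
    ssh -o BatchMode=yes root@$i hostname
done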

3. Configure time synchronization

    1. Install on all cluster nodes ceph[141-143]
yum -y install wget unzip chrony ntpdate


    2. ceph141 acts as the time server
[root@ceph141 ~]# vim /etc/chrony.conf
# Comment out the original time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst
...
allow 10.0.0.0/24
local stratum 10
[root@ceph141 ~]# 


    3. ceph142 acts as a client
[root@ceph142 ~]# vim /etc/chrony.conf
# Comment out the original time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...


    4. ceph143 acts as a client
[root@ceph143 ~]# vim /etc/chrony.conf
# Comment out the original time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...


    5. Check on all nodes that time is synchronized
systemctl enable --now chronyd
timedatectl set-ntp true
timedatectl set-timezone Asia/Shanghai
chronyc activity -v

    Reference output:
[root@ceph143 ~]# chronyc activity -v
200 OK
1 sources online
0 sources offline
0 sources doing burst (return to online)
0 sources doing burst (return to offline)
0 sources with unknown address
[root@ceph143 ~]#
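
Besides chronyc activity, it can be worth checking which source each client is actually tracking; on ceph142/ceph143 the selected source (the line marked with '*') should be ceph141. This is an optional check:

# List candidate time sources; '^*' marks the one currently selected
chronyc sources -v
# Show stratum, offset and skew of the tracked server
chronyc tracking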

4. Add block devices

Prepare one 200GB disk and one 300GB disk on each Ceph cluster node, then check the configuration on each node.


[root@ceph141 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph141 ~]# 



[root@ceph142 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph142 ~]# 


[root@ceph143 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph143 ~]#

5. Bonus: hot-adding a disk at runtime

[root@ceph142 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph142 ~]#
[root@ceph142 ~]# for i in `seq 0 2`; do echo "- - -" > /sys/class/scsi_host/host${i}/scan;done
[root@ceph142 ~]#
[root@ceph142 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sdd               8:48   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph142 ~]#
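
Note: the `seq 0 2` above assumes the VM exposes exactly host0-host2 under /sys/class/scsi_host. If you are unsure how many SCSI hosts exist, a slightly more generic loop (same idea, just globbing whatever is present) can be used instead:

# Rescan every SCSI host so hot-added virtual disks appear without a reboot
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "${host}/scan"
done
lsblk   # the new disk should now be listed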

II. Deploying the Ceph cluster with ceph-deploy: the mon component

1. Prepare domestic (China-mirror) software repositories on all nodes

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo


Tip:
    All nodes should be configured with both the base repository and the EPEL repository.

2. Configure the Ceph repository on cluster nodes ceph[141-143]

cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF
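
After writing the repo file, you can optionally rebuild the yum cache and confirm the three Ceph repos are visible (a suggested check, not required by ceph-deploy itself):

# Refresh metadata and list the Ceph repos
yum clean all && yum makecache fast
yum repolist | grep -i ceph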

3. Install the ceph-deploy tool

[root@harbor250 ~]# yum -y install ceph-deploy

4. Install the "distribute" package

yum -y install gcc python-setuptools python-devel wget
wget https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip --no-check-certificate
unzip distribute-0.7.3.zip
cd distribute-0.7.3
python setup.py install


Tip:
    Install this on all nodes; the "pkg_resources" module provided by this package will be needed.
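
A quick way to confirm the module is importable on each node (optional check):

# Prints "pkg_resources" on success; an ImportError means the install failed
python -c "import pkg_resources; print(pkg_resources.__name__)"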

5. Prepare the working directory on the ceph-deploy node

[root@harbor250 ~]# mkdir -pv /yinzhengjie/softwares/ceph-cluster/ && cd /yinzhengjie/softwares/ceph-cluster/

6. Initialize the mon component

    1. Install the base "ceph" and "ceph-radosgw" packages
[root@harbor250 ceph-cluster]# ceph-deploy install --no-adjust-repos ceph141 ceph142 ceph143 
...
[ceph141][DEBUG ] Complete!
...
[ceph142][DEBUG ] Complete!
... 
[ceph143][DEBUG ] Complete!
[ceph143][INFO  ] Running command: ceph --version
[ceph143][DEBUG ] ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
[root@harbor250 ceph-cluster]#


    2. Install the Python dependencies for "needs_ssh"; this process upgrades the "ceph-deploy" version
[root@harbor250 ceph-cluster]# ceph-deploy --version
1.5.25
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# cat > /etc/yum.repos.d/oldboyedu-ceph.repo <<EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
EOF
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# yum clean all
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# yum -y install ceph-deploy
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# ceph-deploy --version
1.5.39
[root@harbor250 ceph-cluster]# 


    3. Bootstrap a new cluster; this step generates ceph.conf and ceph.mon.keyring
[root@harbor250 ceph-cluster]# ceph-deploy new --public-network 10.0.0.0/24 ceph141 ceph142 ceph143
...
[ceph142][DEBUG ] IP addresses found: [u'10.0.0.141']
...
[ceph142][DEBUG ] IP addresses found: [u'10.0.0.142']
...
[ceph_deploy.new][DEBUG ] Monitor ceph143 at 10.0.0.143
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph141', 'ceph142', 'ceph143']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'10.0.0.141', u'10.0.0.142', u'10.0.0.143']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# cat ceph.conf 
[global]
fsid = 5821e29c-326d-434d-a5b6-c492527eeaad
public_network = 10.0.0.0/24
mon_initial_members = ceph141, ceph142, ceph143
mon_host = 10.0.0.141,10.0.0.142,10.0.0.143
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# cat ceph.mon.keyring 
[mon.]
key = AQA3FrplAAAAABAAkfo2+82aVglQlRbTmd0Hqg==
caps mon = allow *
[root@harbor250 ceph-cluster]# 



    4. Deploy the mon component and start the ceph-mon processes; this generates the initial configuration files, systemd units, and related programs
[root@harbor250 ceph-cluster]# ceph-deploy mon create-initial
...
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpX5fxSl
[root@harbor250 ceph-cluster]# 


    5. Check that each node is listening on port 6789, the port the Ceph cluster uses to talk to clients
[root@ceph141 ~]# ss -ntl
State       Recv-Q Send-Q                      Local Address:Port                                     Peer Address:Port              
...                 
LISTEN      0      128                            10.0.0.141:3300                                                *:*                  
LISTEN      0      128                            10.0.0.141:6789                                                *:*                  
...               
[root@ceph141 ~]# 


[root@ceph142 ~]# ss -ntl
State       Recv-Q Send-Q                      Local Address:Port                                     Peer Address:Port              
...                 
LISTEN      0      128                            10.0.0.142:3300                                                *:*                  
LISTEN      0      128                            10.0.0.142:6789                                                *:*                  
...               
[root@ceph142 ~]# 


[root@ceph143 ~]# ss -ntl
State       Recv-Q Send-Q                      Local Address:Port                                     Peer Address:Port              
...                 
LISTEN      0      128                            10.0.0.143:3300                                                *:*                  
LISTEN      0      128                            10.0.0.143:6789                                                *:*                  
...               
[root@ceph143 ~]# 


Tip:
    This step also brings up a listener on port 3300. Since Nautilus, 3300 is the port for the newer msgr2 protocol, while 6789 remains the legacy msgr1 port for client communication.
[root@ceph141 ~]# netstat -untalp | grep 6789
tcp        0      0 10.0.0.141:6789         0.0.0.0:*               LISTEN      2158/ceph-mon       
[root@ceph141 ~]# 
[root@ceph141 ~]# netstat -untalp | grep 3300
tcp        0      0 10.0.0.141:3300         0.0.0.0:*               LISTEN      2158/ceph-mon       
tcp        0      0 10.0.0.141:3300         10.0.0.142:45594        ESTABLISHED 2158/ceph-mon       
tcp        0      0 10.0.0.141:49036        10.0.0.143:3300         ESTABLISHED 2158/ceph-mon       
[root@ceph141 ~]#
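
If a node is not listening on 6789/3300, inspecting the systemd unit is usually the fastest way to see why (optional troubleshooting sketch; note that `ceph -s` will not work yet, because the admin keyring is only distributed in the next section):

# Check the local mon daemon; replace ceph141 with the node's own hostname
systemctl status ceph-mon@ceph141 --no-pager
journalctl -u ceph-mon@ceph141 -n 20 --no-pager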

III. Deploying the Ceph cluster with ceph-deploy: designating the admin nodes

1. Designate the Ceph admin nodes

[root@harbor250 ceph-cluster]# ceph-deploy admin ceph141 ceph142 ceph143
...
[ceph141][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
...
[ceph142][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
...
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph143
[ceph143][DEBUG ] connected to host: ceph143 
[ceph143][DEBUG ] detect platform information from remote host
[ceph143][DEBUG ] detect machine type
[ceph143][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@harbor250 ceph-cluster]# 


Tip:
    This step essentially just copies the keyring and config to the designated admin nodes; the corresponding files are "/etc/ceph/{ceph.client.admin.keyring,ceph.conf}".
[root@ceph141 ~]# ll /etc/ceph/{ceph.client.admin.keyring,ceph.conf}
-rw------- 1 root root 151 Jan 31 17:54 /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 264 Jan 31 17:54 /etc/ceph/ceph.conf
[root@ceph141 ~]# 


[root@ceph142 ~]# ll /etc/ceph/{ceph.client.admin.keyring,ceph.conf}
-rw------- 1 root root 151 Jan 31 17:54 /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 264 Jan 31 17:54 /etc/ceph/ceph.conf
[root@ceph142 ~]# 


[root@ceph143 ~]# ll /etc/ceph/{ceph.client.admin.keyring,ceph.conf}
-rw------- 1 root root 151 Jan 31 17:54 /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 264 Jan 31 17:54 /etc/ceph/ceph.conf
[root@ceph143 ~]#

2. Verify the Ceph cluster status on an admin node

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 8m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

[root@ceph141 ~]#

3. Disable insecure global_id reclaim

[root@ceph141 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 10m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

[root@ceph141 ~]#
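
You can optionally confirm that the setting took effect with `ceph config get`:

# Should print "false" once insecure global_id reclaim is disabled
ceph config get mon auth_allow_insecure_global_id_reclaim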

IV. Deploying the Ceph cluster with ceph-deploy: initializing the OSD nodes

1. Check the device information on each node before initialization

[root@ceph141 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph141 ~]# 


[root@ceph142 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sdd               8:48   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph142 ~]# 


[root@ceph143 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk 
sdc               8:32   0  300G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph143 ~]# 


Tip:
    If OSD initialization hangs at this stage for a long time (more than a minute or so), try rebooting the affected node; that usually resolves it. The issue is suspected to be related to the hot-added disks.

2. Replace the ceph-deploy version

[root@harbor250 ceph-cluster]# ceph-deploy --version
1.5.39
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# yum -y remove ceph-deploy
[root@harbor250 ceph-cluster]# yum -y install  python-pip
[root@harbor250 ceph-cluster]# pip install ceph-deploy==2.0.1 -i https://mirrors.aliyun.com/pypi/simple
[root@harbor250 ceph-cluster]# ceph-deploy --version
2.0.1
[root@harbor250 ceph-cluster]#

3. Initialize the OSD devices on ceph141

[root@harbor250 ceph-cluster]#  ceph-deploy osd create --data /dev/sdb ceph141
...
[ceph141][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph141][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph141][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph141][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph141][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph141][INFO  ] checking OSD status...
[ceph141][DEBUG ] find the location of an executable
[ceph141][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph141 is now ready for osd use.
[root@harbor250 ceph-cluster]#  
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# ceph-deploy osd create --data /dev/sdc ceph141
...
[ceph141][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[ceph141][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[ceph141][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph141][INFO  ] checking OSD status...
[ceph141][DEBUG ] find the location of an executable
[ceph141][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph141 is now ready for osd use. 
[root@harbor250 ceph-cluster]#

4. Initialize the OSD devices on ceph142

[root@harbor250 ceph-cluster]#  ceph-deploy osd create --data /dev/sdb ceph142
...
[ceph142][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph142][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph142][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph142][INFO  ] checking OSD status...
[ceph142][DEBUG ] find the location of an executable
[ceph142][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph142 is now ready for osd use.
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]#  ceph-deploy osd create --data /dev/sdc ceph142
...
[ceph142][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph142][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[ceph142][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[ceph142][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph142][INFO  ] checking OSD status...
[ceph142][DEBUG ] find the location of an executable
[ceph142][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph142 is now ready for osd use.
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# ceph-deploy osd create --data /dev/sdd ceph142
...
[ceph142][WARNIN] Running command: /bin/systemctl start ceph-osd@4
[ceph142][WARNIN] --> ceph-volume lvm activate successful for osd ID: 4
[ceph142][WARNIN] --> ceph-volume lvm create successful for: /dev/sdd
[ceph142][INFO  ] checking OSD status...
[ceph142][DEBUG ] find the location of an executable
[ceph142][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph142 is now ready for osd use.
[root@harbor250 ceph-cluster]#

5. Initialize the OSD devices on ceph143

[root@harbor250 ceph-cluster]# ceph-deploy osd create --data /dev/sdb ceph143
...
[ceph143][WARNIN] Running command: /bin/systemctl start ceph-osd@5
[ceph143][WARNIN] --> ceph-volume lvm activate successful for osd ID: 5
[ceph143][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph143][INFO  ] checking OSD status...
[ceph143][DEBUG ] find the location of an executable
[ceph143][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph143 is now ready for osd use.
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# ceph-deploy osd create --data /dev/sdc ceph143
...
[ceph143][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@6
[ceph143][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@6.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph143][WARNIN] Running command: /bin/systemctl start ceph-osd@6
[ceph143][WARNIN] --> ceph-volume lvm activate successful for osd ID: 6
[ceph143][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph143][INFO  ] checking OSD status...
[ceph143][DEBUG ] find the location of an executable
[ceph143][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph143 is now ready for osd use.
[root@harbor250 ceph-cluster]#
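
At this point all seven OSDs should be up and in. A quick way to verify the placement per host (optional check, run on an admin node such as ceph141):

# CRUSH tree: expect 2 OSDs under ceph141/ceph143 and 3 under ceph142
ceph osd tree
# Per-OSD capacity and utilization
ceph osd df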

6. Check the device information on each node again after initialization

[root@ceph141 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  200G  0 disk 
└─ceph--f5c696a7--6f03--4129--84c6--2cd1e057e234-osd--block--2e6612cc--fa0e--403b--9ea0--3023e6c536c6
                                                                                                    253:2    0  200G  0 lvm  
sdc                                                                                                   8:32   0  300G  0 disk 
└─ceph--80d13c4e--bbc1--42d2--8e6a--d93a7480adfc-osd--block--ee7ad091--20a7--4600--a94a--9c0281f8e79f
                                                                                                    253:3    0  300G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph141 ~]# 
[root@ceph141 ~]# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  centos                                      1   2   0 wz--n-  <19.00g    0 
  ceph-80d13c4e-bbc1-42d2-8e6a-d93a7480adfc   1   1   0 wz--n- <300.00g    0 
  ceph-f5c696a7-6f03-4129-84c6-2cd1e057e234   1   1   0 wz--n- <200.00g    0 
[root@ceph141 ~]# 



[root@ceph142 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  200G  0 disk 
└─ceph--014ad47e--0e5b--47d9--8659--8922e1f74212-osd--block--66310a40--46eb--4e47--8706--4ebc455c161d
                                                                                                    253:2    0  200G  0 lvm  
sdc                                                                                                   8:32   0  300G  0 disk 
└─ceph--9c925388--2a60--439c--be06--95112f5b126b-osd--block--3003810f--42ee--4a6d--bd5c--8878b9f2a307
                                                                                                    253:3    0  300G  0 lvm  
sdd                                                                                                   8:48   0  500G  0 disk 
└─ceph--8dd8cc11--0ba1--4a07--9b4e--83044fb181f2-osd--block--0f234c3b--a0b9--4912--a351--f0d39ae93834
                                                                                                    253:4    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph142 ~]# 
[root@ceph142 ~]# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  centos                                      1   2   0 wz--n-  <19.00g    0 
  ceph-014ad47e-0e5b-47d9-8659-8922e1f74212   1   1   0 wz--n- <200.00g    0 
  ceph-8dd8cc11-0ba1-4a07-9b4e-83044fb181f2   1   1   0 wz--n- <500.00g    0 
  ceph-9c925388-2a60-439c-be06-95112f5b126b   1   1   0 wz--n- <300.00g    0 
[root@ceph142 ~]# 



[root@ceph143 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  200G  0 disk 
└─ceph--2067a000--efbe--4e0a--804d--5a4f8d3a9e1c-osd--block--4c34a506--2fa0--47ad--9f01--1080d389dcd3
                                                                                                    253:2    0  200G  0 lvm  
sdc                                                                                                   8:32   0  300G  0 disk 
└─ceph--9b9d20ca--6eb4--45eb--bc96--3020f43af4b6-osd--block--4a6082bc--ba84--41f3--94d9--daff6942517f
                                                                                                    253:3    0  300G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph143 ~]# 
[root@ceph143 ~]# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  centos                                      1   2   0 wz--n-  <19.00g    0 
  ceph-2067a000-efbe-4e0a-804d-5a4f8d3a9e1c   1   1   0 wz--n- <200.00g    0 
  ceph-9b9d20ca-6eb4-45eb-bc96-3020f43af4b6   1   1   0 wz--n- <300.00g    0 
[root@ceph143 ~]# 


Tip:
    Ceph manages these devices with LVM under the hood (via ceph-volume lvm).
[root@ceph143 ~]# lvs
  LV                                             VG                                        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                                           centos                                    -wi-ao----  <17.00g                                                    
  swap                                           centos                                    -wi-ao----    2.00g                                                    
  osd-block-e2ee73ae-c94e-4bb8-a0c4-ab24f7654237 ceph-2f9b8018-7242-4eae-9b89-454c56222d72 -wi-ao---- <200.00g                                                    
  osd-block-04eb39e9-1dc6-4446-930c-1c2434674b1e ceph-a72237c7-f9ec-4228-a3f3-1b4d5625fb62 -wi-ao---- <300.00g                                                    
[root@ceph143 ~]# 
[root@ceph143 ~]# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  centos                                      1   2   0 wz--n-  <19.00g    0 
  ceph-2f9b8018-7242-4eae-9b89-454c56222d72   1   1   0 wz--n- <200.00g    0 
  ceph-a72237c7-f9ec-4228-a3f3-1b4d5625fb62   1   1   0 wz--n- <300.00g    0 
[root@ceph143 ~]# 
[root@ceph143 ~]# pvs
  PV         VG                                        Fmt  Attr PSize    PFree
  /dev/sda2  centos                                    lvm2 a--   <19.00g    0 
  /dev/sdb   ceph-2f9b8018-7242-4eae-9b89-454c56222d72 lvm2 a--  <200.00g    0 
  /dev/sdc   ceph-a72237c7-f9ec-4228-a3f3-1b4d5625fb62 lvm2 a--  <300.00g    0 
[root@ceph143 ~]#

V. Deploying the Ceph cluster with ceph-deploy: initializing mgr

1. Check the cluster status before initializing the mgr component

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 32m)
    mgr: no daemons active
    osd: 7 osds: 7 up (since 5m), 7 in (since 5m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

[root@ceph141 ~]# 



Tip:
    As shown above, before the mgr component is initialized the cluster reports HEALTH_WARN with "no active mgr", and the usage field shows no capacity: "usage: 0 B used, 0 B / 0 B avail". Without an active mgr, the cluster's available storage capacity is not reported.

2. Initialize the mgr component

[root@harbor250 ceph-cluster]#  ceph-deploy mgr create ceph141 ceph142 ceph143
...
[ceph141][INFO  ] Running command: systemctl enable ceph.target
...
[ceph142][INFO  ] Running command: systemctl enable ceph.target
...
[ceph143][INFO  ] Running command: systemctl enable ceph-mgr@ceph143
[ceph143][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph143.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph143][INFO  ] Running command: systemctl start ceph-mgr@ceph143
[ceph143][INFO  ] Running command: systemctl enable ceph.target
[root@harbor250 ceph-cluster]#

3. After initialization, check the Ceph cluster status again

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 35m)
    mgr: ceph141(active, since 2m), standbys: ceph143, ceph142
    osd: 7 osds: 7 up (since 9m), 7 in (since 9m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   7.0 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     

[root@ceph141 ~]#
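
With an active mgr the capacity statistics are reported; as a final optional check, `ceph df` on any admin node shows the cluster-wide and per-pool breakdown (pools will appear once they are created later):

# Cluster-wide usage plus per-pool statistics
ceph df
# Usage broken down per host and per OSD
ceph osd df tree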