A Hands-on Guide to Scaling OSD Devices In and Out on a Ceph Cluster

Summary: This article walks through scaling OSD devices in and out on a Ceph cluster: how to add new OSD devices, how to prepare and deploy them, and how to safely remove OSDs and purge the related entries from the CRUSH map.

I. Scaling out OSD devices on a Ceph cluster

1. Prerequisites for adding an OSD

Why add an OSD?
    As a Ceph cluster is used over time, its capacity is gradually consumed. When resources run low, the cluster has to be expanded, and for Ceph storage capacity this simply means adding more OSD devices.

Prerequisites for this walkthrough:
    We add a brand-new node to the cluster and attach two extra 500GB disks to it.

If the new disks are not yet visible to the operating system, rescan the SCSI bus as shown below.
[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
[root@ceph144 ~]# 
[root@ceph144 ~]# for i in `seq 0 2`; do echo "- - -" > /sys/class/scsi_host/host${i}/scan;done
[root@ceph144 ~]# 
[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  500G  0 disk 
sdc               8:32   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]#
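The `seq 0 2` loop above hardcodes three SCSI hosts, but the number of hosts varies by machine. As a more general sketch (the `rescan_scsi_hosts` helper and its base-directory parameter are invented here for illustration), every host can be rescanned by globbing the sysfs directory:

```shell
# Rescan all SCSI hosts so newly attached (virtual) disks show up without a
# reboot. The base directory is a parameter only to make the sketch testable;
# on a real system it defaults to /sys/class/scsi_host.
rescan_scsi_hosts() {
  local base="${1:-/sys/class/scsi_host}" host
  for host in "$base"/host*; do
    # "- - -" means: rescan every channel, target, and LUN on this host
    [ -w "$host/scan" ] && echo "- - -" > "$host/scan"
  done
}
```

After running `rescan_scsi_hosts`, `lsblk` should show the new disks as above.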

2. Install the Ceph packages on the new node

    1 Prepare the domestic mirror repositories (base OS repo and EPEL repo)
[root@ceph144 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph144 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

    2 Configure the Ceph repository on all nodes
[root@ceph144 ~]# cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF

    3 Install the ceph-osd package
[root@ceph144 ~]# yum -y install ceph-osd

3. Check the cluster state on the admin node before adding the OSD devices

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 7 osds: 7 up (since 22h), 7 in (since 22h)

  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   7.8 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     96 active+clean

[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
[root@ceph141 ~]#

4. Device state on the node to be added

[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  500G  0 disk 
sdc               8:32   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]#

5. Set up passwordless SSH login from the ceph-deploy node to the new node

[root@harbor250 ~]# ssh-copy-id ceph144

6. Add the new devices from the ceph-deploy node

[root@harbor250 ~]#  cd /yinzhengjie/softwares/ceph-cluster
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# ceph-deploy osd create ceph144 --data /dev/sdb  # add the "/dev/sdb" disk on node ceph144
...
[ceph144][WARNIN] Running command: /bin/systemctl start ceph-osd@7
[ceph144][WARNIN] --> ceph-volume lvm activate successful for osd ID: 7
[ceph144][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph144][INFO  ] checking OSD status...
[ceph144][DEBUG ] find the location of an executable
[ceph144][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph144 is now ready for osd use.
[root@harbor250 ceph-cluster]#  
[root@harbor250 ceph-cluster]# ceph-deploy osd create ceph144 --data /dev/sdc  # add the "/dev/sdc" disk on node ceph144
...
[ceph144][WARNIN] Running command: /bin/systemctl start ceph-osd@8
[ceph144][WARNIN] --> ceph-volume lvm activate successful for osd ID: 8
[ceph144][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph144][INFO  ] checking OSD status...
[ceph144][DEBUG ] find the location of an executable
[ceph144][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph144 is now ready for osd use.
[root@harbor250 ceph-cluster]#

7. Check the device state on the OSD node after the addition

[root@ceph144 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  500G  0 disk 
└─ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
                                                                                                    253:2    0  500G  0 lvm  
sdc                                                                                                   8:32   0  500G  0 disk 
└─ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
                                                                                                    253:3    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph144 ~]#

8. Check the OSD status again from the admin node

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 9 osds: 9 up (since 91s), 9 in (since 91s)

  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   10 GiB used, 2.9 TiB / 2.9 TiB avail
    pgs:     96 active+clean

[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7        up  1.00000 1.00000 
 8   hdd 0.48830         osd.8        up  1.00000 1.00000 
[root@ceph141 ~]#

II. Scaling in OSD devices on a Ceph cluster

1. On the admin node, find the UUID corresponding to each OSD

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7        up  1.00000 1.00000 
 8   hdd 0.48830         osd.8        up  1.00000 1.00000 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd dump | egrep "osd.7|osd.8"
osd.7 up   in  weight 1 up_from 548 up_thru 563 down_at 0 last_clean_interval [0,0) [v2:10.0.0.144:6800/12665,v1:10.0.0.144:6801/12665] [v2:10.0.0.144:6802/12665,v1:10.0.0.144:6803/12665] exists,up ec3ba06b-cacf-4392-820e-155c3b0b675d
osd.8 up   in  weight 1 up_from 564 up_thru 573 down_at 0 last_clean_interval [0,0) [v2:10.0.0.144:6808/13111,v1:10.0.0.144:6809/13111] [v2:10.0.0.144:6810/13111,v1:10.0.0.144:6811/13111] exists,up 1b134e65-ef8b-4464-932e-14bb23ebfc4e
[root@ceph141 ~]#

2. On the OSD node, verify that the UUIDs match

[root@ceph144 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  500G  0 disk 
└─ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
                                                                                                    253:2    0  500G  0 lvm  
sdc                                                                                                   8:32   0  500G  0 disk 
└─ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
                                                                                                    253:3    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph144 ~]#

3. Mark the OSDs out from the admin node

    1 Run this in a separate terminal
[root@ceph142 ~]# ceph -w  # watch the cluster's data migration in real time; run this in its own terminal
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 9 osds: 9 up (since 5m), 9 in (since 5m)

  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   10 GiB used, 2.9 TiB / 2.9 TiB avail
    pgs:     96 active+clean


...  # once "ceph osd out ..." has been executed, messages like the following appear.

2024-02-01 16:57:27.280545 mon.ceph141 [INF] Client client.admin marked osd.7 out, while it was still marked up
2024-02-01 16:57:32.497417 mon.ceph141 [WRN] Health check failed: Degraded data redundancy: 11/222 objects degraded (4.955%), 2 pgs degraded (PG_DEGRADED)
2024-02-01 16:57:38.912016 mon.ceph141 [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11/222 objects degraded (4.955%), 2 pgs degraded)
2024-02-01 16:57:38.912062 mon.ceph141 [INF] Cluster is now healthy
2024-02-01 16:58:29.198849 mon.ceph141 [INF] Client client.admin marked osd.8 out, while it was still marked up
2024-02-01 16:58:33.044142 mon.ceph141 [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2024-02-01 16:58:33.044162 mon.ceph141 [WRN] Health check failed: Degraded data redundancy: 36/222 objects degraded (16.216%), 7 pgs degraded (PG_DEGRADED)
2024-02-01 16:58:38.462308 mon.ceph141 [WRN] Health check update: Reduced data availability: 6 pgs peering (PG_AVAILABILITY)
2024-02-01 16:58:39.469179 mon.ceph141 [WRN] Health check update: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)
2024-02-01 16:58:39.469198 mon.ceph141 [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 6 pgs peering)
2024-02-01 16:58:42.549459 mon.ceph141 [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1 pg degraded)
2024-02-01 16:58:42.549485 mon.ceph141 [INF] Cluster is now healthy
...


    2 Then, in another separate terminal
[root@ceph141 ~]# ceph osd out osd.7  # marking an OSD out sets its reweight to 0
marked out osd.7. 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd out osd.8
marked out osd.8. 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7        up        0 1.00000 
 8   hdd 0.48830         osd.8        up        0 1.00000 
[root@ceph141 ~]#
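Before stopping an OSD that has been marked out, it is worth waiting until the cluster can actually tolerate losing it. Since Luminous, `ceph osd safe-to-destroy` reports whether an OSD can be removed without risking data. A hedged sketch of a drain helper (the `drain_osd` function name is invented for illustration):

```shell
# Mark the OSD out, then wait until Ceph reports it is safe to destroy,
# i.e. all of its PGs have been backfilled elsewhere.
drain_osd() {
  local id="$1"
  ceph osd out "osd.${id}"
  until ceph osd safe-to-destroy "osd.${id}"; do
    sleep 10   # backfill can take a while on large OSDs
  done
}
```

Once `drain_osd 7` returns, the OSD daemon can be stopped on its node.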

4. Stop the OSD daemons on the OSD node

[root@ceph144 ~]# ps -ef | grep ceph
root       12299       1  0 16:48 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
ceph       12665       1  0 16:50 ?        00:00:05 /usr/bin/ceph-osd -f --cluster ceph --id 7 --setuser ceph --setgroup ceph
ceph       13111       1  0 16:51 ?        00:00:05 /usr/bin/ceph-osd -f --cluster ceph --id 8 --setuser ceph --setgroup ceph
root       13242   12245  0 17:00 pts/1    00:00:00 grep --color=auto ceph
[root@ceph144 ~]# 
[root@ceph144 ~]# 
[root@ceph144 ~]# systemctl disable --now ceph-osd@7
[root@ceph144 ~]# 
[root@ceph144 ~]# systemctl disable --now ceph-osd@8
[root@ceph144 ~]# 
[root@ceph144 ~]# ps -ef | grep ceph
root       12299       1  0 16:48 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
root       13293   12245  0 17:00 pts/1    00:00:00 grep --color=auto ceph
[root@ceph144 ~]#

5. Check the OSD status again on the admin node

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7      down        0 1.00000 
 8   hdd 0.48830         osd.8      down        0 1.00000 
[root@ceph141 ~]#

6. Delete the OSDs from the admin node

    1 Delete the OSD authentication keys
[root@ceph141 ~]# ceph auth del osd.7
updated
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph auth del osd.8
updated
[root@ceph141 ~]# 

    2 Delete the OSDs; note that their status changes to DNE (does not exist)
[root@ceph141 ~]# ceph osd rm 7
removed osd.7
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd rm 8
removed osd.8
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7       DNE        0         
 8   hdd 0.48830         osd.8       DNE        0         
[root@ceph141 ~]#
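On Luminous and later (so also on the Nautilus cluster used here), steps 6 and 8 can be collapsed into one command: `ceph osd purge` removes the OSD, its auth key, and its CRUSH entry in a single step. A hedged sketch (`purge_osds` is a hypothetical helper name):

```shell
# One-step removal on Luminous+: purge = auth del + osd rm + crush remove.
# Run this only after the OSDs have been marked out and their daemons stopped.
purge_osds() {
  local id
  for id in "$@"; do
    ceph osd purge "$id" --yes-i-really-mean-it
  done
}
```

For example, `purge_osds 7 8` would replace the separate `auth del`, `osd rm`, and `osd crush remove` calls shown in this walkthrough.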

7. Release the disks from Ceph on the OSD node

[root@ceph144 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  500G  0 disk 
└─ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
                                                                                                    253:2    0  500G  0 lvm  
sdc                                                                                                   8:32   0  500G  0 disk 
└─ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
                                                                                                    253:3    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
[root@ceph144 ~]# dmsetup status
ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e: 0 1048567808 linear 
centos-swap: 0 4194304 linear 
centos-root: 0 35643392 linear 
ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d: 0 1048567808 linear 
[root@ceph144 ~]# 
[root@ceph144 ~]# dmsetup remove ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
[root@ceph144 ~]# 
[root@ceph144 ~]# dmsetup remove ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
[root@ceph144 ~]# 
[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  500G  0 disk 
sdc               8:32   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]#
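Instead of tearing down the device-mapper entries by hand with `dmsetup`, `ceph-volume` can clean up its own LVM state: `ceph-volume lvm zap --destroy` removes the VG/LV it created and wipes the device so it can be reused. A hedged sketch (`zap_disks` is an invented helper name):

```shell
# Alternative cleanup, run on the OSD node: zap wipes each device and, with
# --destroy, also removes the LVM volumes that ceph-volume created on it.
zap_disks() {
  local dev
  for dev in "$@"; do
    ceph-volume lvm zap "$dev" --destroy
  done
}
```

For example, `zap_disks /dev/sdb /dev/sdc` would leave both disks blank in `lsblk`, as in the output above.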

8. On the admin node, clear the DNE entries by removing the OSDs from the CRUSH map

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7       DNE        0         
 8   hdd 0.48830         osd.8       DNE        0         
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd crush remove osd.7
removed item id 7 name 'osd.7' from crush map
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd crush remove osd.8
removed item id 8 name 'osd.8' from crush map
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9             0     host ceph144                         
[root@ceph141 ~]#

9. Remove the host from the CRUSH map on the admin node

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9             0     host ceph144                         
[root@ceph141 ~]#  
[root@ceph141 ~]# ceph osd crush remove ceph144
removed item id -9 name 'ceph144' from crush map
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
[root@ceph141 ~]#

10. Verify the cluster state: the scale-in has succeeded

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 7 osds: 7 up (since 6m), 7 in (since 8m)

  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   7.8 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     96 active+clean

[root@ceph141 ~]#