1. Check the OSD distribution across the cluster:

# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.21950 root default
-2 0.04390     host hz-01-ops-tc-ceph-01
 0 0.04390         osd.0                       up  1.00000          1.00000
-3 0.04390     host hz-01-ops-tc-ceph-02
 1 0.04390         osd.1                       up  1.00000          1.00000
-4 0.04390     host hz-01-ops-tc-ceph-03
 2 0.04390         osd.2                       up  1.00000          1.00000
-5 0.04390     host hz-01-ops-tc-ceph-04
 3 0.04390         osd.3                       up  1.00000          1.00000
-6 0.04390     host hz01-dev-ops-wanl-01
 4 0.04390         osd.4                       up  1.00000          1.00000
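The `-6` bucket shows that osd.4 lives on host hz01-dev-ops-wanl-01; that is the OSD removed below. If you first need to locate an OSD, `ceph osd find` prints its IP address and CRUSH location (a quick extra check of mine, not part of the original post):

# ceph osd find 4
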
2. Mark osd.4 out of the cluster:

# ceph osd out 4
marked out osd.4.
# ceph -s
    cluster e2ca994a-00c4-477f-9390-ea3f931c5062
     health HEALTH_WARN
            56 pgs degraded
            1 pgs recovering
            55 pgs recovery_wait
            56 pgs stuck unclean
            recovery 604/1692 objects degraded (35.697%)
     monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:6789/0,hz-01-ops-tc-ceph-04=172.16.2.181:6789/0}
            election epoch 20, quorum 0,1,2 hz-01-ops-tc-ceph-03,hz-01-ops-tc-ceph-04,hz-01-ops-tc-ceph-02
     osdmap e89: 5 osds: 5 up, 4 in
            flags sortbitwise,require_jewel_osds
      pgmap v68654: 1172 pgs, 4 pools, 2159 MB data, 564 objects
            5491 MB used, 174 GB / 179 GB avail
            604/1692 objects degraded (35.697%)
                1116 active+clean
                  55 active+recovery_wait+degraded
                   1 active+recovering+degraded
recovery io 87376 kB/s, 1 keys/s, 23 objects/s
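Marking the OSD out starts recovery onto the remaining OSDs, which is why `ceph -s` reports degraded PGs. Wait until all PGs are `active+clean` again before stopping the daemon in the next step. A minimal polling loop (my own sketch, assuming a bash shell; not in the original post):

# Poll every 10 seconds until the cluster reports HEALTH_OK again
while ! ceph health | grep -q HEALTH_OK; do
    sleep 10
done
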
3. Operations on the Ceph node that hosts the OSD being removed:

# systemctl stop ceph-osd@4
# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.21950 root default
-2 0.04390     host hz-01-ops-tc-ceph-01
 0 0.04390         osd.0                       up  1.00000          1.00000
-3 0.04390     host hz-01-ops-tc-ceph-02
 1 0.04390         osd.1                       up  1.00000          1.00000
-4 0.04390     host hz-01-ops-tc-ceph-03
 2 0.04390         osd.2                       up  1.00000          1.00000
-5 0.04390     host hz-01-ops-tc-ceph-04
 3 0.04390         osd.3                       up  1.00000          1.00000
-6 0.04390     host hz01-dev-ops-wanl-01
 4 0.04390         osd.4                     down        0          1.00000
# ceph osd crush remove osd.4
removed item id 4 name 'osd.4' from crush map
# ceph auth del osd.4
updated
# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.17560 root default
-2 0.04390     host hz-01-ops-tc-ceph-01
 0 0.04390         osd.0                       up  1.00000          1.00000
-3 0.04390     host hz-01-ops-tc-ceph-02
 1 0.04390         osd.1                       up  1.00000          1.00000
-4 0.04390     host hz-01-ops-tc-ceph-03
 2 0.04390         osd.2                       up  1.00000          1.00000
-5 0.04390     host hz-01-ops-tc-ceph-04
 3 0.04390         osd.3                       up  1.00000          1.00000
-6       0     host hz01-dev-ops-wanl-01
 4       0         osd.4                     down        0          1.00000
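The second `ceph osd tree` confirms both changes: the root weight has dropped from 0.21950 to 0.17560, and osd.4 no longer carries any CRUSH weight. To verify that the authentication key is really gone as well (an extra check of mine using the standard `ceph auth` CLI, not in the original post), the following command should now fail with an ENOENT error:

# ceph auth get osd.4
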
# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   39G  1.8G   37G   5% /
devtmpfs             486M     0  486M   0% /dev
tmpfs                497M   84K  497M   1% /dev/shm
tmpfs                497M   26M  472M   6% /run
tmpfs                497M     0  497M   0% /sys/fs/cgroup
/dev/vda1           1014M  121M  894M  12% /boot
/dev/mapper/cl-home   19G   33M   19G   1% /home
/dev/vdb1             45G  237M   45G   1% /var/lib/ceph/osd/ceph-4
tmpfs                100M     0  100M   0% /run/user/0
# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0          11:0    1 1024M  0 rom
vda         252:0    0   60G  0 disk
├─vda1      252:1    0    1G  0 part /boot
└─vda2      252:2    0   59G  0 part
  ├─cl-root 253:0    0 38.3G  0 lvm  /
  ├─cl-swap 253:1    0    2G  0 lvm  [SWAP]
  └─cl-home 253:2    0 18.7G  0 lvm  /home
vdb         252:16   0   50G  0 disk
├─vdb1      252:17   0   45G  0 part /var/lib/ceph/osd/ceph-4
└─vdb2      252:18   0    5G  0 part
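Note that `df -h` and `lsblk` still show the data partition /dev/vdb1 mounted at /var/lib/ceph/osd/ceph-4. With the daemon stopped, the mount can be released; `ceph-disk zap` (the Jewel-era provisioning tool) destroys all partitions on the disk, so run it only if the disk is to be redeployed. A sketch, as these commands do not appear in the original capture:

# umount /var/lib/ceph/osd/ceph-4
# ceph-disk zap /dev/vdb
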
# ceph -s
    cluster e2ca994a-00c4-477f-9390-ea3f931c5062
     health HEALTH_OK
     monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:6789/0,hz-01-ops-tc-ceph-04=172.16.2.181:6789/0}
            election epoch 20, quorum 0,1,2 hz-01-ops-tc-ceph-03,hz-01-ops-tc-ceph-04,hz-01-ops-tc-ceph-02
     osdmap e79: 4 osds: 4 up, 4 in
            flags sortbitwise,require_jewel_osds
      pgmap v68497: 1164 pgs, 3 pools, 2159 MB data, 564 objects
            6655 MB used, 173 GB / 179 GB avail
                1164 active+clean
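One command is missing from the capture: the final `ceph -s` above already reports only 4 OSDs, which means osd.4 was also deleted from the osdmap at some point. The standard last step of the Jewel removal procedure is:

# ceph osd rm 4

After this, `ceph osd tree` no longer lists osd.4 at all.
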
This article was reposted from 冰冻vs西瓜's 51CTO blog. Original link: http://blog.51cto.com/molewan/2063598. Please contact the original author before republishing.