Commands used:
ceph osd getcrushmap -o map_old        # export the current crushmap
crushtool -d map_old -o map_old.txt    # decompile it into editable text
crushtool -c map_new.txt -o map_new    # compile the edited text back into a binary map
ceph osd setcrushmap -i map_new        # inject the new map into ceph
Procedure for changing the CRUSH map:
1. Edit the configuration file so that ceph does not update the crushmap automatically when OSDs start
echo 'osd_crush_update_on_start = false' >> /etc/ceph/ceph.conf
/etc/init.d/ceph restart
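Appending to the end of ceph.conf only takes effect if the last section of the file applies to OSDs (e.g. [global] or [osd]), so it is worth verifying the option after the restart. A minimal check via the OSD admin socket, assuming osd.0 runs on the local host:

ceph daemon osd.0 config get osd_crush_update_on_start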
2. Export the map and convert it to an editable format
ceph osd getcrushmap -o map_old
crushtool -d map_old -o map_old.txt
cp map_old.txt map_new.txt
3. Edit map_new.txt
For example:
##################sas
host node1-sas {
        id -2           # do not change unnecessarily
        # weight 0.120
        alg straw
        hash 0          # rjenkins1
        item osd.0 weight 0.040
        item osd.1 weight 0.040
}
host node2-sas {
        id -3           # do not change unnecessarily
        # weight 0.120
        alg straw
        hash 0          # rjenkins1
        item osd.3 weight 0.040
        item osd.4 weight 0.040
}
##################ssd
host node1-ssd {
        id -5           # do not change unnecessarily
        # weight 0.120
        alg straw
        hash 0          # rjenkins1
        item osd.2 weight 0.040
}
host node2-ssd {
        id -6           # do not change unnecessarily
        # weight 0.120
        alg straw
        hash 0          # rjenkins1
        item osd.5 weight 0.040
}
#################pool
root sas-pool {
        id -1           # do not change unnecessarily
        # weight 0.360
        alg straw
        hash 0          # rjenkins1
        item node1-sas weight 0.080
        item node2-sas weight 0.080
}
root ssd-pool {
        id -8           # do not change unnecessarily
        # weight 0.360
        alg straw
        hash 0          # rjenkins1
        item node1-ssd weight 0.040
        item node2-ssd weight 0.040
}
##################rule
rule sas {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take sas-pool
        step choose firstn 0 type osd
        step emit
}
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd-pool
        step choose firstn 0 type osd
        step emit
}
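Before injecting the edited map, it can be dry-run offline with crushtool's built-in tester; a quick sanity check, where --rule refers to the ruleset numbers defined above and --num-rep to the intended replica count:

crushtool -c map_new.txt -o map_new
crushtool -i map_new --test --rule 0 --num-rep 2 --show-mappings
crushtool -i map_new --test --rule 1 --num-rep 2 --show-utilization

If rule 0 maps only to osd.0/1/3/4 and rule 1 only to osd.2/5, the sas/ssd split is wired up as intended.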
4. Compile the modified crushmap and inject it into the cluster
crushtool -c map_new.txt -o map_new
ceph osd setcrushmap -i map_new
ceph osd tree
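ceph osd tree should now show the two roots (sas-pool and ssd-pool) with their hosts beneath them. The installed rules can also be inspected directly:

ceph osd crush rule ls
ceph osd crush rule dump sas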
5. Create pools that use the newly created rules
ceph osd pool create sas 128 128
ceph osd pool create ssd 128 128
ceph osd pool set sas crush_ruleset 0
ceph osd pool set ssd crush_ruleset 1
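Note that crush_ruleset was removed in Ceph Luminous; on Luminous and later the option is called crush_rule and takes the rule name rather than the ruleset number:

ceph osd pool set sas crush_rule sas
ceph osd pool set ssd crush_rule ssd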
6. Re-authorize the pools (if the cluster is integrated with OpenStack)
ceph auth del client.cinder
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images, allow rwx pool=ssd'
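Since ceph auth del destroys the old key, get-or-create issues a fresh one, and the new keyring has to be redistributed to the OpenStack nodes. To review the resulting key and capabilities:

ceph auth get client.cinder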
Procedure for changing Monitor IPs:
1. Fetch the monmap and inspect it
ceph mon getmap -o map
monmaptool --print map
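The printed map looks roughly like the following (epoch, fsid, timestamps and addresses are placeholders here, following the same newX_ip/oldX_ip convention used in step 4 below):

monmaptool: monmap file map
epoch 1
fsid <cluster fsid>
last_changed <timestamp>
created <timestamp>
0: old1_ip:6789/0 mon.node1
1: old2_ip:6789/0 mon.node2
2: old3_ip:6789/0 mon.node3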
2. Remove the old entries from the map and add the new ones
monmaptool --rm node1 --rm node2 --rm node3 map
monmaptool --add node1 10.0.2.21:6789 --add node2 10.0.2.22:6789 --add node3 10.0.2.23:6789 map
monmaptool --print map
3. Copy the new map to all mon nodes
scp map node2:~
scp map node3:~
4. Update mon_host in /etc/ceph/ceph.conf (run on all mon nodes)
vim /etc/ceph/ceph.conf
# change the mon_host line to:
mon_host = new1_ip:6789,new2_ip:6789,new3_ip:6789
5. Stop the mon daemon (run on all mon nodes)
/etc/init.d/ceph stop mon
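--inject-monmap writes directly into the monitor's on-disk store, so make sure no ceph-mon process is still running before the next step:

ps -ef | grep ceph-mon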
6. Inject the new monmap (run on all mon nodes)
ceph-mon -i node1 --inject-monmap map    # on each node, use that node's own mon id
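To confirm what a monitor actually has on disk, the inverse operation can be used while the mon is still stopped (/tmp/check is just a scratch path):

ceph-mon -i node1 --extract-monmap /tmp/check
monmaptool --print /tmp/check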
7. Start the mon daemon (run on all mon nodes)
/etc/init.d/ceph start mon
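Once all three mons are back up, confirm that they form quorum on the new addresses:

ceph mon stat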
This article is reposted from Jacken_yang's 51CTO blog. Original link: http://blog.51cto.com/linuxnote/1790758. Please contact the original author before republishing.