Operations on the Ceph admin node:
# rbd ls -p openshift
openshift-storage01
# rbd resize openshift/openshift-storage01 --size 20000
Resizing image: 100% complete...done.
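To confirm on the admin side that the image really picked up the new size, the image metadata can be queried directly. A minimal check, using the same pool/image names as above (the exact output format varies slightly between Ceph releases):

# rbd info openshift/openshift-storage01
# rbd du -p openshift

rbd info reports the provisioned size, which should now read 20000 MB, while rbd du (on recent Ceph releases) lists provisioned versus actually used space for each image in the pool.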
Operations on the Ceph client:
# rbd showmapped
id pool      image               snap device
0  openshift openshift-storage01 -    /dev/rbd0
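The image in this example is already mapped as /dev/rbd0 and mounted at /cephmount. On a client where that has not been done yet, the device would first be created and mounted along these lines (assuming the client holds a keyring with access to the openshift pool):

# rbd map openshift/openshift-storage01
# mount /dev/rbd0 /cephmount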
# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root  179G  5.7G  174G   4% /
devtmpfs             1.9G     0  1.9G   0% /dev
tmpfs                1.9G  316K  1.9G   1% /dev/shm
tmpfs                1.9G   49M  1.9G   3% /run
tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1            297M  122M  176M  41% /boot
tmpfs                380M   16K  380M   1% /run/user/42
tmpfs                380M   48K  380M   1% /run/user/0
/dev/vdb            1008G   92G  866G  10% /mnt/backup
tmpfs                380M     0  380M   0% /run/user/1001
/dev/rbd0             10G  2.0G  8.1G  20% /cephmount
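df still shows 10G for /dev/rbd0 because only the RBD image has been grown so far, not the filesystem on top of it. Before running xfs_growfs it is worth confirming that the client kernel already sees the enlarged block device, for example with standard util-linux tools:

# blockdev --getsize64 /dev/rbd0
# lsblk /dev/rbd0

If the reported size is still the old one, unmounting the filesystem and remapping the image (rbd unmap followed by rbd map) is one way to refresh it.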
# xfs_growfs -d /cephmount/
meta-data=/dev/rbd0              isize=512    agcount=17, agsize=162816 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 5120000
Note: the capacity has been expanded (the XFS data block count grew from 2621440 to 5120000, matching the new 20000 MB image size).
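A quick follow-up check from the mount point confirms the new size (the commands below are a sketch of what one would run, not output taken from the original post):

# df -h /cephmount
# xfs_info /cephmount

df -h should now show roughly 20G for /dev/rbd0, and xfs_info should report the same 5120000 data blocks that xfs_growfs printed.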
# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    224G     212G      12736M       5.53
POOLS:
    NAME               ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd                0      422       0         68433M        7
    openstack          1      2159M     3.06      68433M        554
    cloudstack-test    2      194       0         68433M        6
    test               3      0         0         68433M        0
    openshift          4      2014M     2.86      68433M        539
This article is reposted from the 51CTO blog of 冰冻vs西瓜. Original link: http://blog.51cto.com/molewan/2063761. Please contact the original author if you wish to republish it.