On CentOS or RHEL, Ceph services can be managed with the ceph sysv init script, which covers the mds, osd, and mon daemons.
The relevant parts of the script look like this:
# less /etc/init.d/ceph
usage_exit() {
    echo "usage: $0 [options] {start|stop|restart|condrestart} [mon|osd|mds]..."
    printf "\t-c ceph.conf\n"
    printf "\t--cluster [cluster name]\tdefine the cluster name\n"
    printf "\t--valgrind\trun via valgrind\n"
    printf "\t--hostname [hostname]\toverride hostname lookup\n"
    exit
}

for name in $what; do
    type=`echo $name | cut -c 1-3`   # e.g. 'mon', if $item is 'mon1'
    id=`echo $name | cut -c 4- | sed 's/^\\.//'`
    num=$id
    name="$type.$id"
For example, to manage the osd.0 daemon, use a command like:
service ceph start osd.0
To manage mon1, for example:
sudo /etc/init.d/ceph start mon.mon1
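The same script can also act on every daemon configured in ceph.conf. A minimal sketch (the -a/--allhosts behavior is that of the stock sysv script; verify on your version):

# with no daemon name, the script manages all local daemons from ceph.conf
service ceph start
# with -a (--allhosts), it also reaches the other hosts named in ceph.conf
service ceph -a restart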
Both mon and osd daemons use an on-disk data directory, and the osd data directory in particular may be read and written heavily.
Where do these directories live by default? As the /etc/init.d/ceph script shows, they all sit under /var/lib/ceph, for example:
OSD : /var/lib/ceph/osd/$cluster-$id
MON : /var/lib/ceph/mon/ceph-$id
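To see which value a daemon actually resolves, you can query the configuration the same way the script's get_conf helper does (see the reference at the end). A sketch, assuming the stock ceph-conf tool:

# print the effective data paths for osd.0 and mon.mon1
ceph-conf -c /etc/ceph/ceph.conf -n osd.0 "osd data"
ceph-conf -c /etc/ceph/ceph.conf -n mon.mon1 "mon data"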
OSD :
[root@osd1 ~]# cd /var/lib/ceph/osd/
[root@osd1 osd]# ll
total 4
drwxr-xr-x 3 root root 4096 Dec 2 09:00 ceph-0
[root@osd1 osd]# cd ceph-0/
[root@osd1 ceph-0]# ll
total 1048616
-rw-r--r-- 1 root root 37 Dec 2 08:52 ceph_fsid
drwxr-xr-x 60 root root 4096 Dec 2 09:45 current
-rw-r--r-- 1 root root 37 Dec 2 08:52 fsid
-rw-r--r-- 1 root root 1073741824 Dec 3 09:45 journal
-rw------- 1 root root 56 Dec 2 08:52 keyring
-rw-r--r-- 1 root root 21 Dec 2 08:52 magic
-rw-r--r-- 1 root root 6 Dec 2 08:52 ready
-rw-r--r-- 1 root root 4 Dec 2 08:52 store_version
-rw-r--r-- 1 root root 53 Dec 2 08:52 superblock
-rw-r--r-- 1 root root 0 Dec 2 08:59 sysvinit
-rw-r--r-- 1 root root 2 Dec 2 08:52 whoami
MON :
[root@mon1 ~]# cd /var/lib/ceph/mon/
[root@mon1 mon]# cd ceph-mon1
[root@mon1 ceph-mon1]# ll
total 8
-rw-r--r-- 1 root root 0 Dec 1 15:52 done
-rw-r--r-- 1 root root 77 Dec 1 15:52 keyring
drwxr-xr-x 2 root root 4096 Dec 1 17:19 store.db
-rw-r--r-- 1 root root 0 Dec 1 15:52 sysvinit
For better performance, we generally recommend:
Put the OSD journal on an SSD.
Put the OSD data directory on a regular spinning disk, separate from the disk used by the operating system.
Put the mon data directory on a disk that is, at minimum, separate from the operating system's.
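A possible disk layout for the walkthrough below (device names and mount points here are assumptions; adjust to your hardware):

# /dev/sdb: spinning disk for mon/osd data, /dev/sdc: SSD for the osd journal
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1
mkdir -p /data01 /data02
mount /dev/sdb1 /data01
mount /dev/sdc1 /data02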
Example changes follow, first for the MON, then for the OSD:
MON
Stop the service
[root@mon1 ~]# service ceph stop mon.mon1
=== mon.mon1 ===
Stopping Ceph mon.mon1 on mon1...kill 18469...done
Move the data directory to another disk partition
[root@mon1 ~]# mv /var/lib/ceph/mon/ceph-mon1 /data01/
[root@mon1 ~]# ln -s /data01/ceph-mon1 /var/lib/ceph/mon/
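The daemon now follows a symlink onto another filesystem, so make sure that filesystem is mounted before ceph starts at boot. A sketch (the device name is an assumption):

# persist the mount across reboots
echo '/dev/sdb1  /data01  xfs  defaults  0 0' >> /etc/fstab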
Restart the service
[root@mon1 ~]# service ceph start mon.mon1
=== mon.mon1 ===
Starting Ceph mon.mon1 on mon1...
Starting ceph-create-keys on mon1...
Check
[root@mon1 ~]# ceph -s
 cluster f649b128-963c-4802-ae17-5a76f36c4c76
  health HEALTH_OK
  monmap e1: 3 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0}, election epoch 16, quorum 0,1,2 mon1,mon2,mon3
  osdmap e32: 4 osds: 4 up, 4 in
   pgmap v202: 128 pgs, 1 pools, 0 bytes data, 0 objects
         6435 MB used, 31257 MB / 39805 MB avail
              128 active+clean
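Besides ceph -s, you can confirm that the restarted monitor rejoined the quorum:

# the JSON output should list mon1, mon2 and mon3 under quorum_names
ceph quorum_status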
OSD
Stop the service
[root@osd1 ~]# service ceph stop osd.0
=== osd.0 ===
Stopping Ceph osd.0 on osd1...kill 1776...kill 1776...done
Move the data directory
[root@osd1 ~]# cd /var/lib/ceph/osd
[root@osd1 osd]# ll
total 4
drwxr-xr-x 3 root root 4096 Dec 2 09:00 ceph-0
[root@osd1 osd]# mv ceph-0 /data01/
[root@osd1 osd]# ln -s /data01/ceph-0 /var/lib/ceph/osd/
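Before restarting, it is worth confirming that the symlink resolves to the new location:

# expect: ceph-0 -> /data01/ceph-0
ls -l /var/lib/ceph/osd/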
Move the journal
[root@osd1 ceph-0]# cd /var/lib/ceph/osd/ceph-0/
[root@osd1 ceph-0]# ll
total 1048612
-rw-r--r-- 1 root root 37 Dec 2 08:52 ceph_fsid
drwxr-xr-x 60 root root 4096 Dec 2 09:45 current
-rw-r--r-- 1 root root 37 Dec 2 08:52 fsid
-rw-r--r-- 1 root root 1073741824 Dec 3 09:45 journal
-rw------- 1 root root 56 Dec 2 08:52 keyring
-rw-r--r-- 1 root root 21 Dec 2 08:52 magic
-rw-r--r-- 1 root root 6 Dec 2 08:52 ready
-rw-r--r-- 1 root root 4 Dec 2 08:52 store_version
-rw-r--r-- 1 root root 53 Dec 2 08:52 superblock
-rw-r--r-- 1 root root 0 Dec 2 08:59 sysvinit
-rw-r--r-- 1 root root 2 Dec 2 08:52 whoami
[root@osd1 ceph-0]# mv journal /data02/
[root@osd1 ceph-0]# ln -s /data02/journal ./
[root@osd1 ceph-0]# ll
total 36
-rw-r--r-- 1 root root 37 Dec 2 08:52 ceph_fsid
drwxr-xr-x 60 root root 4096 Dec 2 09:45 current
-rw-r--r-- 1 root root 37 Dec 2 08:52 fsid
lrwxrwxrwx 1 root root 15 Dec 3 11:20 journal -> /data02/journal
-rw------- 1 root root 56 Dec 2 08:52 keyring
-rw-r--r-- 1 root root 21 Dec 2 08:52 magic
-rw-r--r-- 1 root root 6 Dec 2 08:52 ready
-rw-r--r-- 1 root root 4 Dec 2 08:52 store_version
-rw-r--r-- 1 root root 53 Dec 2 08:52 superblock
-rw-r--r-- 1 root root 0 Dec 2 08:59 sysvinit
-rw-r--r-- 1 root root 2 Dec 2 08:52 whoami
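Because the OSD was stopped cleanly, the journal file can simply be moved as above. An alternative, often used when switching to a raw SSD partition, is to flush and recreate the journal; a sketch using the stock ceph-osd flags:

# write any outstanding journal entries into the object store
ceph-osd -i 0 --flush-journal
# point the osd at the new journal location and initialize it
rm -f /var/lib/ceph/osd/ceph-0/journal
ln -s /data02/journal /var/lib/ceph/osd/ceph-0/journal
ceph-osd -i 0 --mkjournal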
Start the service
[root@osd1 ceph-0]# service ceph start osd.0
=== osd.0 ===
create-or-move updated item name 'osd.0' weight 0.4 at location {host=osd1,root=default} to crush map
Starting Ceph osd.0 on osd1...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Check the status
[root@osd1 ceph-0]# ceph -s
 cluster f649b128-963c-4802-ae17-5a76f36c4c76
  health HEALTH_OK
  monmap e1: 3 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0}, election epoch 16, quorum 0,1,2 mon1,mon2,mon3
  osdmap e36: 4 osds: 4 up, 4 in
   pgmap v211: 128 pgs, 1 pools, 0 bytes data, 0 objects
         12288 MB used, 424 GB / 438 GB avail
              128 active+clean
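You can also confirm osd.0 came back up and kept its weight in the crush map:

# osd.0 should show as 'up' under host osd1
ceph osd tree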
Of course, you can also change these locations by editing the Ceph cluster configuration file.
See:
src/sample.ceph.conf
# The monitor's data location
# Default: /var/lib/ceph/mon/$cluster-$id
;mon data = /var/lib/ceph/mon/$name
# Default: /var/lib/ceph/osd/$cluster-$id
;osd data = /var/lib/ceph/osd/$name
# Default: /var/lib/ceph/osd/$cluster-$id/journal
;osd journal = /var/lib/ceph/osd/$name/journal
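For example, to pin the paths used in this walkthrough via /etc/ceph/ceph.conf instead of symlinks (section names match the daemons above; treat this as a sketch):

[mon.mon1]
    mon data = /data01/ceph-mon1

[osd.0]
    osd data = /data01/ceph-0
    osd journal = /data02/journal

Note that the init script only launches ceph-create-keys when mon data is at its default path, as the excerpt in the reference below shows.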
[Reference]
1. /etc/init.d/ceph
get_conf osd_data "/var/lib/ceph/osd/$cluster-$id" "osd data"

get_conf mon_data "/var/lib/ceph/mon/ceph-$id" "mon data"

if [ "$mon_data" = "/var/lib/ceph/mon/ceph-$id" -a "$asok" = "/var/run/ceph/ceph-mon.$id.asok" ]; then
    echo Starting ceph-create-keys on $host...
    cmd2="$SBINDIR/ceph-create-keys --cluster $cluster -i $id 2> /dev/null &"
    do_cmd "$cmd2"
fi