V 6 iSCSI & cLVM & gfs2

Introduction:

gfs2 (Global File System version 2): a cluster file system (CFS) that uses the HA stack's messaging layer to announce to the other nodes which locks it holds.

cLVM (Clustered Logical Volume Manager): builds logical volumes on shared storage, borrowing the HA heartbeat/transport mechanism (the communication layer and its split-brain handling). Every node must run the clvmd service (cman and rgmanager have to be started before it) so the nodes can talk to one another.

Prepare four nodes (node{1,2,3} consume the shared storage; node4 provides the shared storage and doubles as a jump host).

On node{1,2,3}: prepare the yum repositories, synchronize time, and set the node names and /etc/hosts; node4 must have passwordless SSH trust with node{1,2,3}.
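A minimal sketch of that prerequisite setup, run from node4 (the hostnames and the NTP source are assumptions inferred from the IP addresses used below; adjust to your environment):

[root@node4 ~]# cat >> /etc/hosts << EOF
192.168.41.131 node1.magedu.com node1
192.168.41.132 node2.magedu.com node2
192.168.41.133 node3.magedu.com node3
192.168.41.134 node4.magedu.com node4
EOF
[root@node4 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
[root@node4 ~]# for I in {1..3}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@node$I; done
[root@node4 ~]# for I in {1..3}; do scp /etc/hosts node$I:/etc/hosts; done
[root@node4 ~]# for I in {1..3}; do ssh node$I 'ntpdate 192.168.41.134'; done   # assumes node4 runs ntpd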

 

1) Prepare the shared storage

node4-side

[root@node4 ~]# vim /etc/tgt/targets.conf

default-driver iscsi

<target iqn.2015-07.com.magedu:teststore.disk1>

   <backing-store /dev/sdb>

       vendor_id magedu

       lun 1

   </backing-store>

   incominguser iscsi iscsi

   initiator-address 192.168.41.131

   initiator-address 192.168.41.132

   initiator-address 192.168.41.133

</target>

[root@node4 ~]# service tgtd restart

[root@node4 ~]# netstat -tnlp (tgtd is listening on 3260/tcp)

[root@node4 ~]# tgtadm --lld iscsi --mode target --op show

……

LUN: 1

……

   Account information:

       iscsi

   ACL information:

       192.168.41.131

       192.168.41.132

       192.168.41.133

[root@node4 ~]# alias ha='for I in {1..3}; do ssh node$I ' (note the trailing space inside the quotes, so the quoted remote command that follows stays a separate argument)

[root@node4 ~]# ha 'rm -rf /var/lib/iscsi/send_targets/*';done
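The alias deliberately leaves the for loop open: each invocation supplies the quoted remote command and closes the loop with ;done, as the line above does. A quick usage check:

[root@node4 ~]# ha 'date';done    # runs date on node1, node2 and node3 in turn, handy for verifying time sync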

 

node{1,2,3}-side

[root@node1 ~]# vim /etc/iscsi/iscsid.conf

node.session.auth.authmethod = CHAP

node.session.auth.username = iscsi

node.session.auth.password = iscsi
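These CHAP settings must be identical on all three initiators. One way (a sketch, assuming the file was edited on node1 first and relayed through node4, which holds the SSH trust) is:

[root@node4 ~]# scp node1:/etc/iscsi/iscsid.conf /tmp/iscsid.conf
[root@node4 ~]# for I in 2 3; do scp /tmp/iscsid.conf node$I:/etc/iscsi/iscsid.conf; done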

 

node4-side

[root@node4 ~]# ha 'service iscsi restart';done

[root@node4 ~]# ha 'iscsiadm -m discovery -t st -p 192.168.41.134';done

[root@node4 ~]# ha 'iscsiadm -m node -T iqn.2015-07.com.magedu:teststore.disk1 -p 192.168.41.134 -l';done

[root@node1 ~]# fdisk -l

Disk /dev/sdb: 10.7 GB, 10737418240 bytes

 

2) Install cman, rgmanager, gfs2-utils, lvm2-cluster

node4-side

[root@node4 ~]# for I in {1..3};do scp /root/{cman*,rgmanager*,gfs2-utils*,lvm2-cluster*} node$I:/root/;ssh node$I 'yum -y --nogpgcheck localinstall /root/*.rpm';done

 

node1-side

[root@node1 ~]# ccs_tool create tcluster

[root@node1 ~]# ccs_tool addfence meatware fence_manual

[root@node1 ~]# ccs_tool addnode -v 1 -n 1 -f meatware node1.magedu.com

[root@node1 ~]# ccs_tool addnode -v 1 -n 2 -f meatware node2.magedu.com

[root@node1 ~]# ccs_tool addnode -v 1 -n 3 -f meatware node3.magedu.com

[root@node1 ~]# service cman start (on the very first start/initialization, it is best to use the system-config-cluster tool to change the multicast address so it does not collide with another cluster's default; otherwise this cluster will receive sync traffic from the other cluster and fail to start properly. Alternatively, copy node1's /etc/cluster/cluster.conf to the other nodes before starting them.)
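If you would rather edit the file than use system-config-cluster, the multicast address lives under the <cman> element of /etc/cluster/cluster.conf. A sketch (the 239.192.100.1 address is an arbitrary site-local example, not a value from this setup):

<cman>
   <multicast addr="239.192.100.1"/>
</cman>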

[root@node1 ~]# clustat

 node1.magedu.com                                                   1 Online, Local

 node2.magedu.com                                                   2 Online

 node3.magedu.com                                                   3 Online

 

node2-side

[root@node2 ~]# service cman start

 

node3-side

[root@node3 ~]# service cman start

 

3) cLVM configuration:

node1-side

[root@node1 ~]# rpm -ql lvm2-cluster

/etc/rc.d/init.d/clvmd

/usr/sbin/clvmd

/usr/sbin/lvmconf

[root@node1 ~]# vim /etc/lvm/lvm.conf (this file must be changed on every node)

locking_type = 3 (Type 3 uses built-in clustered locking; change this value from 1 to 3. The default 1 means local file-based locking: "Defaults to local file-based locking".)
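Since lvm2-cluster ships /usr/sbin/lvmconf (see the rpm -ql listing above), the same change can be applied to every node from node4 with the ha alias instead of editing each file by hand (a sketch):

[root@node4 ~]# ha 'lvmconf --enable-cluster';done    # sets locking_type = 3 in /etc/lvm/lvm.conf
[root@node4 ~]# ha 'grep locking_type /etc/lvm/lvm.conf';done    # verify on all three nodes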

 

node4-side

[root@node4 ~]# ha 'service clvmd start';done

 

node1-side

[root@node1 ~]# pvcreate /dev/sdb

 Writing physical volume data to disk "/dev/sdb"

 Physical volume "/dev/sdb" successfully created

[root@node1 ~]# pvs (also visible on the other nodes)

 PV         VG         Fmt Attr PSize  PFree

 /dev/sdb              lvm2 a--  10.00G 10.00G

[root@node1 ~]# vgcreate clustervg /dev/sdb

 Clustered volume group "clustervg" successfully created

[root@node1 ~]# vgs

 VG         #PV #LV #SN Attr   VSize VFree

 clustervg    1   0   0 wz--nc 10.00G 10.00G

[root@node1 ~]# lvcreate -L 5G -n clusterlv clustervg

 Logical volume "clusterlv" created

[root@node1 ~]# lvs

 LV        VG         Attr  LSize  Origin Snap%  Move Log Copy%  Convert

 clusterlv clustervg  -wi-a-  5.00G    

 

4) gfs2 configuration:

node1-side

[root@node1 ~]# rpm -ql gfs2-utils

/etc/rc.d/init.d/gfs2

/sbin/fsck.gfs2

/sbin/gfs2_convert

/sbin/gfs2_edit

/sbin/gfs2_fsck

/sbin/gfs2_grow

/sbin/gfs2_jadd

/sbin/gfs2_quota

/sbin/gfs2_tool

/sbin/mkfs.gfs2

/sbin/mount.gfs2

/sbin/umount.gfs2

[root@node1 ~]# mkfs.gfs2 -h

#mkfs.gfs2 OPTIONS  DEVICE

Options:

-b #: block size; default 4096 bytes

-D: Enable debugging output

-j NUMBER: The number of journals for gfs2_mkfs to create; create one per node that will mount the filesystem; default is 1

-J #: The size of the journals in megabytes; default 128MB

-p NAME: lock protocol name (the name of the locking protocol to use; there are two, usually lock_dlm; lock_nolock is for a single node, though with only one node a plain local FS would do and a cluster FS is unnecessary)

-t NAME: the lock table field appropriate to the lock module you're using; format is CLUSTERNAME:LOCKTABLENAME. clustername is the name of the cluster this node belongs to; locktablename must be unique within the cluster. A single cluster can use several cluster file systems, and the lock table name identifies which locks a node holds on which cluster filesystem.

 

[root@node1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t tcluster:lktb1 /dev/clustervg/clusterlv (formatting a cluster filesystem is quite slow)

This will destroy any data on /dev/clustervg/clusterlv.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clustervg/clusterlv

Blocksize:                 4096

Device Size                5.00 GB (1310720 blocks)

Filesystem Size:           5.00 GB (1310718 blocks)

Journals:                  3

Resource Groups:           20

Locking Protocol:          "lock_dlm"

Lock Table:                "tcluster:lktb1"

UUID:                     D8B10B8F-7EE2-A818-E392-0DF218411F2C

 

[root@node1 ~]# mkdir /mydata

[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

 

node2-side

[root@node2 ~]# mkdir /mydata

[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node2 ~]# ls /mydata

[root@node2 ~]# touch /mydata/b.txt

[root@node2 ~]# ls /mydata

b.txt

 

node3-side

[root@node3 ~]# mkdir /mydata

[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node3 ~]# touch /mydata/c.txt

[root@node3 ~]# ls /mydata

b.txt c.txt

 

node1-side

[root@node1 ~]# ls /mydata

b.txt c.txt

 

Note: on a CFS, each node's operations are flushed to disk immediately and announced to the other nodes, which hurts performance badly.
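To make the mounts persist across reboots, and to soften the cost just described a little, the filesystem can be listed in /etc/fstab with noatime so plain reads do not trigger cluster-wide inode updates (a sketch for each of node{1,2,3}; the noatime choice is a general GFS2 recommendation, not part of the original setup):

/dev/clustervg/clusterlv  /mydata  gfs2  defaults,noatime  0 0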

 

5) Debugging and tuning:

[root@node1 ~]# gfs2_tool -h (interface to gfs2 ioctl/sysfs calls)

#gfs2_tool df|journals|gettune|freeze|unfreeze|getargs  MOUNT_POINT

#gfs2_tool list

 

[root@node1 ~]# gfs2_tool list (List the currently mounted GFS2 filesystems)

253:2 tcluster:lktb1

 

[root@node1 ~]# gfs2_tool journals /mydata (Print out information about the journals in a mounted filesystem)

journal2 - 128MB

journal1 - 128MB

journal0 - 128MB

3 journal(s) found.

 

[root@node1 ~]# gfs2_tool df /mydata

/mydata:

  SB lock proto = "lock_dlm"

  SB lock table = "tcluster:lktb1"

  SB ondisk format = 1801

  SB multihost format = 1900

 Block size = 4096

 Journals = 3

 Resource Groups = 20

 Mounted lock proto = "lock_dlm"

 Mounted lock table = "tcluster:lktb1"

 Mounted host data = "jid=0:id=196609:first=1"

 Journal number = 0

 Lock module flags = 0

 Local flocks = FALSE

 Local caching = FALSE

 Type           Total Blocks   Used Blocks    Free Blocks    use%          

 ------------------------------------------------------------------------

 data           1310564        99293          1211271        8%

 inodes         1211294        23             1211271        0%

 

[root@node1 ~]# gfs2_tool freeze /mydata (Freeze (quiesce) a GFS2 cluster; any node's operations on the CFS will hang until unfreeze)

 

[root@node1 ~]# gfs2_tool getargs /mydata

statfs_percent 0

data 2

suiddir 0

quota 0

posix_acl 0

upgrade 0

debug 0

localflocks 0

localcaching 0

ignore_local_fs 0

spectator 0

hostdata jid=0:id=196609:first=1

locktable

lockproto

 

[root@node1 ~]# gfs2_tool gettune /mydata (Print out the current values of the tuning parameters in a running filesystem; to adjust one, use settune with the directive and value right after the mount point, e.g. #gfs2_tool settune /mydata new_files_directio=1)

new_files_directio = 0

new_files_jdata = 0

quota_scale = 1.0000   (1, 1)

logd_secs = 1

recoverd_secs = 60

statfs_quantum = 30

stall_secs = 600

quota_cache_secs = 300

quota_simul_sync = 64

statfs_slow = 0

complain_secs = 10

max_readahead = 262144

quota_quantum = 60

quota_warn_period = 10

jindex_refresh_secs = 60

log_flush_secs = 60

incore_log_blocks = 1024
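Applying the settune example mentioned above and reading the value back (a usage sketch; the last line is what gettune should report after the change):

[root@node1 ~]# gfs2_tool settune /mydata new_files_directio=1
[root@node1 ~]# gfs2_tool gettune /mydata | grep new_files_directio
new_files_directio = 1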

 

[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv (add journals; 1 is the number of journals to add, an increment, not the total node count; if more nodes join the cluster, add journals for them with gfs2_jadd)
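After the add, gfs2_tool journals should report one more journal than before (expected output sketched from the three-journal listing shown earlier):

[root@node1 ~]# gfs2_tool journals /mydata
journal3 - 128MB
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
4 journal(s) found.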

 

[root@node1 ~]# lvextend -L 8G /dev/clustervg/clusterlv (extend the size of a logical volume; think of it as extending the physical boundary)

 Extending logical volume clusterlv to 8.00 GB

 Logical volume clusterlv successfully resized

[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv (Expand a GFS2 filesystem; think of it as extending the logical boundary. Do not skip this step; it is essential.)

FS: Mount Point: /mydata

FS: Device:      /dev/mapper/clustervg-clusterlv

FS: Size:        1310718 (0x13fffe)

FS: RG size:     65533 (0xfffd)

DEV: Size:       2097152 (0x200000)

The file system grew by 3072MB.

Error fallocating extra space : File too large

gfs2_grow complete.
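A quick check that the mounted filesystem now reflects the new size (a sketch; exact usage numbers will differ):

[root@node1 ~]# df -h /mydata    # should now report a roughly 8G filesystem
[root@node1 ~]# gfs2_tool df /mydata    # block counts should match the DEV size printed above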

 

[root@node1 ~]# lvresize -L -3G /dev/clustervg/clusterlv (reduce the logical volume size; be careful here: gfs2_grow can only grow a filesystem, GFS2 cannot be shrunk, so reducing the LV under a live GFS2 filesystem risks destroying it)

[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv

[root@node1 ~]# lvs

 LV        VG         Attr  LSize  Origin Snap%  Move Log Copy%  Convert

 clusterlv clustervg  -wi-ao  5.00G  

 

 

This article was reposted from chaijowin's 51CTO blog. Original link: http://blog.51cto.com/jowin/1726253. Please contact the original author for reprint permission.
