drbd (Distributed Replicated Block Device)


一、Related concepts:

drbd (distributed replicated block device, www.linbit.com): mirrors a disk partition on one host to a partition on another host over the network, at the block level; think of it as host-level RAID-1. By default the device can be mounted on only one node, while the secondary node must not mount or use it.

 

DAS (direct-attached storage: the SCSI bus supports 7-16 devices; a controller is a chip built into the motherboard, an adapter is a card in a PCI or PCI-E slot, e.g. a RAID card)

NAS (file server)

SAN (SCSI packets carried over FC or a TCP/IP tunnel, allowing long-distance transport)

In some scenarios none of DAS, NAS, or SAN is suitable, and DRBD is used instead.

 

RAID-1 (mirror): the two disks are the same size and correspond to each other bit for bit; crucially, both disks sit in the same host.

 

drbd is a kernel facility, a kernel module similar to ipvs: it is loaded into the kernel when needed and its rules are then enabled. drbd turns two disk partitions on two different hosts into one mirrored device, pairing the two partitions bit for bit across the network.

 

drbd has primary/secondary roles:

At any moment only the current primary node may mount the device and perform reads and writes; the secondary must never mount it (that is, at a given moment one of the two nodes is primary and the other is necessarily secondary);

Only two nodes are allowed (either primary/secondary or dual-primary);

As a primary/secondary resource it can fail over automatically.

 

drbd supports a dual-primary (dual master) model in which both nodes may mount the device at the same time, provided it is built on a cluster filesystem (CFS) with a distributed lock manager (DLM), such as OCFS2 or GFS2. Dual-primary does not improve performance (it is not parallel writing; a conflicting write from the other node is blocked), it only lets multiple hosts use the same FS.
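For reference, in drbd 8.x the dual-primary model is switched on in the resource's net section; a minimal sketch (the resource name r0 is illustrative):

```
resource r0 {
  net {
    allow-two-primaries;       # both nodes may hold the primary role
  }
  startup {
    become-primary-on both;    # optional: promote both nodes at startup
  }
}
```

As stressed above, this is only safe on top of a cluster filesystem such as OCFS2 or GFS2.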

 

In a high-availability cluster built with pacemaker, drbd is defined as a resource (drbd as a highly available cluster service); with the dual-primary model the FS must be formatted as a CFS and the DLM must be defined as a cluster resource.

 

(figure: where DRBD sits in the Linux I/O stack)

service <--> FS: a user-space process issues FS-related system calls to the kernel (APIs such as open(), read(), write()), which the kernel exposes to user space.

Note: different FSes would expose system calls with different names, different parameters, and different parameter counts; VFS papers over these differences and presents the FSes uniformly.

FS <--> buffer cache <--> disk scheduler: the kernel sets aside a region of memory as the buffer cache (caching metadata, data, etc.), and reads and writes of files happen in the buffer cache. For reads, the buffer cache holds a copy of the file; if that memory is reclaimed, the file is simply read again on the next access. For writes, data lands in the buffer cache first and the system flushes it to disk a little later; on a mechanical disk the write can only start once the right track and sector rotate under the head, so when there are several writes the disk scheduler merges and sorts operations on adjacent tracks or sectors to improve performance.

Note: the disk scheduler merges read requests and merges write requests, collapsing many IOs into fewer; for mechanical disks, turning random reads and writes into sequential ones improves performance very noticeably.

disk <--> driver: drivers generally run in the kernel (critically important); the driver performs the concrete reads and writes and knows the target address (which track, which sector).

drbd can be thought of as a filter: operations not aimed at a drbd device pass through it untouched.

 

Three data-synchronization protocols (defining when the app is told the write has completed):

Protocol A (report success as soon as the data reaches the local TCP/IP stack; asynchronous, async; the reliable choice for performance);

Protocol B (report success once the data reaches the peer's TCP/IP stack; semi-synchronous, semi-sync);

Protocol C (report success only after the data is stored on the peer's disk; synchronous, sync; the reliable choice for data, and the default).
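In the configuration this choice is a single keyword; a sketch in the 8.3-style syntax used later in this article:

```
common {
        protocol C;   # A = async, B = semi-sync (memory), C = fully sync (default choice here)
}
```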

 

Points to consider: a gigabit NIC, and encrypted transport.

 

A host can carry several drbd devices as long as they use different partitions. Each one must define which disk it uses, which network it syncs over, whether to encrypt, and how much bandwidth it may take. For any given drbd the primary and secondary roles can swap at any time, and each host must run the drbd service listening on a socket (plan both sides' ports, disks, and NICs in advance).
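As a sketch of that planning (the second resource's name, disk, and port below are illustrative, not from this setup): two independent resources simply use different backing partitions and different listening ports:

```
resource web {
  device /dev/drbd0;
  disk   /dev/sdb1;
  on node1.magedu.com { address 192.168.41.129:7789; }
  on node2.magedu.com { address 192.168.41.130:7789; }
}
resource db {
  device /dev/drbd1;                                     # second drbd device
  disk   /dev/sdb2;                                      # different partition
  on node1.magedu.com { address 192.168.41.129:7790; }   # different port
  on node2.magedu.com { address 192.168.41.130:7790; }
}
```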

 

Defining a drbd device group (a drbd resource); a drbd resource has four attributes:

resource name (any ASCII characters except whitespace);

drbd device (on both nodes the device file is normally /dev/drbdNUM; its major number is 147 and the minor number distinguishes devices, like RAID's /dev/md{0,1});

disk configuration (the backing storage device each node provides);

network configuration (the network properties the two sides use when synchronizing data).

 

user space administration tools:

drbdadm (high-level, similar to ipvsadm; reads its configuration from /etc/drbd.conf, a file that merely includes every *.conf and *.res file under /etc/drbd.d/);

drbdsetup (low-level; its options and arguments are complex, so it is less convenient than drbdadm);

drbdmeta (low-level; manipulates drbd's metadata, which is not the same thing as file metadata: it is the metadata used to maintain the drbd device, and it can live on the local disk (internal, the usual choice) or on another disk).
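The include chain drbdadm follows can be illustrated with a stand-in copy of the layout, built in a temp directory so nothing real is touched (paths and file names mirror the ones used later in this article):

```shell
# Build a throwaway copy of the drbd.conf include layout and list what
# drbdadm would read: /etc/drbd.conf -> drbd.d/*.conf + drbd.d/*.res
tmp=$(mktemp -d)
mkdir "$tmp/drbd.d"
printf 'include "drbd.d/global_common.conf";\ninclude "drbd.d/*.res";\n' > "$tmp/drbd.conf"
touch "$tmp/drbd.d/global_common.conf" "$tmp/drbd.d/mydrbd.res"
files=$(cd "$tmp/drbd.d" && ls *.conf *.res)
echo "$files"
```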

 

 

二、Operations:

Environment: RedHat 5.8 i386, kernel 2.6.18-308.el5; two nodes, node1 and node2.

Prepare the packages (http://mirrors.163.com/centos/5/extras/i386/RPMS/):

drbd83-8.3.15-2.el5.centos.i386.rpm (userland tools)

kmod-drbd83-8.3.15-3.el5.centos.i686.rpm (kernel module)

Note: drbd was merged into the mainline kernel only as of 2.6.33; CentOS provides both the kernel module and the user-space tool package.

Prepare the environment (see the second cluster article, heartbeat v2): time synchronization, passwordless ssh between the two nodes, hostnames, the /etc/hosts file, and a 2G partition on each node, left unformatted.
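The /etc/hosts entries for this pair might look like the following (addresses taken from the resource file used later; adjust to your own network):

```
192.168.41.129   node1.magedu.com   node1
192.168.41.130   node2.magedu.com   node2
```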

 

node1-side

#scp /root/*.rpm  root@node2:/root/

#for I  in  {1..2};do ssh  node$I  'yum -y  --nogpgcheck  localinstall /root/*.rpm';done

#rpm -ql  drbd83

/etc/drbd.conf

/etc/drbd.d/global_common.conf

/etc/ha.d/resource.d/drbddisk

/etc/ha.d/resource.d/drbdupper

/sbin/drbdadm

/sbin/drbdmeta

/sbin/drbdsetup

/usr/sbin/drbd-overview (prints a brief drbd status summary)

#cp /usr/share/doc/drbd83-8.3.15/drbd.conf  /etc/drbd.conf

#vim /etc/drbd.conf

include "drbd.d/global_common.conf";

include "drbd.d/*.res";  (the *.res files define the resources)

 

#vim /etc/drbd.d/global_common.conf

global {

         usage-count no;  (if the host is on the Internet, enabling this automatically reports information back to the authors for install-base statistics)

}

common {  (default property definitions; settings that are the same for resources on multiple nodes can live in the common section)

         protocol C;  (data synchronization protocol; one of the three models A, B, C)

         handlers {  (handler scripts for drbd faults such as split brain: one side must yield, and scripts implement policies such as "whoever wrote less yields" or "whoever just became primary yields")

                   pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                   pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                   local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

         }

         startup {  (a freshly started drbd device must sync; if the current primary cannot reach the peer, define the timeout here, plus the degraded-mode timeout, etc.)

                   #wfc-timeout 120;

# degr-wfc-timeout 120;

         }

         disk {  (disk settings; these may differ from one drbd resource to another)

                   on-io-error detach;

                   #fencing resource-only;

         }

   net {

       cram-hmac-alg "sha1";  (shared-secret authentication for the replication link)

       shared-secret "mydrbd";

    }

   syncer {

       rate 1000M;

    }

}
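The syncer rate bounds how long the initial full sync takes. A rough back-of-the-envelope check, using sample numbers (the ~2G partition from this setup and an assumed effective rate of about 40 MB/s, not a measured figure):

```shell
# Estimate initial-sync duration: device size divided by effective sync rate.
size_kb=2096348    # KiB to sync (the oos value /proc/drbd reports at the start)
rate_kb=40960      # assumed effective rate in KiB/s (~40 MB/s)
secs=$(( size_kb / rate_kb ))
echo "about ${secs}s"    # about 51s
```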

 

#vim /etc/drbd.d/mydrbd.res

resource mydrbd  {

 device  /dev/drbd0;

  disk  /dev/sdb1;

 meta-disk  internal;

 on  node1.magedu.com  {

  address  192.168.41.129:7789;

  }

 on  node2.magedu.com  {

  address  192.168.41.130:7789;

}

}

 

#scp -r  /etc/drbd*  node2:/etc/

#drbdadm help

#drbdadm create-md  mydrbd  (initialize the drbd metadata; run this on both nodes)

 

node2-side

#drbdadm create-md  mydrbd

 

node1-side

[root@node1 ~]# cat  /proc/drbd  (check the status; Inconsistent means the two sides are not yet in sync)

version: 8.3.15 (api:88/proto:86-97)

GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder17.centos.org, 2013-03-27 16:04:08

 0:cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:2096348
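When scripting around /proc/drbd, the cs/ro/ds fields can be pulled out with standard tools; a sketch run against a sample copy of the line above (sample text, not a live device):

```shell
# Extract connection state, roles, and disk states from a /proc/drbd line.
line=" 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----"
cs=$(echo "$line" | grep -o 'cs:[^ ]*' | cut -d: -f2)
ro=$(echo "$line" | grep -o 'ro:[^ ]*' | cut -d: -f2)
ds=$(echo "$line" | grep -o 'ds:[^ ]*' | cut -d: -f2)
echo "$cs $ro $ds"    # Connected Secondary/Secondary Inconsistent/Inconsistent
```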

[root@node1 ~]# drbd-overview  (check the status)

 0:mydrbd  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

[root@node1 ~]# drbdadm  --  --overwrite-data-of-peer  primary  mydrbd  (make the current node primary; note: run this command on ONE node only; alternatively: #drbdsetup  /dev/drbd0  primary  -o)

[root@node1 ~]# drbd-overview  (there is now a primary and a secondary and the sync is running; you can also watch it live with: #watch -n 1 'cat /proc/drbd')

 0:mydrbd  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-

         [==>.................]sync'ed: 16.3% (1760348/2096348)K

[root@node1 ~]# drbd-overview

 0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----

 

node2-side

[root@node2 ~]# drbd-overview  (in Secondary/Primary below, the state before the slash is this node's own, the one after the slash is the peer's)

 0:mydrbd  Connected Secondary/Primary UpToDate/UpToDate C r-----

 

node1-side

[root@node1 ~]# mke2fs -j /dev/drbd0

[root@node1 ~]# mkdir /mydata

[root@node1 ~]# mount /dev/drbd0 /mydata

[root@node1 ~]# ls /mydata

lost+found

[root@node1 ~]# cp /etc/issue /mydata

[root@node1 ~]# umount /mydata  (important: unmount the FS before demoting the current node to secondary)

[root@node1 ~]# drbdadm secondary mydrbd  (demote the current node to secondary)

[root@node1 ~]# drbd-overview

 0:mydrbd  Connected Secondary/Secondary UpToDate/UpToDate C r-----

 

node2-side

[root@node2 ~]# drbdadm primary mydrbd  (promote the current node to primary)

[root@node2 ~]# drbd-overview

 0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----

[root@node2 ~]# mkdir /mydata

[root@node2 ~]# mount /dev/drbd0 /mydata

[root@node2 ~]# ls /mydata

issue lost+found

 

 

 

 

=============================================================


 

drbd (distributed replicated block device) is software that synchronizes and mirrors data between HA server pairs at the block-device level. Over the network it gives two servers real-time synchronous replication, or asynchronous mirroring, of a block device. It serves a purpose similar to an inotify+rsync architecture, but inotify+rsync synchronizes actual files on top of the FS, while drbd synchronizes blocks underneath the FS, so drbd is more efficient and works better;

drbd refers to block devices designed as a building block to form high availability clusters. This is done by mirroring a whole block device via an assigned network. drbd can be understood as network-based RAID-1.

www.drbd.org

 

How it works:

(figure: how DRBD replicates writes between the two servers)

drbd works below the FS level, closer to the OS kernel's IO stack than the FS is. On a pair of HA servers, as data is written to the local disk it is also sent in real time over the network to the other host and recorded in the same way on that host's disk system, keeping the data of the local host (master node) and the remote host (backup node) in sync. If the master node fails, the backup node holds an identical copy of the data that can be used immediately, so it can take over and keep providing service, shortening repair downtime and improving the user experience;

The drbd service acts like the RAID-1 function of a disk array: it effectively turns two networked servers into a RAID-1 pair. In an HA setup, drbd can replace a shared disk array, because the data exists on both master and backup; when a failover happens, the backup node can use the data directly (master and backup hold identical data);

 

drbd synchronization modes:

Real-time synchronous mode (drbd protocol C: a write returns success only after the data has reached the local disk and the disks of all remote servers; this prevents local/remote inconsistency or data loss, and is the usual choice in production);

Asynchronous mode (drbd protocol A or B: a write returns success once the data reaches the local server's disk, or once it reaches the remote side's buffer, without waiting for the remote server to truly commit it);

Note: nfs has similar parameters and behavior, e.g. sync and async; the mount command takes such parameters too;

 

drbd production deployment modes:

primary/secondary mode (single-primary; the classic HA cluster design);

dual-primary mode (requires a cluster filesystem, CFS, such as GFS or OCFS2);

 

drbd3种同步复制协议:

协议A(异步复制协议,指数据写到本地磁盘上,并且复制的数据已经被放到本地的tcp缓冲区并等待发送以后,就认为写入完成,效率高,数据可能会丢失);

协议B(半同步复制协议,内存同步,指数据写到本地磁盘上,并且这些数据已经发送到对方内存缓冲区,对方的tcp已经收到数据,并宣布写入,如果双机掉电,数据可能丢失;此处的半同步与MySQL的半同步不同,MySQL的一主多从架构中的半同步是确保一主和一从同步成功,其它的从不管);

协议C(同步复制协议,指主node已写入,从node磁盘也写入,如果单机掉电或单机磁盘损坏,数据不会丢失);

 

Note: if something goes wrong, troubleshoot in this order:

Check iptables, selinux, the drbd configuration files, the IP configuration, host routing, and so on;

If both ends show Secondary/Unknown, the most likely cause is split brain; on the backup node run the following commands:

#drbdadm secondary data

#drbdadm -- --discard-my-data connect data

#cat /proc/drbd

on the master node, run:

#drbdadm connect data

#cat /proc/drbd
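The recovery steps above can be collected into a small helper that prints the commands for review before anything is run on the victim node (a sketch; `data` is the resource name used in this article):

```shell
# Print (do not execute) the split-brain recovery sequence for a resource.
res=data
recovery=$(printf '%s\n' \
  "drbdadm secondary $res" \
  "drbdadm -- --discard-my-data connect $res")
echo "$recovery"
```

On the surviving master, only `drbdadm connect data` is needed afterwards.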

 

drbd enterprise use cases:

heartbeat+drbd+nfs;

heartbeat+drbd+MySQL;

Note: drbd can serve any application scenario where a service needs its data replicated. drbd's backup node is invisible and unmounted, so it cannot serve applications; this leaves one server's resources idle, and at any moment only the master node offers read-write access. drbd can be paired with MySQL or Oracle; in Oracle 8i-10g the slave could not be put to use (e.g. for reads), while from 11g on the slave became usable, and Oracle's Data Guard offers physical and logical standby modes.

 

Related synchronization tools:

rsync (sersync, inotify, lsyncd), scp, nc, nfs, union (two-host sync), csync2 (multi-host sync), the software's own replication mechanisms (MySQL, Oracle, mongodb, ttserver, redis), drbd.

 

Note:

drbd status fields:

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

   ns:4 nr:4 dw:8 dr:1039 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-master ~]# drbdadm cstate data

Connected

cs (connection state). Status of the network connection. See Section 6.1.5, "Connection states" for details about the various connection states.

StandAlone. No network configuration available. The resource has not yet been connected, or has been administratively disconnected (using drbdadm disconnect), or has dropped its connection due to failed authentication or split brain.

Disconnecting. Temporary state during disconnection. The next state is StandAlone.

Unconnected. Temporary state, prior to a connection attempt. Possible next states: WFConnection and WFReportParams.

Timeout. Temporary state following a timeout in the communication with the peer. Next state: Unconnected.

BrokenPipe. Temporary state after the connection to the peer was lost. Next state: Unconnected.

NetworkFailure. Temporary state after the connection to the partner was lost. Next state: Unconnected.

ProtocolError. Temporary state after the connection to the partner was lost. Next state: Unconnected.

TearDown. Temporary state. The peer is closing the connection. Next state: Unconnected.

WFConnection. This node is waiting until the peer node becomes visible on the network.

WFReportParams. TCP connection has been established, this node waits for the first network packet from the peer.

Connected. A DRBD connection has been established, data mirroring is now active. This is the normal state.

StartingSyncS. Full synchronization, initiated by the administrator, is just starting. The next possible states are: SyncSource or PausedSyncS.

StartingSyncT. Full synchronization, initiated by the administrator, is just starting. Next state: WFSyncUUID.

WFBitMapS. Partial synchronization is just starting. Next possible states: SyncSource or PausedSyncS.

WFBitMapT. Partial synchronization is just starting. Next possible state: WFSyncUUID.

WFSyncUUID. Synchronization is about to begin. Next possible states: SyncTarget or PausedSyncT.

SyncSource. Synchronization is currently running, with the local node being the source of synchronization.

SyncTarget. Synchronization is currently running, with the local node being the target of synchronization.

PausedSyncS. The local node is the source of an ongoing synchronization, but synchronization is currently paused. This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by drbdadm pause-sync.

PausedSyncT. The local node is the target of an ongoing synchronization, but synchronization is currently paused. This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by drbdadm pause-sync.

VerifyS. On-line device verification is currently running, with the local node being the source of verification.

VerifyT. On-line device verification is currently running, with the local node being the target of verification.

 

[root@test-master ~]# drbdadm role data

Primary/Secondary

ro (roles). Roles of the nodes. The role of the local node is displayed first, followed by the role of the partner node shown after the slash. See Section 6.1.6, "Resource roles" for details about the possible resource roles.

Primary. The resource is currently in the primary role, and may be read from and written to. This role only occurs on one of the two nodes, unless dual-primary mode is enabled.

Secondary. The resource is currently in the secondary role. It normally receives updates from its peer (unless running in disconnected mode), but may neither be read from nor written to. This role may occur on one or both nodes.

Unknown. The resource’s role is currently unknown. The local resource role never has this status. It is only displayed for the peer’s resource role, and only in disconnected mode.

 

[root@test-master ~]# drbdadm dstate data

UpToDate/UpToDate

ds (disk states). State of the hard disks. Prior to the slash the state of the local node is displayed, after the slash the state of the hard disk of the partner node is shown. See Section 6.1.7, "Disk states" for details about the various disk states.

Diskless. No local block device has been assigned to the DRBD driver. This may mean that the resource has never attached to its backing device, that it has been manually detached using drbdadm detach, or that it automatically detached after a lower-level I/O error.

Attaching. Transient state while reading meta data.

Failed. Transient state following an I/O failure report by the local block device. Next state: Diskless.

Negotiating. Transient state when an Attach is carried out on an already-Connected DRBD device.

Inconsistent. The data is inconsistent. This status occurs immediately upon creation of a new resource, on both nodes (before the initial full sync). Also, this status is found in one node (the synchronization target) during synchronization.

Outdated. Resource data is consistent, but outdated.

DUnknown. This state is used for the peer disk if no network connection is available.

Consistent. Consistent data of a node without connection. When the connection is established, it is decided whether the data is UpToDate or Outdated.

UpToDate. Consistent, up-to-date state of the data. This is the normal state.

 

ns (network send). Volume of net data sent to the partner via the network connection; in Kibyte.

nr (network receive). Volume of net data received by the partner via the network connection; in Kibyte.

dw (disk write). Net data written on local hard disk; in Kibyte.

dr (disk read). Net data read from local hard disk; in Kibyte.

al (activity log). Number of updates of the activity log area of the meta data.

bm (bit map). Number of updates of the bitmap area of the meta data.

lo (local count). Number of open requests to the local I/O sub-system issued by DRBD.

pe (pending). Number of requests sent to the partner, but that have not yet been answered by the latter.

ua (unacknowledged). Number of requests received by the partner via the network connection, but that have not yet been answered.

ap (application pending). Number of block I/O requests forwarded to DRBD, but not yet answered by DRBD.

ep (epochs). Number of epoch objects. Usually 1. Might increase under I/O load when using either the barrier or the none write ordering method.

wo (write order). Currently used write ordering method: b(barrier), f(flush), d(drain) or n(none).

oos (out of sync). Amount of storage currently out of sync; in Kibibytes.
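The sync'ed percentage in /proc/drbd follows from oos and the device size; recomputing it for the sample from the first session (the kernel's own figure may differ slightly, since it estimates over a moving window):

```shell
# Recompute sync progress from oos (KiB still out of sync) and total size.
total_kb=2096348
oos_kb=1760348
pct=$(awk -v t="$total_kb" -v o="$oos_kb" 'BEGIN { printf "%.1f", (t - o) * 100 / t }')
echo "${pct}% synced"    # 16.0% synced
```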

 

 

Prepare the environment:

master: eth0 (10.96.20.113), eth1 (172.16.1.113, for data sync and heartbeat; no gateway or dns configured), hostname test-master

backup: eth0 (10.96.20.114), eth1 (172.16.1.114, for data sync and heartbeat; no gateway or dns configured), hostname test-backup

Each host has two disks.

 

On each node configure: the hostname (the value in /etc/sysconfig/network must match `uname -n`), the /etc/hosts file, passwordless ssh between the two hosts, time synchronization, and iptables and selinux.
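A quick consistency check for the hostname requirement can be sketched like this (the sysconfig content below is an inline sample; on a real node you would read /etc/sysconfig/network and `uname -n`):

```shell
# Compare HOSTNAME= from a (sample) sysconfig file with the running hostname.
cfg=$(awk -F= '/^HOSTNAME=/ { print $2 }' <<'EOF'
NETWORKING=yes
HOSTNAME=test-master
EOF
)
node=test-master    # stand-in for: node=$(uname -n)
if [ "$cfg" = "$node" ]; then echo "hostname consistent"; fi
```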

 

test-master

[root@test-master ~]# fdisk -l

……

Disk /dev/sdb: 2147 MB, 2147483648 bytes

255 heads, 63 sectors/track, 261 cylinders

Units = cylinders of 16065 * 512 = 8225280bytes

Sector size (logical/physical): 512 bytes /512 bytes

I/O size (minimum/optimal): 512 bytes / 512bytes

Disk identifier: 0x00000000

[root@test-master ~]# parted /dev/sdb   #(parted handles disks larger than 2T; split the new disk into two partitions, one for data and one for drbd's meta data)

GNU Parted 2.1

Using /dev/sdb

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) h                                                               

 align-check TYPE N                       check partition N for TYPE(min|opt) alignment

 check NUMBER                            do a simple check on the file system

  cp [FROM-DEVICE] FROM-NUMBER TO-NUMBER   copy file system to another partition

 help [COMMAND]                          print general help, or help on COMMAND

  mklabel,mktable LABEL-TYPE               create a new disklabel (partition table)

 mkfs NUMBER FS-TYPE                     make a FS-TYPE file system on partition NUMBER

  mkpart PART-TYPE [FS-TYPE] START END     make a partition

 mkpartfs PART-TYPE FS-TYPE START END    make a partition with a file system

 move NUMBER START END                   move partition NUMBER

 name NUMBER NAME                        name partition NUMBER as NAME

  print [devices|free|list,all|NUMBER]     display the partition table, available devices, free space, all found partitions, or a particular partition

 quit                                    exit program

 rescue START END                        rescue a lost partition near START and END

 resize NUMBER START END                 resize partition NUMBER and its file system

  rm NUMBER                               delete partition NUMBER

 select DEVICE                           choose the device to edit

  set NUMBER FLAG STATE                   change the FLAG on partition NUMBER

 toggle [NUMBER [FLAG]]                  toggle the state of FLAG on partition NUMBER

 unit UNIT                               set the default unit to UNIT

 version                                 display the version number and copyright information of GNU Parted

(parted) mklabel gpt                                                     

(parted) mkpart primary 0 1024

Warning: The resulting partition is not properly aligned for best performance.

Ignore/Cancel? Ignore

(parted) mkpart primary 1025 2147                                         

Warning: The resulting partition is not properly aligned for best performance.

Ignore/Cancel? Ignore

(parted) p                                                               

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sdb: 2147MB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

 

Number Start   End     Size   File system  Name     Flags

 1     17.4kB  1024MB  1024MB               primary

 2     1025MB  2147MB  1122MB               primary

[root@test-master ~]# wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

[root@test-master ~]# rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm

warning: elrepo-release-6-6.el6.elrepo.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY

Preparing...               ########################################### [100%]

  1:elrepo-release        ########################################### [100%]

[root@test-master ~]# yum -y install drbd kmod-drbd84

[root@test-master ~]# modprobe drbd

FATAL: Module drbd not found.

[root@test-master ~]# yum -y install kernel*   #(after updating the kernel, reboot the system)

[root@test-master ~]# uname -r

2.6.32-642.3.1.el6.x86_64

[root@test-master ~]# depmod

[root@test-master ~]# lsmod | grep drbd

drbd                  372759  0

libcrc32c               1246  1 drbd

[root@test-master ~]# ll /usr/src/kernels/

total 12

drwxr-xr-x. 22 root root 4096 Mar 31 06:46 2.6.32-431.el6.x86_64

drwxr-xr-x. 22 root root 4096 Aug  8 03:40 2.6.32-642.3.1.el6.x86_64

drwxr-xr-x. 22 root root 4096 Aug  8 03:40 2.6.32-642.3.1.el6.x86_64.debug

[root@test-master ~]# chkconfig drbd off

[root@test-master ~]# chkconfig --list drbd

drbd              0:off 1:off 2:off 3:off 4:off 5:off 6:off

[root@test-master ~]# echo "modprobe drbd > /dev/null 2>&1" > /etc/sysconfig/modules/drbd.modules

[root@test-master ~]# cat !$

cat /etc/sysconfig/modules/drbd.modules

modprobe drbd > /dev/null 2>&1

 

test-backup

[root@test-backup ~]# parted /dev/sdb

(parted) mklabel gpt

(parted) mkpart primary 0 4096                                           

Warning: The resulting partition is not properly aligned for best performance.

Ignore/Cancel? Ignore

(parted) mkpart primary 4097 5368                                        

(parted) p                                                               

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sdb: 5369MB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

 

Number Start   End     Size   File system  Name     Flags

 1     17.4kB  4096MB  4096MB               primary

 2     4097MB  5368MB  1271MB               primary

[root@test-backup ~]# wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

[root@test-backup ~]# rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm

[root@test-backup ~]# ll /etc/yum.repos.d/

total 20

-rw-r--r--. 1 root root 1856 Jul 19 00:28 CentOS6-Base-163.repo

-rw-r--r--. 1 root root 2150 Feb  9  2014 elrepo.repo

-rw-r--r--. 1 root root  957 Nov  4  2012 epel.repo

-rw-r--r--. 1 root root 1056 Nov  4  2012 epel-testing.repo

-rw-r--r--. 1 root root  529 Mar 30 23:00 rhel-source.repo.bak

[root@test-backup ~]# yum -y install drbd kmod-drbd84

[root@test-backup ~]# yum -y install kernel*

[root@test-backup ~]# depmod

[root@test-backup ~]# lsmod | grep drbd

drbd                  372759  0

libcrc32c               1246  1 drbd

[root@test-backup ~]# chkconfig drbd off

[root@test-backup ~]# chkconfig --list drbd

drbd              0:off 1:off 2:off 3:off 4:off 5:off 6:off

[root@test-backup ~]# echo "modprobe drbd > /dev/null 2>&1" > /etc/sysconfig/modules/drbd.modules

[root@test-backup ~]# cat !$

cat /etc/sysconfig/modules/drbd.modules

modprobe drbd > /dev/null 2>&1

 

test-master

[root@test-master ~]# vim /etc/drbd.d/global_common.conf

[root@test-master ~]# egrep -v "#|^$" /etc/drbd.d/global_common.conf

global {

         usage-count no;

}

common {

         handlers{

         }

         startup{

         }

         options{

         }

         disk{

                on-io-error detach;

         }

         net{

         }

         syncer{

                   rate 50M;

                   verify-alg crc32c;

         }

}

[root@test-master ~]# vim /etc/drbd.d/data.res

resource data {

       protocol C;

       on test-master {

                device  /dev/drbd0;

                disk    /dev/sdb1;

                address 172.16.1.113:7788;

                meta-disk       /dev/sdb2[0];

       }

       on test-backup {

                device  /dev/drbd0;

                disk    /dev/sdb1;

                address 172.16.1.114:7788;

                meta-disk       /dev/sdb2[0];

       }

}

[root@test-master ~]# cd /etc/drbd.d

[root@test-master drbd.d]# scp global_common.conf data.res root@test-backup:/etc/drbd.d/

global_common.conf                                                                                     100% 2144     2.1KB/s   00:00   

data.res                                                                                               100%  251     0.3KB/s  00:00   

 

[root@test-master drbd.d]# drbdadm --help

USAGE: drbdadm COMMAND [OPTION...]{all|RESOURCE...}

GENERAL OPTIONS:

 --stacked, -S

 --dry-run, -d

 --verbose, -v

 --config-file=..., -c ...

 --config-to-test=..., -t ...

 --drbdsetup=..., -s ...

 --drbdmeta=..., -m ...

 --drbd-proxy-ctl=..., -p ...

 --sh-varname=..., -n ...

 --peer=..., -P ...

 --version, -V

 --setup-option=..., -W ...

 --help, -h

 

COMMANDS:

 attach                             disk-options                      

 detach                             connect                           

 net-options                        disconnect                        

 up                                 resource-options                  

 down                               primary                           

 secondary                          invalidate                        

 invalidate-remote                  outdate                           

 resize                             verify                            

 pause-sync                         resume-sync                       

 adjust                            adjust-with-progress              

 wait-connect                       wait-con-int                      

 role                               cstate                            

 dstate                             dump                              

 dump-xml                           create-md                          

 show-gi                            get-gi                            

 dump-md                            wipe-md                           

 apply-al                           hidden-commands    

[root@test-master drbd.d]# drbdadm create-md data

initializing activity log

NOT initializing bitmap

Writing meta data...

New drbd meta data block successfully created.

[root@test-master drbd.d]# ssh test-backup 'drbdadm create-md data'

NOT initializing bitmap

initializing activity log

Writing meta data...

New drbd meta data block successfully created.

[root@test-master drbd.d]# drbdadm up data

[root@test-master drbd.d]# ssh test-backup 'drbdadm up data'

[root@test-master drbd.d]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:999984

[root@test-master drbd.d]# ssh test-backup 'cat /proc/drbd'

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:999984

[root@test-master drbd.d]# drbdadm -- --overwrite-data-of-peer primary data   #(run only on the master; this overwrites the backup node's data)

[root@test-master drbd.d]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:339968 nr:0 dw:0 dr:340647 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:660016

         [=====>..............] sync'ed: 34.3% (660016/999984)K

         finish: 0:00:15 speed: 42,496 (42,496) K/sec

[root@test-master drbd.d]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:630784 nr:0 dw:0 dr:631463 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:369200

         [===========>........] sync'ed: 63.3% (369200/999984)K

         finish: 0:00:09 speed: 39,424 (39,424) K/sec

[root@test-master drbd.d]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:942080 nr:0 dw:0 dr:942759 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:57904

         [=================>..] sync'ed: 94.3% (57904/999984)K

         finish: 0:00:01 speed: 39,196 (39,252) K/sec

[root@test-master drbd.d]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:999983 nr:0 dw:0 dr:1000662 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-master drbd.d]# ssh test-backup 'cat /proc/drbd'

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----

    ns:0 nr:999983 dw:999983 dr:0 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-master drbd.d]# mkdir /drbd

[root@test-master drbd.d]# ssh test-backup 'mkdir /drbd'

[root@test-master drbd.d]# mkfs.ext4 -b 4096 /dev/drbd0   #(run only on the primary; do not format the meta partition)

Writing superblocks and filesystem accounting information: done

[root@test-master drbd.d]# tune2fs -c -1 /dev/drbd0

tune2fs 1.41.12 (17-May-2010)

Setting maximal mount count to -1

[root@test-master drbd.d]# mount /dev/drbd0 /drbd

[root@test-master drbd.d]# cd /drbd

[root@test-master drbd]# for i in `seq 1 10`; do touch test$i; done

[root@test-master drbd]# ls

lost+found test1  test10  test2 test3  test4  test5 test6  test7  test8 test9

[root@test-master drbd]# cd

[root@test-master ~]# umount /dev/drbd0

[root@test-master ~]# drbdadm secondary data

[root@test-master ~]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----

    ns:1032538 nr:0 dw:32554 dr:1001751 al:19 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

 

test-backup

[root@test-backup ~]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----

    ns:0 nr:1032538 dw:1032538 dr:0 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-backup ~]# drbdadm primary data

[root@test-backup ~]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:0 nr:1032538 dw:1032538 dr:679 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-backup ~]# mount /dev/drbd0 /drbd

[root@test-backup ~]# ls /drbd

lost+found test1  test10  test2 test3  test4  test5 test6  test7  test8 test9

 

 

This article was reposted from chaijowin's 51CTO blog; original link: http://blog.51cto.com/jowin/1720094. Contact the original author for reprint permission.
