Continued from the previous post.
DRBD + Corosync + Pacemaker for a Highly Available MySQL Cluster (Part 2)
IV. corosync + pacemaker configuration
1. Install the required packages
#node1
[root@node1 ~]# yum localinstall *.rpm -y
#node2
[root@node2 ~]# yum localinstall *.rpm -y
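As a quick sanity check that the key components actually landed, you can query rpm on each node; the package names below are the usual ones for this stack and may differ slightly from the rpms you downloaded:
[root@node1 ~]# rpm -q corosync pacemaker
[root@node1 ~]# ssh node2 "rpm -q corosync pacemaker"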
2. Configure corosync
# cd /etc/corosync/
# cp -p corosync.conf.example corosync.conf
# vim corosync.conf
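The example file only needs a few changes. Below is a minimal sketch of a corosync 1.x configuration, assuming the cluster nodes sit on the 192.168.3.0/24 network and pacemaker is loaded as a corosync plugin; bindnetaddr, the multicast address/port and the log file path are assumptions to adapt to your own environment:
compatibility: whitetank
totem {
        version: 2
        secauth: on                      # authenticate nodes with /etc/corosync/authkey
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.3.0 # assumed cluster network, change to yours
                mcastaddr: 226.94.1.1    # assumed multicast address
                mcastport: 5405
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log   # matches the directory created in step 3
        debug: off
        timestamp: on
}
service {
        ver: 0
        name: pacemaker                  # start pacemaker from corosync
}
aisexec {
        user: root
        group: root
}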
3. Create the log directory on node1 and node2
# mkdir /var/log/cluster
# ssh node2 "mkdir /var/log/cluster"
4. Cluster authentication: generate and distribute the authkey
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey
# Copy the key to node2, preserving its permissions
[root@node1 corosync]# scp -p authkey node2:/etc/corosync/
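corosync-keygen writes a key that only root may read (mode 0400); after copying it over, it is worth confirming the permissions survived on node2:
[root@node1 corosync]# ssh node2 "ls -l /etc/corosync/authkey"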
5. Start the service
# service corosync start
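corosync also has to run on node2. Assuming the same passwordless ssh used in the earlier steps, it can be started remotely once node1 is up:
[root@node1 corosync]# ssh node2 "service corosync start"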
6. Verify that the corosync engine started correctly
[root@node1 corosync]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages
Jan 22 21:31:22 node1 smartd[4176]: Opened configuration file /etc/smartd.conf
Jan 22 21:31:22 node1 smartd[4176]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan 22 23:37:02 node1 corosync[2943]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Jan 22 23:37:02 node1 corosync[2943]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
7. Check that the initial membership notifications went out
[root@node1 corosync]# grep -i totem /var/log/messages
8. Check whether any errors occurred during startup
[root@node1 corosync]# grep -i error: /var/log/messages | grep -v unpack_resources   # screens out the expected stonith errors
9. Check that pacemaker started
[root@node1 corosync]# grep -i pcmk_startup /var/log/messages
V. Configure the cluster service and resources
1. corosync enables stonith by default, but this cluster has no stonith device, so the default configuration cannot be used as-is. Disable stonith first with the following command:
# crm configure property stonith-enabled=false
2. For a two-node cluster we also need to set this option so that quorum is ignored; with the vote count no longer mattering, a single surviving node can still run the resources:
# crm configure property no-quorum-policy=ignore
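Both settings can be confirmed afterwards; crm_verify -L should no longer complain about missing stonith resources, and crm configure show prints the current configuration including the property section:
# crm_verify -L
# crm configure show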
3. Stop the drbd service on both nodes and keep it from starting at boot
# service drbd stop && chkconfig drbd off
# ssh node2 "service drbd stop && chkconfig drbd off"
4. Configure drbd as a cluster resource
drbd has to run on both nodes at the same time, but in the primary/secondary model only one node can be the Master while the other is the Slave. It is therefore a special kind of cluster resource: a multi-state clone, in which the nodes are distinguished as Master and Slave, and both nodes start out in the Slave state when the service is first brought up.
[root@node1 ~]# crm configure
crm(live)configure# primitive mysqldrbd ocf:heartbeat:drbd params drbd_resource="mysql" op monitor role="Master" interval="30s" op monitor role="Slave" interval="31s" op start timeout="240s" op stop timeout="100s"
# Create a master/slave (multi-state) resource and put mysqldrbd under it
crm(live)configure# ms MS_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify="true"
crm(live)configure# commit
crm(live)configure# exit
Check the cluster status:
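Either crm_mon or crm status will do; both nodes should show as online, with MS_mysqldrbd promoted to Master on one node and running as Slave on the other:
[root@node1 ~]# crm status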
5. Next we have the cluster mount the drbd device at /mysql automatically. This filesystem resource must run on the drbd Master node, and it can only be started after drbd has promoted that node to Primary.
Make sure the device is unmounted on both nodes:
# umount /dev/drbd0
Configure the filesystem resource:
# crm configure
crm(live)configure# primitive MysqlFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysql" fstype="ext3" op start timeout=60s op stop timeout=60s
crm(live)configure# commit
crm(live)configure# exit
6. Define the mysql resources
# crm configure primitive myip ocf:heartbeat:IPaddr params ip=192.168.3.100
# crm configure primitive mysqlserver lsb:mysqld
# Put the IP and the mysql service into one resource group
# crm configure group mysql mysqlserver myip
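The lsb:mysqld resource assumes an /etc/init.d/mysqld script on both nodes; since the cluster now decides where mysqld runs, the script should not be enabled at boot. Assuming the standard init script name, make sure of this on both nodes:
# chkconfig mysqld off
# ssh node2 "chkconfig mysqld off"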
7. Configure resource constraints
# crm configure
crm(live)configure# colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver
crm(live)configure# order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start
crm(live)configure# commit
crm(live)configure# exit
Check the running state: the resources are running on the Master node, /dev/drbd0 is automatically mounted at /mysql, and the virtual IP is configured.
VI. Testing
On node1:
# crm node standby    # put this node into standby
All of the resources have moved over to node2.
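To bring node1 back into the cluster after the test, take it out of standby again:
[root@node1 ~]# crm node online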
Test from a client
Grant access to users from the 192.168.3.0 network:
[root@host ~]# mysql
mysql> grant all on *.* to root@'192.168.3.%' identified by '123456';
mysql> flush privileges;
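From a machine in the 192.168.3.0 network the client can then connect through the virtual IP; for example, using the account granted above:
[root@client ~]# mysql -u root -p123456 -h 192.168.3.100 -e "show databases;"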
Reposted from 刘园's blog on 51CTO. Original: http://blog.51cto.com/colynn/1126472