MySQL high availability with corosync + pacemaker + drbd

Overview:

The DRBD kernel module is only included in mainline Linux kernels 2.6.33 and later, so on this kernel we have to install both the kernel module package and the userland management tools:

[root@node1 ~]# uname -r

2.6.18-164.el5

Topology:

(topology diagram omitted)

 

IP address plan:

node1.zzdx.com: 192.168.1.4

node2.zzdx.com: 192.168.1.5

VIP:192.168.1.6

 

Part 1: Environment preparation

Configuration on node1.zzdx.com

1. Configure the IP address

(screenshot of the IP address configuration omitted)

2. Set the hostname:

[root@node1 ~]# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node1.zzdx.com

3. Apply the hostname for the current session:

[root@node1 ~]# hostname node1.zzdx.com

4. Synchronize the time:

[root@node1 ~]# hwclock -s

5. Edit the hosts file:

[root@node1 ~]# vim /etc/hosts

5 192.168.1.4     node1.zzdx.com 
6 192.168.1.5     node2.zzdx.com

Configuration on node2.zzdx.com

1. Configure the IP address

(screenshot of the IP address configuration omitted)

2. Set the hostname:

[root@localhost ~]# vim /etc/sysconfig/network

  NETWORKING=yes

  NETWORKING_IPV6=no

  HOSTNAME=node2.zzdx.com

3. Apply the hostname for the current session:

[root@localhost ~]# hostname node2.zzdx.com

4. Synchronize the time:

[root@node2 ~]# hwclock -s

5. Edit the hosts file:

[root@node2 ~]# vim /etc/hosts

5 192.168.1.4 node1.zzdx.com 
6 192.168.1.5 node2.zzdx.com

Set up passwordless SSH between node1 and node2:

Node1:

[root@node1 ~]# ssh-keygen -t rsa

[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub node2.zzdx.com

Node2:

[root@node2 ~]# ssh-keygen -t rsa

[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub node1.zzdx.com
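Once the keys have been exchanged, a quick way to confirm the trust works in both directions is to run a remote command; for example, something like the following should print the remote hostname without asking for a password:

[root@node1 ~]# ssh node2.zzdx.com 'uname -n'

[root@node2 ~]# ssh node1.zzdx.com 'uname -n'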

Part 2: Download the required packages

1. The packages are placed in /root:

[root@node1 ~]# ll

total 162468

-rw-r--r-- 1 root root    271360 Oct 30 19:44 cluster-glue-1.0.6-1.6.el5.i386.rpm

-rw-r--r-- 1 root root    133254 Oct 30 19:44 cluster-glue-libs-1.0.6-1.6.el5.i386.rpm

-rw-r--r-- 1 root root    170052 Oct 30 19:44 corosync-1.2.7-1.1.el5.i386.rpm

-rw-r--r-- 1 root root    158502 Oct 30 19:44 corosynclib-1.2.7-1.1.el5.i386.rpm

-rw-r--r-- 1 root root    221868 Oct 30 19:46 drbd83-8.3.8-1.el5.centos.i386.rpm

-rw-r--r-- 1 root root    165591 Oct 30 19:44 heartbeat-3.0.3-2.3.el5.i386.rpm

-rw-r--r-- 1 root root    289600 Oct 30 19:44 heartbeat-libs-3.0.3-2.3.el5.i386.rpm

-rw-r--r-- 1 root root    125974 Oct 30 19:45 kmod-drbd83-8.3.8-1.el5.centos.i686.rpm

-rw-r--r-- 1 root root     60458 Oct 30 19:44 libesmtp-1.0.4-5.el5.i386.rpm

-rw-r--r-- 1 root root 162247449 Oct 30 19:47 mysql-5.5.15-linux2.6-i686.tar.gz

-rw-r--r-- 1 root root    207085 Oct 30 19:44 openais-1.1.3-1.6.el5.i386.rpm

-rw-r--r-- 1 root root     94614 Oct 30 19:45 openaislib-1.1.3-1.6.el5.i386.rpm

-rw-r--r-- 1 root root    796813 Oct 30 19:45 pacemaker-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root    207925 Oct 30 19:45 pacemaker-cts-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root    332026 Oct 30 19:45 pacemaker-libs-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root     32818 Oct 30 19:45 perl-TimeDate-1.16-5.el5.noarch.rpm

-rw-r--r-- 1 root root    388632 Oct 30 19:45 resource-agents-1.0.4-1.1.el5.i386.rpm

Copy the packages to node2:

[root@node1 ~]# scp *.rpm  node2.zzdx.com:/root

[root@node1 ~]# scp mysql-5.5.15-linux2.6-i686.tar.gz node2.zzdx.com:/root/

2. Configure a local yum repository:

[root@node1 ~]# vim /etc/yum.repos.d/rhel-debuginfo.repo

  1 [rhel-server]

  2 name=Red Hat Enterprise Linux server

  3 baseurl=file:///mnt/cdrom/Server/

  4 enabled=1

  5 gpgcheck=1

  6 gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

  7 [rhel-Cluster]

  8 name=Red Hat Enterprise Linux Cluster

  9 baseurl=file:///mnt/cdrom/Cluster

10 enabled=1

11 gpgcheck=1

12 gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

13 [rhel-ClusterStorage]

14 name=Red Hat Enterprise Linux ClusterStorage

15 baseurl=file:///mnt/cdrom/ClusterStorage

16 enabled=1

17 gpgcheck=1

18 gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

Copy the yum repo file to node2:

[root@node1 ~]# scp /etc/yum.repos.d/rhel-debuginfo.repo node2.zzdx.com:/etc/yum.repos.d/

Create the mount point and mount the installation DVD on both nodes:

[root@node1 ~]# mkdir /mnt/cdrom

[root@node1 ~]# mount /dev/cdrom /mnt/cdrom/

[root@node1 ~]# ssh node2.zzdx.com 'mkdir /mnt/cdrom '

[root@node1 ~]# ssh node2.zzdx.com 'mount /dev/cdrom /mnt/cdrom '
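With the DVD mounted on both nodes, a simple way to confirm the local repositories are usable is to list them, for example:

[root@node1 ~]# yum repolist

[root@node1 ~]# ssh node2.zzdx.com 'yum repolist'

Both listings should show the rhel-server, rhel-Cluster and rhel-ClusterStorage repositories defined above.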

Part 3: Create disk partitions

Node1:

[root@node1 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n    # create a new partition

Command action

   e   extended

   p   primary partition (1-4)

P     # primary partition

Partition number (1-4): 1

First cylinder (1-2610, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +1G

Command (m for help): w   # write the table and exit

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Re-read the partition table:

[root@node1 ~]# partprobe /dev/sdb

[root@node1 ~]# cat /proc/partitions

 

Node2:

[root@node2 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-2610, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +1G

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Re-read the partition table:

[root@node2 ~]# partprobe /dev/sdb

[root@node2 ~]# cat /proc/partitions

Part 4: Install and configure DRBD

1. Install DRBD

On node1 and node2:

[root@node1 ~]# yum localinstall -y drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm --nogpgcheck

[root@node2 ~]# yum localinstall drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm -y --nogpgcheck

2. Load the drbd kernel module:

Node1:

[root@node1 ~]# modprobe drbd

[root@node1 ~]# lsmod |grep drbd

Node2:

[root@node2 ~]# modprobe drbd

[root@node2 ~]# lsmod |grep drbd

3. Edit the configuration files

[root@node1 ~]# cp -p /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/

[root@node1 ~]# cd /etc/drbd.d/

[root@node1 drbd.d]# cp -p global_common.conf global_common.conf.bak

[root@node1 drbd.d]# vim global_common.conf

  1 global {

  2         usage-count yes;

  3         # minor-count dialog-refresh disable-ip-verification

  4 }

  5

  6 common {

  7         protocol C;

  8

  9         handlers {

11                 pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

12                 local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

13         }

14

15         startup {

16                 wfc-timeout 120;

17                 degr-wfc-timeout 100;

18         }

19

20         disk {

21                 on-io-error detach;

22         }

23

24         net {

25                 cram-hmac-alg "sha1";

26                 shared-secret "mydrbd123";

27         }

28

29         syncer {

30                 rate 100M;

31         }

32 }

[root@node1 drbd.d]# vim /etc/drbd.d/mysql.res

  1 resource mysql {

  2 on node1.zzdx.com {

  3 device /dev/drbd0;

  4 disk /dev/sdb1;

  5 address 192.168.1.4:7898;

  6 meta-disk internal;

  7 }

  8 on node2.zzdx.com {

  9 device /dev/drbd0;

10 disk /dev/sdb1;

11 address 192.168.1.5:7898;

12 meta-disk internal;

13 }

14 }

 

Copy the files to node2:

[root@node1 drbd.d]# scp /etc/drbd.conf node2.zzdx.com:/etc/

[root@node1 drbd.d]# scp /etc/drbd.d/* node2.zzdx.com:/etc/drbd.d/
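Before creating the metadata, the resource definition can be syntax-checked on each node; drbdadm only dumps the resource if the configuration parses cleanly, for example:

[root@node1 drbd.d]# drbdadm dump mysql

[root@node2 ~]# drbdadm dump mysql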

4. Validate the configuration and initialize the mysql resource

// Initialize the mysql resource defined above on both node1 and node2
// Validate the configuration (the command below had to be run twice)

Node1:

[root@node1 drbd.d]# drbdadm adjust mysql

  --==  Thank you for participating in the global usage survey  ==--

The server's response is:

0: Failure: (119) No valid meta-data signature found.

==> Use 'drbdadm create-md res' to initialize meta-data area. <==

Command 'drbdsetup 0 disk /dev/sdb1 /dev/sdb1 internal --set-defaults --create-device --on-io-error=detach' terminated with exit code 10

[root@node1 drbd.d]# drbdadm adjust mysql

drbdsetup 0 show:5: delay-probe-volume 0k => 0k out of range [4..1048576]k.

[root@node1 drbd.d]# drbdadm create-md mysql

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

 

Node2:

[root@node2 ~]#  drbdadm create-md mysql

  --==  Thank you for participating in the global usage survey  ==--

The server's response is:

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

[root@node2 ~]#  ll /dev/drbd0

brw-rw---- 1 root root 147, 0 Oct 30 20:17 /dev/drbd0

5. Start the drbd service

Node1:

[root@node1 drbd.d]# service drbd start

Node2:

[root@node2 ~]# service drbd start

6. Check the DRBD status

Node1:

[root@node1 ~]# service drbd status

drbd driver loaded OK; device status:

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:16

m:res    cs         ro                   ds                         p  mounted  fstype

0:mysql  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

[root@node1 ~]# drbd-overview

0:mysql  Connected Secondary/Secondary Inconsistent/Inconsistent C r----

Node2:

[root@node2 ~]# service drbd status

drbd driver loaded OK; device status:

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:16

m:res    cs         ro                   ds                         p  mounted  fstype

0:mysql  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

[root@node2 ~]# drbd-overview

  0:mysql  Connected Secondary/Secondary Inconsistent/Inconsistent C r----

7. Set the DRBD primary node

The output above shows that both nodes are still in the Secondary state, so the next step is to promote one of them to Primary. Here node1 is made the primary node, so run the following command on node1; you can then watch the initial device synchronization take place.

Node1:

[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary mysql

[root@node1 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

Node2:

[root@node2 ~]# drbd-overview

  0:mysql  Connected Secondary/Primary UpToDate/UpToDate C r----

8. Create the file system (this can only be done on the primary node; node1 is the primary here)

[root@node1 ~]# mkfs -t ext3 /dev/drbd0

[root@node1 ~]# mkdir -pv /mnt/mysqldata

[root@node1 ~]# mount /dev/drbd0 /mnt/mysqldata/

[root@node1 ~]# cd /mnt/mysqldata/

[root@node1 mysqldata]# echo "123">f1

[root@node1 mysqldata]# cd

[root@node1 ~]# umount /mnt/mysqldata/

[root@node1 ~]# drbdadm secondary mysql

[root@node1 ~]# drbd-overview

  0:mysql  Connected Secondary/Secondary UpToDate/UpToDate C r----

9. Make node2 the primary node

[root@node2 ~]# mkdir -pv /mnt/mysqldata

[root@node2 ~]# drbdadm primary mysql

[root@node2 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

[root@node2 ~]# mount /dev/drbd0 /mnt/mysqldata/

[root@node2 ~]# cd /mnt/mysqldata/

[root@node2 mysqldata]# ll

total 20

-rw-r--r-- 1 root root     4 Oct 30  2012 f1

drwx------ 2 root root 16384 Oct 30  2012 lost+found

[root@node2 mysqldata]# cd

[root@node2 ~]# umount /mnt/mysqldata/

At this point the DRBD setup is complete.
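If you ever need to follow a full synchronization while it is still in progress, the state can be watched on either node, for example:

[root@node1 ~]# watch -n 1 'cat /proc/drbd'

The ds field changes to UpToDate/UpToDate once the synchronization has finished.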

Part 5: Install and configure MySQL

1. Make node1 the primary node

Node2:

[root@node2 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

[root@node2 ~]# drbdadm secondary mysql

Node1:

[root@node1 ~]#  drbdadm primary mysql

[root@node1 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

[root@node1 ~]# mount /dev/drbd0 /mnt/mysqldata/

2. Install MySQL on node1

[root@node1 ~]# groupadd -r mysql

[root@node1 ~]# useradd -g mysql -r mysql

[root@node1 ~]# mkdir -pv /mnt/mysqldata/data

[root@node1 ~]# chown -R mysql.mysql /mnt/mysqldata/data/

[root@node1 ~]# ll /mnt/mysqldata/

total 24

drwxr-xr-x 2 mysql mysql  4096 Oct 30 21:33 data

-rw-r--r-- 1 root  root      4 Oct 30 21:20 f1

drwx------ 2 root  root  16384 Oct 30 21:19 lost+found

[root@node1 ~]# tar -zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/

[root@node1 ~]# cd /usr/local/

[root@node1 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql

[root@node1 local]# cd mysql

[root@node1 mysql]# chown -R mysql:mysql .

[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mnt/mysqldata/data/

[root@node1 mysql]# chown -R root .

[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf

[root@node1 mysql]# vim /etc/my.cnf

39 thread_concurrency = 2

40 datadir = /mnt/mysqldata/data/

[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld

[root@node1 mysql]# scp /etc/my.cnf node2.zzdx.com:/etc/

[root@node1 mysql]# scp /etc/rc.d/init.d/mysqld node2.zzdx.com:/etc/rc.d/init.d/

[root@node1 mysql]# chkconfig --add mysqld

[root@node1 mysql]# chkconfig mysqld off

[root@node1 mysql]# chkconfig --list mysqld

mysqld          0:off 1:off 2:off 3:off 4:off 5:off 6:off

[root@node1 mysql]# service mysqld start

[root@node1 mysql]# ll /mnt/mysqldata/data/

total 28748

-rw-rw---- 1 mysql mysql  5242880 Oct 30 21:43 ib_logfile0

-rw-rw---- 1 mysql mysql  5242880 Oct 30 21:43 ib_logfile1

-rw-rw---- 1 mysql mysql 18874368 Oct 30 21:43 ibdata1

drwx------ 2 mysql root      4096 Oct 30 21:36 mysql

-rw-rw---- 1 mysql mysql      107 Oct 30 21:43 mysql-bin.000001

-rw-rw---- 1 mysql mysql       19 Oct 30 21:43 mysql-bin.index

-rw-rw---- 1 mysql root      1703 Oct 30 21:43 node1.zzdx.com.err

-rw-rw---- 1 mysql mysql        5 Oct 30 21:43 node1.zzdx.com.pid

drwx------ 2 mysql mysql     4096 Oct 30 21:36 performance_schema

drwx------ 2 mysql root      4096 Oct 30 21:36 test

[root@node1 mysql]# service mysqld stop

To make this MySQL installation follow normal system conventions and expose its development components to the rest of the system, a few more steps are needed. First, export the MySQL man pages to the man search path by adding the following line:

[root@node1 mysql]# vim /etc/man.config

 48 MANPATH /usr/local/mysql/man

Export the MySQL header files to the system include path /usr/include; a symbolic link is enough:

[root@node1 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql

Add the MySQL libraries to the system library search path (any file under /etc/ld.so.conf.d/ with a .conf suffix works), then reload the linker cache:

[root@node1 mysql]# echo '/usr/local/mysql/lib' >> /etc/ld.so.conf.d/mysql.conf

[root@node1 mysql]# ldconfig -v |grep mysql

/usr/local/mysql/lib:

libmysqlclient.so.18 -> libmysqlclient_r.so.18.0.0

Modify the PATH environment variable so that every user can run the MySQL commands directly:

[root@node1 mysql]# vim /etc/profile

 58 PATH=$PATH:/usr/local/mysql/bin

[root@node1 mysql]#  . /etc/profile

[root@node1 mysql]# echo $PATH

/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin:/usr/local/mysql/bin

[root@node1 mysql]# umount /mnt/mysqldata/

3. Make node2 primary and node1 secondary

Node1:

[root@node1 mysql]# drbdadm secondary mysql

[root@node1 mysql]# drbd-overview

  0:mysql  Connected Secondary/Secondary UpToDate/UpToDate C r----

Node2:

[root@node2 ~]# drbdadm primary mysql

[root@node2 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

4. Install MySQL on node2

[root@node2 ~]# groupadd -r mysql

[root@node2 ~]# useradd -g mysql -r mysql

[root@node2 ~]# mount /dev/drbd0 /mnt/mysqldata/

[root@node2 ~]# ll /mnt/mysqldata/

total 24

drwxr-xr-x 5 mysql mysql  4096 Oct 30  2012 data

-rw-r--r-- 1 root  root      4 Oct 30 21:20 f1

drwx------ 2 root  root  16384 Oct 30 21:19 lost+found

[root@node2 ~]# tar -zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/

[root@node2 ~]# cd /usr/local/

[root@node2 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql

[root@node2 local]# cd mysql

Do not initialize the database here, because it was already initialized on node1:

[root@node2 mysql]# chown -R root:mysql .

The MySQL configuration file and the SysV init script were already copied over from node1, so they do not need to be created again.

[root@node2 mysql]# chkconfig --add mysqld

[root@node2 mysql]# chkconfig mysqld off

[root@node2 mysql]# chkconfig --list mysqld

mysqld          0:off 1:off 2:off 3:off 4:off 5:off 6:off

[root@node2 mysql]# service mysqld start

Starting MySQL....                                         [  OK  ]

[root@node2 mysql]# ll /mnt/mysqldata/data/

total 28756

-rw-rw---- 1 mysql mysql  5242880 Oct 30 21:45 ib_logfile0

-rw-rw---- 1 mysql mysql  5242880 Oct 30 21:43 ib_logfile1

-rw-rw---- 1 mysql mysql 18874368 Oct 30 21:44 ibdata1

drwx------ 2 mysql root      4096 Oct 30 21:36 mysql

-rw-rw---- 1 mysql mysql      126 Oct 30 21:44 mysql-bin.000001

-rw-rw---- 1 mysql mysql      107 Oct 30 21:45 mysql-bin.000002

-rw-rw---- 1 mysql mysql       38 Oct 30 21:45 mysql-bin.index

-rw-rw---- 1 mysql root      2125 Oct 30 21:44 node1.zzdx.com.err

-rw-rw---- 1 mysql root       941 Oct 30 21:45 node2.zzdx.com.err

-rw-rw---- 1 mysql mysql        5 Oct 30 21:45 node2.zzdx.com.pid

drwx------ 2 mysql mysql     4096 Oct 30 21:36 performance_schema

drwx------ 2 mysql root      4096 Oct 30 21:36 test

[root@node2 mysql]# service mysqld stop      # stop MySQL after testing

As on node1, to make the MySQL installation follow system conventions and expose its development components, export the MySQL man pages to the man search path by adding the following line:

[root@node2 mysql]# vim /etc/man.config

 48 MANPATH /usr/local/mysql/man

Export the MySQL header files to the system include path /usr/include via a symbolic link:

[root@node2 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql

Add the MySQL libraries to the system library search path, then reload the linker cache:

[root@node2 mysql]# echo '/usr/local/mysql/lib' >> /etc/ld.so.conf.d/mysql.conf

[root@node2 mysql]# ldconfig -v |grep mysql

/usr/local/mysql/lib:

libmysqlclient.so.18 -> libmysqlclient_r.so.18.0.0

Modify the PATH environment variable so that every user can run the MySQL commands directly:

[root@node2 mysql]#  vim /etc/profile

 59 PATH=$PATH:/usr/local/mysql/bin

[root@node2 mysql]#  . /etc/profile

[root@node2 mysql]# echo $PATH

/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin

[root@node2 mysql]# umount /mnt/mysqldata/
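With PATH updated on both nodes, a quick check that the MySQL client from /usr/local/mysql/bin is being picked up might look like this:

[root@node1 ~]# mysql -V

[root@node2 ~]# mysql -V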

Part 6: Install and configure corosync + pacemaker

1. Install the packages

Node1:

[root@node1 mysql]# cd

[root@node1 ~]# yum install -y *.rpm --nogpgcheck

Node2:

[root@node2 mysql]# cd

[root@node2 ~]# yum install -y *.rpm --nogpgcheck

2. Configure node1 and node2

[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# cp corosync.conf.example corosync.conf   # create the configuration file from the example

[root@node1 corosync]# vim corosync.conf

10                 bindnetaddr: 192.168.1.0

33 service {

34 ver: 0

35 name: pacemaker

36 use_mgmtd: yes

37 }

38 aisexec {

39 user: root

40 group: root

41 }

[root@node1 corosync]# mkdir -pv /var/log/cluster

[root@node1 corosync]# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Writing corosync key to /etc/corosync/authkey.

Copy the files to node2 (be sure to use -p so permissions are preserved):

[root@node1 corosync]# scp -p authkey corosync.conf node2.zzdx.com:/etc/corosync/

[root@node1 corosync]# ssh node2.zzdx.com  'mkdir -pv /var/log/cluster'

3. Verify the setup on node1 and node2

Node1:

1: Start the corosync service on node1 and node2

[root@node1 corosync]# service corosync start

Starting Corosync Cluster Engine (corosync):               [  OK  ]

[root@node2 corosync]# service corosync start

Starting Corosync Cluster Engine (corosync):               [  OK  ]

2: On node1, verify that the corosync engine started correctly

[root@node1 corosync]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Oct 30 23:37:33 node1 corosync[1317]:   [MAIN  ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

Oct 30 23:37:33 node1 corosync[1317]:   [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'

3: On node1, check that the initial membership notifications went out

[root@node1 corosync]# grep -i totem /var/log/messages

Oct 30 23:37:33 node1 corosync[1317]:   [TOTEM ] Initializing transport (UDP/IP).

Oct 30 23:37:33 node1 corosync[1317]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Oct 30 23:37:34 node1 corosync[1317]:   [TOTEM ] The network interface [192.168.1.4] is now up.

Oct 30 23:37:34 node1 corosync[1317]:   [TOTEM ] Process pause detected for 524 ms, flushing membership messages.

Oct 30 23:37:34 node1 corosync[1317]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.

Oct 30 23:38:40 node1 corosync[1317]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.

4: On node1, check whether any errors occurred during startup (STONITH-related errors can be ignored)

[root@node1 corosync]#  grep -i error: /var/log/messages |grep -v unpack_resources

One error appears:
Feb 7 22:51:43 node1 corosync[5149]: [pcmk ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=5161, rc=100)
Resolution:
Reading /var/log/messages carefully, or checking with crm_verify -L, shows there is no need to uninstall and reinstall anything. This error is caused by the absence of a STONITH device and does not affect corosync itself, so it can be ignored.

[root@node1 corosync]# crm_verify -L

crm_verify[1359]: 2012/10/30_23:43:28 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined

crm_verify[1359]: 2012/10/30_23:43:28 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option

crm_verify[1359]: 2012/10/30_23:43:28 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Errors found during check: config not valid

  -V may provide more details

5: On node1, check whether pacemaker has started (the output below shows a normal start)

[root@node1 corosync]# grep -i pcmk_startup /var/log/messages

Oct 30 23:37:34 node1 corosync[1317]:   [pcmk  ] info: pcmk_startup: CRM: Initialized

Oct 30 23:37:34 node1 corosync[1317]:   [pcmk  ] Logging: Initialized pcmk_startup

Oct 30 23:37:34 node1 corosync[1317]:   [pcmk  ] info: pcmk_startup: Maximum core file size is: 4294967295

Oct 30 23:37:34 node1 corosync[1317]:   [pcmk  ] info: pcmk_startup: Service: 9

Oct 30 23:37:34 node1 corosync[1317]:   [pcmk  ] info: pcmk_startup: Local hostname: node1.zzdx.com

 

Node2:

1: On node2, verify that the corosync engine started correctly

[root@node2 corosync]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Oct 30 23:27:32 node2 corosync[1242]:   [MAIN  ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

Oct 30 23:27:32 node2 corosync[1242]:   [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'

2: On node2, check that the initial membership notifications went out

[root@node2 corosync]#  grep -i totem /var/log/messages

Oct 30 23:27:32 node2 corosync[1242]:   [TOTEM ] Initializing transport (UDP/IP).

Oct 30 23:27:32 node2 corosync[1242]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Oct 30 23:27:32 node2 corosync[1242]:   [TOTEM ] The network interface [192.168.1.5] is now up.

Oct 30 23:27:33 node2 corosync[1242]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.

Oct 30 23:27:33 node2 corosync[1242]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.

3: On node2, check whether any errors occurred during startup (only the STONITH-related error appears, which can be ignored)

[root@node2 corosync]# grep -i error: /var/log/messages |grep -v unpack_resources

Oct 30 23:27:33 node2 corosync[1242]:   [pcmk  ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=1254, rc=100

4: On node2, check whether pacemaker has started

[root@node2 corosync]# grep -i pcmk_startup /var/log/messages

Oct 30 23:27:32 node2 corosync[1242]:   [pcmk  ] info: pcmk_startup: CRM: Initialized

Oct 30 23:27:32 node2 corosync[1242]:   [pcmk  ] Logging: Initialized pcmk_startup

Oct 30 23:27:32 node2 corosync[1242]:   [pcmk  ] info: pcmk_startup: Maximum core file size is: 4294967295

Oct 30 23:27:32 node2 corosync[1242]:   [pcmk  ] info: pcmk_startup: Service: 9

Oct 30 23:27:32 node2 corosync[1242]:   [pcmk  ] info: pcmk_startup: Local hostname: node2.zzdx.com

5. Check the cluster status on node1 and node2

Node1:

[root@node1 corosync]#  crm status

============

Last updated: Tue Oct 30 23:49:33 2012

Stack: openais

Current DC: node1.zzdx.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

0 Resources configured.

============

Online: [ node1.zzdx.com node2.zzdx.com ]

 

Node2:

[root@node2 corosync]# crm status

============

Last updated: Tue Oct 30 23:38:57 2012

Stack: openais

Current DC: node1.zzdx.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

0 Resources configured.

============

Online: [ node1.zzdx.com node2.zzdx.com ]

Part 7: Cluster management

1. Configure the cluster's global properties

corosync enables STONITH by default, but this cluster has no STONITH device, so the default configuration cannot be used as-is. Disable STONITH first with the following command:

Node1:

[root@node1 corosync]# cd

[root@node1 ~]# crm configure property stonith-enabled=false

Node2:

[root@node2 corosync]# cd

[root@node2 ~]# crm configure property stonith-enabled=false

For a two-node cluster, configure the cluster to ignore quorum; the vote count then no longer matters and a single node can keep running resources:

Node1:

[root@node1 ~]# crm configure property no-quorum-policy=ignore

Node2:

[root@node2 ~]# crm configure property no-quorum-policy=ignore

Define a resource stickiness value so that resources do not bounce between nodes unnecessarily, which wastes system resources.

Stickiness value ranges and their meaning:

0: the default. The resource is placed in the most suitable position in the system, which means it is moved when a "better" or less loaded node becomes available. This is roughly equivalent to automatic failback, except that the resource may move to a node other than the one it was previously active on.

Greater than 0: the resource prefers to stay where it is, but will move if a more suitable node becomes available. The higher the value, the stronger the preference to stay.

Less than 0: the resource prefers to move away from its current location. The larger the absolute value, the stronger the preference to leave.

INFINITY: unless the resource is forced to move because the node can no longer run it (node shutdown, node standby, migration-threshold reached, or a configuration change), it always stays where it is. This is almost equivalent to disabling automatic failback entirely.

-INFINITY: the resource always moves away from its current location.

Here a default stickiness value is set for all resources as follows:

Node1:

[root@node1 ~]# crm configure rsc_defaults resource-stickiness=100

Node2:

[root@node2 ~]# crm configure rsc_defaults resource-stickiness=100
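The three properties set so far can be confirmed from either node, for example:

[root@node1 ~]# crm configure show | grep -E 'stonith-enabled|no-quorum-policy|resource-stickiness'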

2. Check the DRBD status and make sure node1 is the primary

Node1:

[root@node1 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

Node2:

[root@node2 ~]# drbd-overview

  0:mysql  Connected Secondary/Primary UpToDate/UpToDate C r----

3. Configure the DRBD cluster resource

1. Review the current cluster configuration and confirm that the global properties required for a two-node cluster are in place

[root@node1 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

[root@node1 ~]# crm configure show

node node1.zzdx.com

node node2.zzdx.com

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false" \

no-quorum-policy="ignore"

rsc_defaults $id="rsc-options" \

resource-stickiness="100"

2. Define DRBD as a cluster resource

[root@node1 ~]# service drbd stop  

Stopping all DRBD resources: .

[root@node1 ~]# chkconfig drbd off

[root@node1 ~]# ssh node2.zzdx.com 'service drbd stop'

Stopping all DRBD resources: .

[root@node1 ~]# ssh node2.zzdx.com  'chkconfig drbd off'

[root@node1 ~]# drbd-overview

drbd not loaded

[root@node1 ~]# ssh node2.zzdx.com 'drbd-overview'

drbd not loaded

The resource agent for DRBD is provided under the OCF linbit provider, at /usr/lib/ocf/resource.d/linbit/drbd. The following commands list the RA classes and the RA's metadata:

[root@node1 ~]# crm ra classes

heartbeat

lsb

ocf / heartbeat linbit pacemaker

stonith

[root@node1 ~]# crm ra list ocf linbit

drbd

View the details of the drbd resource agent:

[root@node1 ~]# crm ra info ocf:linbit:drbd

This resource agent manages a DRBD resource

as a master/slave resource. DRBD is a shared-nothing replicated storage

device. (ocf:linbit:drbd)

Master/Slave OCF Resource Agent for DRBD

Parameters (* denotes required, [] the default):

drbd_resource* (string): drbd resource name

    The name of the drbd resource from the drbd.conf file.

drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf

    Full path to the drbd.conf file.

Operations' defaults (advisory minimum):

    start         timeout=240

    promote       timeout=90

    demote        timeout=90

    notify        timeout=90

    stop          timeout=100

    monitor_Slave interval=20 timeout=20 start-delay=1m

monitor_Master interval=10 timeout=20 start-delay=1m

DRBD must run on both nodes at the same time, but only one node can be the Master (in the primary/secondary model) while the other is the Slave. It is therefore a rather special cluster resource: a multi-state clone, in which nodes are divided into Master and Slave roles, and both nodes must be in the Slave state when the service first starts.

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure#  primitive mysqldrbd ocf:heartbeat:drbd params drbd_resource="mysql" op monitor role="Master" interval="30s"  op monitor role="Slave" interval="31s" op start timeout="240s" op stop timeout="100s"

crm(live)configure# ms MS_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify="true"

crm(live)configure#  verify

crm(live)configure# commit

crm(live)configure# exit

Bye

View the current cluster status:

[root@node1 ~]# crm status

============

Last updated: Wed Oct 31 00:21:56 2012

Stack: openais

Current DC: node1.zzdx.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

1 Resources configured.

============

Online: [ node1.zzdx.com node2.zzdx.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

     Masters: [ node1.zzdx.com ]

     Slaves: [ node2.zzdx.com ]

The output above shows that node1.zzdx.com is now the Primary node for the drbd service and node2.zzdx.com is the Secondary. You can also verify on node1 that it has become the Primary node for the mysql resource:

[root@node1 ~]# drbdadm role mysql

Primary/Secondary

[root@node1 ~]# drbd-overview

  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r----

Next, the DRBD device is set to be mounted automatically at /mnt/mysqldata. This filesystem resource must run on the DRBD Master node and can only start after DRBD has promoted that node to Primary.

Make sure the device is unmounted on both nodes:

[root@node1 ~]# umount /dev/drbd0

umount: /dev/drbd0: not mounted

[root@node2 ~]# umount /dev/drbd0

umount: /dev/drbd0: not mounted

Continue on node1:

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# primitive MysqlFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mnt/mysqldata" fstype="ext3"  op start timeout=60s op stop timeout=60s

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# exit

Bye

4. Define the VIP and MySQL resources

[root@node1 ~]# crm configure primitive myip ocf:heartbeat:IPaddr params ip=192.168.1.6

[root@node1 ~]#  crm configure primitive mysqlserver lsb:mysqld

5. Configure resource constraints

The cluster now has all the required resources, but it may still not handle them correctly. Resource constraints specify on which cluster nodes resources may run, in what order they are started, and which other resources a particular resource depends on. Pacemaker provides three kinds of constraints:

1) Resource Location: defines on which nodes a resource may, may not, or preferably should run.

2) Resource Colocation: defines which cluster resources may or may not run together on the same node.

3) Resource Order: defines the order in which cluster resources are started on a node.

When defining constraints you also assign scores. Scores of all kinds are central to how the cluster works: everything from migrating resources to deciding which resources to stop in a degraded cluster is done by manipulating scores in some way. Scores are calculated per resource, and any node with a negative score for a resource cannot run that resource; once the scores have been calculated, the cluster places the resource on the node with the highest score. INFINITY is currently defined as 1,000,000, and adding or subtracting infinity follows three basic rules:

1) any value + INFINITY = INFINITY

2) any value - INFINITY = -INFINITY

3) INFINITY - INFINITY = -INFINITY

When defining resource constraints you can also specify a score for each constraint. The score represents the weight assigned to that constraint; constraints with higher scores are applied before constraints with lower scores. By creating additional location constraints with different scores for a given resource, you can control the order of nodes the resource will fail over to.
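As an illustration of how a score is attached to a constraint, a location constraint with a finite score might look like the following; this is only a hypothetical example of the syntax and is not used in the configuration built here:

crm(live)configure# location mysql_prefers_node1 MysqlFS 50: node1.zzdx.com

A score of 50 only expresses a preference for node1.zzdx.com, whereas inf: would make the placement mandatory and -inf: would forbid it.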

The constraints needed for this setup are defined as follows:

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# show

crm(live)configure# colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver

crm(live)configure# order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start

crm(live)configure# order myip_after_MysqlFS mandatory: MysqlFS myip

crm(live)configure# order mysqlserver_after_myip  mandatory: myip mysqlserver

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# exit

Bye

6. Review the configuration and status

[root@node1 ~]# crm configure show

node node1.zzdx.com

node node2.zzdx.com

primitive MysqlFS ocf:heartbeat:Filesystem \

params device="/dev/drbd0" directory="/mnt/mysqldata" fstype="ext3" \

op start interval="0" timeout="60s" \

op stop interval="0" timeout="60s"

primitive myip ocf:heartbeat:IPaddr \

params ip="192.168.1.6"

primitive mysqldrbd ocf:heartbeat:drbd \

params drbd_resource="mysql" \

op monitor interval="30s" role="Master" \

op monitor interval="31s" role="Slave" \

op start interval="0" timeout="240s" \

op stop interval="0" timeout="100s"

primitive mysqlserver lsb:mysqld

ms MS_mysqldrbd mysqldrbd \

meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver

order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start

order myip_after_MysqlFS inf: MysqlFS myip

order mysqlserver_after_myip inf: myip mysqlserver

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false" \

no-quorum-policy="ignore"

rsc_defaults $id="rsc-options" \

resource-stickiness="100"

 

[root@node1 ~]# crm status

============

Last updated: Wed Oct 31 00:44:42 2012

Stack: openais

Current DC: node2.zzdx.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ node1.zzdx.com node2.zzdx.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

 Masters: [ node1.zzdx.com ]

     Slaves: [ node2.zzdx.com ]

MysqlFS (ocf::heartbeat:Filesystem): Started node1.zzdx.com

myip (ocf::heartbeat:IPaddr): Started node1.zzdx.com

mysqlserver (lsb:mysqld): Started node1.zzdx.com

As shown above, the services are running normally on node1.

 

7. Verify the running services

Node1:

[root@node1 ~]# service mysqld status

MySQL running (8773)                                       [  OK  ]

 

[root@node1 ~]# ifconfig |less

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:08:30:09 

          inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          Interrupt:67 Base address:0x2000

 

[root@node1 ~]# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

/dev/hdc on /mnt/cdrom type iso9660 (ro)

/dev/drbd0 on /mnt/mysqldata type ext3 (rw)

 

Node2:

[root@node2 ~]# service mysqld status

MySQL is not running                                       [FAILED]

 

[root@node2 ~]# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

/dev/hdc on /mnt/cdrom type iso9660 (ro)

8. Test failover

On node1, put node1 into standby:

[root@node1 ~]# crm node standby

[root@node1 ~]# crm status

============

Last updated: Wed Oct 31 00:57:58 2012

Stack: openais

Current DC: node1.zzdx.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Node node1.zzdx.com: standby

Online: [ node2.zzdx.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

    Masters: [ node2.zzdx.com ]

     Stopped: [ mysqldrbd:0 ]

MysqlFS (ocf::heartbeat:Filesystem): Started node2.zzdx.com

myip (ocf::heartbeat:IPaddr): Started node2.zzdx.com

mysqlserver (lsb:mysqld): Started node2.zzdx.com

Check the services on node2:

[root@node2 ~]# service mysqld status

MySQL running (7952)                                       [  OK  ]

[root@node2 ~]# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

/dev/hdc on /mnt/cdrom type iso9660 (ro)

/dev/drbd0 on /mnt/mysqldata type ext3 (rw)

 

[root@node2 ~]# ifconfig |less

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:E8:F5:BD 

          inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          Interrupt:67 Base address:0x2000

Everything works, so now verify that the MySQL service can be accessed:

First, create a user user1 with password 123456 on node2.

MySQL is accessed through the VIP 192.168.1.6, so create an account on node2 that hosts on the 192.168.x.x network can use (because the data directory lives on the DRBD device, this account is replicated to node1):

[root@node2 ~]# mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 1

Server version: 5.5.15-log MySQL Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all on *.* to user1@'192.168.%.%' identified by '123456';

Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)

mysql> quit;

Bye
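Before moving to a separate client machine, the new account can be exercised directly from one of the nodes through the VIP; assuming the mysql client is in the PATH as configured earlier, something like this should succeed:

[root@node2 ~]# mysql -u user1 -p -h 192.168.1.6 -e 'show databases;'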

Test from a client machine:

From the client host 192.168.1.66, ping the VIP 192.168.1.6:

(screenshot of the ping test omitted)

Then access the MySQL database from the client 192.168.1.66 through the VIP 192.168.1.6.

First install the MySQL client tools on the client machine:

[root@node1 ~]# mkdir /mnt/cdrom/

[root@node1 ~]# mount /dev/cdrom /mnt/cdrom/

[root@node1 ~]# cd /mnt/cdrom/Server/

[root@node1 Server]# vim /etc/yum.repos.d/rhel-debuginfo.repo

  1 [rhel-server]

  2 name=Red Hat Enterprise Linux server

  3 baseurl=file:///mnt/cdrom/Server/

  4 enabled=1

  5 gpgcheck=0

  6 gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[root@node1 Server]# yum install mysql  -y

Log in to MySQL:

[root@node1 Server]# mysql -u user1 -p -h 192.168.1.6  

(screenshot of the successful MySQL login omitted)
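After the failover test, node1 can be brought back into the cluster. Since resource-stickiness is set to 100, the resources are expected to stay on node2 rather than failing back automatically; something like the following ends the standby state and shows the result:

[root@node1 ~]# crm node online

[root@node1 ~]# crm status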










This article was originally published by liuyatao666 on the 51CTO blog: http://blog.51cto.com/5503845/1048455. Please contact the original author before republishing.