V7 keepalived + [nginx, haproxy]

Introduction:

Environment:

[root@node1 ~]# uname -a

Linux node1.magedu.com 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux

Preparation:

VIP: 192.168.41.222

node1 (nginx|haproxy master): 192.168.41.133, with nginx|haproxy and keepalived installed

node2 (nginx|haproxy backup): 192.168.41.134, with nginx|haproxy and keepalived installed

node3 (backend RS1): 192.168.41.135, with httpd installed

node4 (backend RS2): 192.168.41.136, with httpd installed

Note: node{1,2} form the HA pair; configure passwordless SSH trust between them and keep their time synchronized.

 

1. keepalived + nginx (for installing nginx, see 《第三阶段(十五)理解LNMP》):

node{1,2}-side

[root@node1 ~]# yum -y groupinstall "Desktop Platform" "Desktop Platform Development" "Server Platform Development" "Development tools" "Compatibility libraries"    (install these development groups and compatibility libraries up front, so compilation does not stall on a missing library)

[root@node1 ~]# tar xf keepalived-1.2.19.tar.gz

[root@node1 ~]# cd keepalived-1.2.19

[root@node1 keepalived-1.2.19]# ./configure --help

[root@node1 keepalived-1.2.19]# ./configure --prefix=/usr/local/keepalived

[root@node1 keepalived-1.2.19]# make && make install

[root@node1 keepalived-1.2.19]# cd /usr/local/keepalived/

[root@node1 keepalived]# ls

bin etc  sbin  share

[root@node1 keepalived]# cp bin/genhash /bin

[root@node1 keepalived]# cp etc/rc.d/init.d/keepalived /etc/rc.d/init.d/

[root@node1 keepalived]# cp etc/sysconfig/keepalived /etc/sysconfig/

[root@node1 keepalived]# mkdir /etc/keepalived

[root@node1 keepalived]# cp -r etc/keepalived/* /etc/keepalived/

[root@node1 keepalived]# cp sbin/keepalived /sbin/

[root@node1 keepalived]# cd

[root@node1 ~]# vim /etc/man.config    (add the following line so that #man keepalived.conf works directly; otherwise the man path must be given: #man -M /usr/local/keepalived/share/man keepalived.conf)

MANPATH /usr/local/keepalived/share/man

[root@node1 ~]# . !$

Note: at build time you can instead use #./configure --sysconfdir=/etc --bindir=/bin --sbindir=/sbin --mandir=/usr/share/man --prefix=/usr/local/keepalived, which saves the copy steps above after installation.

[root@node1 ~]# chkconfig --add keepalived

[root@node1 ~]# chkconfig keepalived on

[root@node1 ~]# chkconfig --list keepalived

keepalived           0:off 1:off 2:on 3:on 4:on 5:on 6:off

 

node1-side

[root@node1 ~]# cd /etc/keepalived/

[root@node1 keepalived]# cp keepalived.conf keepalived.conf.bak

[root@node1 keepalived]# vim keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

       root@localhost

   }

  notification_email_from keepalived@localhost

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_instance VI_1 {

    state MASTER

    interface eth0

    virtual_router_id 51

    mcast_src_ip 192.168.41.133

    priority 100

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

       192.168.41.222/32 dev eth0 label eth0:0

    }

}

[root@node1 keepalived]# scp keepalived.conf node2:/etc/keepalived/

keepalived.conf                   100%  485    0.5KB/s   00:00 

 

node2-side

[root@node2 keepalived]# vim keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

       root@localhost

   }

  notification_email_from keepalived@localhost

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_instance VI_1 {

    state BACKUP

    interface eth0

    virtual_router_id 51

    mcast_src_ip 192.168.41.134

    priority 99

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

       192.168.41.222/32 dev eth0 label eth0:0

    }

}

Note: mcast_src_ip <IPADDR> binds the source address this node uses for VRRP multicast. From the documentation: the default IP for binding vrrpd is the primary IP on the interface. If you want to hide the location of vrrpd, use this IP as src_addr for multicast or unicast vrrp packets (since it's multicast, vrrpd will get the reply packet no matter what src_addr is used).
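If multicast is filtered on the network, keepalived also supports unicast VRRP (available since 1.2.8, so the 1.2.19 build used here has it). A fragment for node1 using this lab's addresses; mirror it on node2 with the two IPs swapped:

```
vrrp_instance VI_1 {
    # ... existing instance settings unchanged ...
    unicast_src_ip 192.168.41.133
    unicast_peer {
        192.168.41.134
    }
}
```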

 

node{1,2}-side

[root@node1 ~]# vim /etc/nginx/nginx.conf

http {

   include       mime.types;

    default_type  application/octet-stream;

   sendfile        on;

   keepalive_timeout  65;

    upstream websrvs {

        server 192.168.41.135 weight=1 max_fails=2 fail_timeout=2;

        server 192.168.41.136 weight=1 max_fails=2 fail_timeout=2;

        server 127.0.0.1:8080 backup;

}

   server {

       listen       80;

       server_name  localhost;

       location / {

                proxy_pass http://websrvs;

                proxy_set_header X-Real-IP $remote_addr;

       }

       error_page   500 502 503 504  /50x.html;

       location = /50x.html {

           root   html;

       }

}

   server {

        listen 8080;

       server_name localhost;

       location / {

                root /web/errorpages;

                index index.html;

       }

}

}
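The backup line above means 127.0.0.1:8080 (the local sorry-server) only receives traffic once every primary upstream is considered failed. A minimal sketch of that selection rule in plain shell -- not nginx internals; the failed list is a stand-in for nginx's max_fails bookkeeping:

```shell
#!/bin/sh
# Sketch of nginx's "backup" selection rule. The `failed` list simulates
# servers whose max_fails limit was reached.
servers="192.168.41.135 192.168.41.136"
backup="127.0.0.1:8080"
failed="192.168.41.135 192.168.41.136"   # pretend max_fails was hit on both

pick_server() {
  for s in $servers; do
    case " $failed " in
      *" $s "*) continue ;;              # skip servers marked failed
    esac
    echo "$s"
    return
  done
  echo "$backup"                         # every primary is down: backup serves
}

pick_server
```

With failed cleared, pick_server returns the first healthy primary again, which is roughly what happens when fail_timeout expires.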

[root@node1 ~]# cat /web/errorpages/index.html

Sorry,the server is maintaining

 

node{3,4}-side

[root@node3 ~]# service httpd status

httpd (pid 2332) is running...

[root@node4 ~]# service httpd status

httpd (pid 12734) is running...

 

node-side

[root@node1 ~]# elinks -dump http://192.168.41.135

  RS1.magedu.com

[root@node1 ~]# elinks -dump http://192.168.41.136

  RS2.magedu.com

[root@node1 ~]# service nginx start

Starting nginx:                                            [  OK  ]

[root@node1 ~]# service keepalived start

Starting keepalived:                                       [  OK  ]

 

node2-side

[root@node2 ~]# elinks -dump http://192.168.41.136

  RS2.magedu.com

[root@node2 ~]# elinks -dump http://192.168.41.135

  RS1.magedu.com

[root@node2 ~]# service nginx start

Starting nginx:                                            [  OK  ]

[root@node2 ~]# service keepalived start

Starting keepalived:                                       [  OK  ]

 

Testing:

(screenshot)

After refreshing again:

(screenshot)

[root@node1 ~]# ifconfig eth0:0    (this shows node1 is currently the active node)

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:E2:18:0E

         inet addr:192.168.41.222 Bcast:0.0.0.0  Mask:255.255.255.255

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

[root@node2 ~]# ifconfig eth0:0

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:CC:D9:CD

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

[root@node3 ~]# service httpd stop

Stopping httpd:                                            [  OK  ]

[root@node4 ~]# service httpd stop

Stopping httpd:                                            [  OK  ]

Refresh the page again:

(screenshot)

[root@node3 ~]# service httpd start

Starting httpd:                                            [  OK  ]

[root@node4 ~]# service httpd start

Starting httpd:                                            [  OK  ]

[root@node1 ~]# service keepalived stop

Stopping keepalived:                                       [  OK  ]

Accessing the site again works normally.

[root@node1 ~]# ifconfig eth0:0

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:E2:18:0E

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

[root@node2 ~]# ifconfig eth0:0    (with keepalived stopped on node1, the VIP has moved to node2, which keeps serving normally)

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:CC:D9:CD

         inet addr:192.168.41.222  Bcast:0.0.0.0 Mask:255.255.255.255

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

So far, failover happens only when the node1/node2 host itself fails (or its network, or the keepalived service); a failure of the nginx service alone is not detected. The configuration is therefore extended:

[root@node1 ~]# service keepalived start

Starting keepalived:                                       [  OK  ]

 

node{1,2}-side

[root@node1 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

       root@localhost

   }

  notification_email_from keepalived@localhost

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_script nginx_check {

    script "[[ `ps -C nginx --no-header | wc -l` -eq 0 ]] && exit 1 || exit 0"

    interval 1

    weight -5

    fall 2

    rise 1

}

vrrp_instance VI_1 {

   state MASTER

   interface eth0

   virtual_router_id 51

   mcast_src_ip 192.168.41.133

   priority 100

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

       192.168.41.222/32 dev eth0 label eth0:0

    }

    track_script {

        nginx_check

    }

}

With this in place, when the nginx service on node1 (MASTER) dies or fails, keepalived detects it: after two failed checks (fall 2) the weight of -5 drops node1's priority from 100 to 95, below node2's 99, so the VIP automatically moves to node2 (BACKUP).
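The interplay of the check script and the weight can be sketched in plain shell. fake_ps is an assumption standing in for ps -C nginx --no-header; the priorities are the ones configured above:

```shell
#!/bin/sh
# Sketch of the two mechanisms above. fake_ps is a stand-in for
# `ps -C nginx --no-header`; the priorities match the configuration.

# 1) The health check: zero matching process lines means failure (exit 1).
nginx_check() {
  [ "$(fake_ps | wc -l)" -eq 0 ] && return 1 || return 0
}

fake_ps() { :; }                               # simulate: nginx is down
if nginx_check; then echo "check passed"; else echo "check failed (nginx down)"; fi

fake_ps() { echo "1234 ?  00:00:00 nginx"; }   # simulate: nginx is up
if nginx_check; then echo "check passed (nginx up)"; else echo "check failed"; fi

# 2) The failover arithmetic: after `fall 2` consecutive failures the
#    MASTER's priority drops by the script weight (-5).
master=100; weight=-5; backup=99
effective=$((master + weight))
if [ "$effective" -lt "$backup" ]; then
  echo "MASTER at $effective < BACKUP at $backup => BACKUP takes the VIP"
fi
```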

 

2. keepalived + haproxy:

node{1,2}-side

[root@node1 ~]# vim /etc/haproxy/haproxy.cfg

global

   log         127.0.0.1 local2

    chroot     /var/lib/haproxy

   pidfile     /var/run/haproxy.pid

   maxconn     4000

   user        haproxy

   group       haproxy

   daemon

   stats socket /var/lib/haproxy/stats

defaults

   mode                    http

   log                     global

   option                  httplog

   option                 dontlognull

   option http-server-close

   option forwardfor       except 127.0.0.0/8

   option                  redispatch

   retries                 3

   timeout http-request    10s

   timeout queue           1m

   timeout connect         10s

   timeout client          1m

   timeout server          1m

   timeout http-keep-alive 10s

   timeout check           10s

   maxconn                 3000

listen stats

   mode http

   bind 0.0.0.0:1080

   stats enable

   stats hide-version

   stats uri     /haproxyadmin?stats

   stats realm   Haproxy\ Statistics

   stats auth    admin:admin

   stats admin if TRUE

frontend http-in

   bind *:80

   mode http

   log global

   option httpclose

   option logasap

   option dontlognull

   capture request  header Host len 20

   capture request  header Referer len 60

   default_backend servers

backend servers

         balance roundrobin

   server websrv1 192.168.41.135:80 check maxconn 2000

   server websrv2 192.168.41.136:80 check maxconn 2000

 

node1-side

[root@node1 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

         root@localhost

   }

   notification_email_from keepalived@localhost

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_script haproxy_check {

         script "/etc/keepalived/haproxy_check.sh"

         interval 1

         weight 5

}

vrrp_instance VI_1 {

    state MASTER

   interface eth0

   virtual_router_id 51

    mcast_src_ip 192.168.41.133

    priority 100

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

         192.168.41.222/32 dev eth0 label eth0:0

    }

         track_script {

                   haproxy_check

         }

}

 

node2-side

[root@node2 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

  notification_email {

         root@localhost

   }

  notification_email_from keepalived@localhost

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

vrrp_script haproxy_check {

         script "/etc/keepalived/haproxy_check.sh"

         interval 1

         weight 5

}

vrrp_instance VI_1 {

    state BACKUP

   interface eth0

   virtual_router_id 51

    mcast_src_ip 192.168.41.134

    priority 99

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

         192.168.41.222/32 dev eth0 label eth0:0

    }

         track_script {

                   haproxy_check

         }

}

 

node{1,2}-side

[root@node1 ~]# vim /etc/keepalived/haproxy_check.sh

#!/bin/bash

#

if [ `ps -C haproxy --no-header | wc -l` -eq 0 ] ; then

       # haproxy is down: try to restart it first

       service haproxy start

fi

if [ `ps -C haproxy --no-header | wc -l` -eq 0 ] ; then

       # still down after the restart attempt: stop keepalived so the backup node takes over the VIP

       service keepalived stop

fi

[root@node1 ~]# chmod +x !$

[root@node1 ~]# scp /etc/keepalived/haproxy_check.sh node2:/etc/keepalived/
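The script's decision logic can be exercised without touching real daemons by stubbing service and the process count -- everything below is a stand-in, not the production commands:

```shell
#!/bin/sh
# Dry-run of haproxy_check.sh's logic with stubbed commands; `service`
# and `proc_count` are stand-ins for the real system.
keepalived_state="running"

service() {                         # stub: record the action instead of running it
  case "$1 $2" in
    "haproxy start")   echo "tried: service haproxy start" ;;
    "keepalived stop") keepalived_state="stopped" ;;
  esac
}
proc_count() { echo 0; }            # pretend haproxy is down and stays down

# the same two steps as haproxy_check.sh
if [ "$(proc_count)" -eq 0 ]; then service haproxy start; fi
if [ "$(proc_count)" -eq 0 ]; then service keepalived stop; fi

echo "keepalived: $keepalived_state"   # stopped => the BACKUP node takes the VIP
```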

 

[root@node1 ~]# service haproxy start

Starting haproxy:                                          [  OK  ]

[root@node1 ~]# service keepalived start

Starting keepalived:                                       [  OK  ]

[root@node2 ~]# service haproxy start

Starting haproxy:                                          [  OK  ]

[root@node2 ~]# service keepalived start

Starting keepalived:                                       [  OK  ]

 

(screenshots)

[root@node1 ~]# ifconfig eth0:0    (the active node is currently node1)

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:E2:18:0E

         inet addr:192.168.41.222 Bcast:0.0.0.0 Mask:255.255.255.255

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

[root@node1 ~]# service haproxy stop    (after haproxy is stopped, haproxy_check.sh restarts it; only if it still failed to start would the script stop keepalived on the active node, triggering a failover)

Stopping haproxy:                                          [  OK  ]

[root@node1 ~]# service haproxy status

haproxy (pid  43028) is running...

 

[root@node1 ~]# service keepalived stop    (simulate a keepalived failure; the VIP switches to node2)

Stopping keepalived:                                       [  OK  ]

[root@node1 ~]# ifconfig eth0:0

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:E2:18:0E

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

[root@node2 keepalived]# cd

[root@node2 ~]# ifconfig eth0:0

eth0:0   Link encap:Ethernet  HWaddr 00:0C:29:CC:D9:CD

         inet addr:192.168.41.222 Bcast:0.0.0.0 Mask:255.255.255.255

         UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1



This article is reposted from chaijowin's 51CTO blog. Original link: http://blog.51cto.com/jowin/1733102. Please contact the original author before republishing.
