How To Set Up A Loadbalanced High-Availability (HA) Apache Cluster

This tutorial shows how to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point Of Failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently.
 
The advantage of using a load balancer compared to using round robin DNS is that it takes care of the load on the web server nodes and tries to direct requests to the node with less load, and it also takes care of connections/sessions. Many web applications (e.g. forum software, shopping carts, etc.) make use of sessions, and if you are in a session on Apache node 1, you would lose that session if suddenly node 2 served your requests. In addition to that, if one of the Apache nodes goes down, the load balancer realizes that and directs all incoming requests to the remaining node which would not be possible with round robin DNS.
 
For this setup, we need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that will be shared by the load balancer nodes and used for incoming HTTP requests.
 
I will use the following setup here:
·       Apache node 1:  webserver1.example.com  ( webserver1 ) - IP address:  192.168.0.101 ; Apache document root:  /var/www
·       Apache node 2:  webserver2.example.com  ( webserver2 ) - IP address:  192.168.0.102 ; Apache document root:  /var/www
·       Load Balancer node 1:  loadb1.example.com  ( loadb1 ) - IP address:  192.168.0.103
·       Load Balancer node 2:  loadb2.example.com  ( loadb2 ) - IP address:  192.168.0.104
·       Virtual IP Address:  192.168.0.105  (used for incoming requests)
 
Have a look at the drawing on [url]http://www.linuxvirtualserver.org/docs/ha/ultramonkey.html[/url] to understand what this setup looks like.
 
In this tutorial I will use Debian Sarge for all four nodes. I assume that you have installed a basic Debian installation on all four nodes, and that you have installed Apache on  webserver1  and  webserver2 , with  /var/www  being the document root of the main web site.
 
I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal, but this is the way I take here. I do not issue any guarantee that this will work for you!
1 Enable IPVS On The Load Balancers                           
First we must enable IPVS on our load balancers. IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching.
 
loadb1/loadb2:
echo ip_vs_dh >> /etc/modules
echo ip_vs_ftp >> /etc/modules
echo ip_vs >> /etc/modules
echo ip_vs_lblc >> /etc/modules
echo ip_vs_lblcr >> /etc/modules
echo ip_vs_lc >> /etc/modules
echo ip_vs_nq >> /etc/modules
echo ip_vs_rr >> /etc/modules
echo ip_vs_sed >> /etc/modules
echo ip_vs_sh >> /etc/modules
echo ip_vs_wlc >> /etc/modules
echo ip_vs_wrr >> /etc/modules
Then we do this:
loadb1/loadb2:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
If you get errors, then most probably your kernel wasn't compiled with IPVS support, and you need to compile a new kernel with IPVS support (or install a kernel image with IPVS support) now.
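If the modules loaded without complaint, you can double-check that they are actually in the kernel (each ip_vs module should show up in the list):
loadb1/loadb2:
lsmod | grep ip_vs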
2 Install Ultra Monkey On The Load Balancers
Ultra Monkey is a project to create load balanced and highly available services on a local area network using Open Source components on the Linux operating system; the Ultra Monkey package provides  heartbeat  (used by the two load balancers to monitor each other and check if the other node is still alive) and ldirectord, the actual load balancer.
 
To install Ultra Monkey, we must edit  /etc/apt/sources.list  now and add these two lines (don't remove the other repositories):
 
loadb1/loadb2:
vi /etc/apt/sources.list
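The two lines are the Ultra Monkey Debian Sarge repositories (as given in the Ultra Monkey documentation; check [url]http://www.ultramonkey.org[/url] for the current paths):
deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main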
 
Afterwards we do this:
 
loadb1/loadb2:
apt-get update
and install Ultra Monkey:
 
loadb1/loadb2:
apt-get install ultramonkey
If you see this warning:
 libsensors3 not functional                                   
                                                        
 It appears that your kernel is not compiled with sensors support. As a
 result, libsensors3 will not be functional on your system.          
                                                        
 If you want to enable it, have a look at "I2C Hardware Sensors Chip  
 support" in your kernel configuration.                        
 
you can ignore it.
During the Ultra Monkey installation you will be asked a few questions. Answer as follows:
  Do you want to automatically load IPVS rules on boot?
<-- No
Select a daemon method.
<-- none
3 Enable Packet Forwarding On The Load Balancers
The load balancers must be able to route traffic to the Apache nodes. Therefore we must enable packet forwarding on the load balancers. Add the following lines to /etc/sysctl.conf:
loadb1/loadb2:
vi /etc/sysctl.conf
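The entry to add simply turns on IPv4 packet forwarding in the kernel:
# Enables packet forwarding
net.ipv4.ip_forward = 1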
Then do this:
loadb1/loadb2:
sysctl -p
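To make sure forwarding is now active, check the proc entry; it should read 1:
loadb1/loadb2:
cat /proc/sys/net/ipv4/ip_forward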
4 Configure heartbeat And ldirectord
Now we have to create three configuration files for  heartbeat . They must be identical on  loadb1  and  loadb2 !
loadb1/loadb2:
vi /etc/ha.d/ha.cf
logfacility        local0
bcast        eth0                # Linux
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node        loadb1
node        loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Important:  As nodenames we must use the output of
uname -n
loadb1/loadb2:
vi /etc/ha.d/haresources
loadb1        \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.105/24/eth0/192.168.0.255
The first word is the output of
uname -n
on loadb1, no matter if you create the file on loadb1 or loadb2! After IPaddr2 we put our virtual IP address 192.168.0.105.
loadb1/loadb2:
vi /etc/ha.d/authkeys
auth 3
3 md5 somerandomstring
somerandomstring  is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
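If you do not want to make up a string yourself, one quick way to generate a random one with standard tools is:
loadb1/loadb2:
dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}'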
/etc/ha.d/authkeys  should be readable by root only, therefore we do this:
loadb1/loadb2:
chmod 600 /etc/ha.d/authkeys
ldirectord  is the actual load balancer. We are going to configure our two load balancers ( loadb1.example.com  and  loadb2.example.com ) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby and becomes active if the active one fails. To make it work, we must create the  ldirectord  configuration file  /etc/ha.d/ldirectord.cf  which again must be identical on  loadb1  and  loadb2 .
loadb1/loadb2:
vi /etc/ha.d/ldirectord.cf
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual=192.168.0.105:80
        real=192.168.0.101:80 gate
        real=192.168.0.102:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="ldirector.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
In the  virtual=  line we put our virtual IP address ( 192.168.0.105 in this example), and in the real=  lines we list the IP addresses of our Apache nodes ( 192.168.0.101  and  192.168.0.102 in this example). In the request=  line we list the name of a file on  webserver1  and  webserver2  that  ldirectord  will request repeatedly to see if  webserver1  and  webserver2  are still alive. That file (that we are going to create later on) must contain the string listed in the  receive=  line.
Afterwards we create the system startup links for heartbeat and remove those of ldirectord because ldirectord will be started by the heartbeat daemon:
loadb1/loadb2:
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
Finally we start heartbeat (and with it ldirectord):
loadb1/loadb2:
/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start
5 Test The Load Balancers
Let's check if both load balancers work as expected:
loadb1/loadb2:
ip addr sh eth0
The active load balancer should list the virtual IP address (192.168.0.105):
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:40:18:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0
The hot-standby should show this: 
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:50:e3:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0
loadb1/loadb2:
ldirectord ldirectord.cf status
Output on the active load balancer: 
Output on the hot-standby:
loadb1/loadb2:
ipvsadm -L -n
Output on the active load balancer:
Output on the hot-standby: 
loadb1/loadb2:
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
Output on the active load balancer: 
Output on the hot-standby: 
If your tests went fine, you can now go on and configure the two Apache nodes.
6 Configure The Two Apache Nodes
Finally we must configure our Apache cluster nodes webserver1.example.com and webserver2.example.com to accept requests on the virtual IP address 192.168.0.105.
webserver1/webserver2:
apt-get install iproute
Add the following to /etc/sysctl.conf: 
/etc/sysctl.conf 文件中加入如下内容:
webserver1/webserver2:
vi /etc/sysctl.conf
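The lines to add keep the Apache nodes from answering ARP requests for the virtual IP address themselves (otherwise they would compete with the load balancers for it). For the eth0 interface used throughout this tutorial they are:
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1
# When an arp request is received on eth0, only respond if that address is
# configured on eth0
net.ipv4.conf.eth0.arp_ignore = 1
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0, always use an address that
# is configured on eth0 as the source address of the ARP request
net.ipv4.conf.eth0.arp_announce = 2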
Then run this:
webserver1/webserver2:
sysctl -p
Add this section for the virtual IP address to /etc/network/interfaces: 
/etc/network/interfaces 文件中增加虚拟IP地址的部分:
webserver1/webserver2:
vi /etc/network/interfaces
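For a setup like this one the stanza looks like this: it puts the virtual IP address on a loopback alias, and the host netmask 255.255.255.255 keeps the node from answering ARP for the whole subnet:
auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null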
Then run this:
webserver1/webserver2:
ifup lo:0
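You can verify that the alias is up and carries the virtual IP address:
webserver1/webserver2:
ip addr sh lo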
Finally we must create the file ldirector.html. This file is requested by the two load balancer nodes repeatedly so that they can see if the two Apache nodes are still running. I assume that the document root of the main apache web site on webserver1 and webserver2 is /var/www, therefore we create the file /var/www/ldirector.html:
webserver1/webserver2:
vi /var/www/ldirector.html
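Because the receive= line in /etc/ha.d/ldirectord.cf checks for the string "Test Page", the file only has to contain that string, for example:
Test Page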
7 Further Testing
You can now access the web site that is hosted by the two Apache nodes by typing [url]http://192.168.0.105[/url] in your browser.
Now stop the Apache on either webserver1 or webserver2. You should then still see the web site on [url]http://192.168.0.105[/url] because the load balancer directs requests to the working Apache node. Of course, if you stop both Apaches, then your request will fail.
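You can also watch the load balancing from the command line: run a few requests from a client machine on the LAN (not from the load balancers or the Apache nodes themselves), and with scheduler=rr the two nodes should answer in turn, provided their test pages differ in some visible way:
for i in 1 2 3 4; do curl -s http://192.168.0.105/; done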
Now let's assume that  loadb1  is our active load balancer, and  loadb2  is the hot-standby. Now stop  heartbeat  on  loadb1 :
loadb1:
/etc/init.d/heartbeat stop
Wait a few seconds, and then try  [url]http://192.168.0.105[/url]  again in your browser. You should still see your web site because  loadb2  has taken the active role now.
Now start heartbeat again on loadb1:
loadb1:
/etc/init.d/heartbeat start
loadb2 should still have the active role. Do the tests from chapter 5 again on loadb1 and loadb2, and you should see the results reversed compared to before.
If you have also passed these tests, then your loadbalanced Apache cluster is working as expected. Have fun!
8 Further Reading
This tutorial shows how to loadbalance two Apache nodes. It does not show how to keep the files in the Apache document root in sync, how to create a storage solution like an NFS server that both Apache nodes can use, nor does it provide a solution for managing your MySQL database(s). You can find solutions for these issues here:
9 Links
·       heartbeat / The High-Availability Linux Project: [url]http://linux-ha.org[/url]
·       The Linux Virtual Server Project: [url]http://www.linuxvirtualserver.org[/url]
·       Ultra Monkey: [url]http://www.ultramonkey.org[/url]

This article was reposted from xudayu's 51CTO blog. Original link: http://blog.51cto.com/xudayu/64741. Please contact the original author yourself if you wish to republish it.
