1. Nginx + Keepalived Dual-Master Architecture
2. Host Address Assignment
dns server    : 192.168.1.x 255.255.255.0 192.168.1.1
client        : 192.168.1.x 255.255.255.0 192.168.1.1
nginx-node1
  eth1 : 192.168.1.205 255.255.255.0 192.168.1.1
  eth2 : 10.0.0.10 255.0.0.0
nginx-node2
  eth1 : 192.168.1.206 255.255.255.0 192.168.1.1
  eth2 : 10.0.0.11 255.0.0.0
php-fpm node1 : 10.0.0.22 255.0.0.0
php-fpm node2 : 10.0.0.23 255.0.0.0
php-fpm node3 : 10.0.0.24 255.0.0.0
memcached server : 10.0.0.25 255.0.0.0
3. Nginx + Keepalived Architecture Options
1. Active-standby configuration
URL: http://467754239.blog.51cto.com/4878013/1541421
This scheme uses a single VIP and two front-end machines, one active and one standby. Only one machine serves traffic at a time; as long as the active node does not fail, the standby machine sits idle and its capacity is wasted.
2. Dual-master configuration
URL: http://467754239.blog.51cto.com/4878013/1604497
This scheme uses two VIPs and two front-end machines that act as master and backup for each other, so both serve traffic at the same time. When one machine fails, its requests fail over to the surviving node. This fits the current environment well, so it is the scheme adopted here for the site's high-availability architecture.
4. Compiling and Installing Nginx and Keepalived
1. Install nginx + keepalived on both front-end servers using the following scripts:
install nginx
#!/bin/bash
# Author: zhengyansheng
# Blog  : http://467754239.blog.51cto.com
PCRE="/usr/local/pcre"
ZLIB="/usr/local/zlib"
OPENSSL="/usr/local/openssl"
NGINX="/usr/local/nginx"
pwd=`pwd`

function Install_nginx()
{
    printf "
    Installing dependencies, please wait ...... \n
    "
    yum -y install gcc gcc-c++ automake autoconf make unzip > /dev/null 2>&1
    id nginx > /dev/null 2>&1
    [ $? -ne 0 ] && (groupadd -r nginx ; useradd -g nginx -r -s /sbin/nologin nginx)
    [ ! -d "$PCRE" ] && {
        unzip pcre-8.33.zip
        cd pcre-8.33
        ./configure --prefix=$PCRE
        make && make install
        cd ..
    }
    [ ! -d "$ZLIB" ] && {
        tar xf zlib-1.2.8.tar.gz
        cd zlib-1.2.8
        ./configure --prefix=$ZLIB
        make && make install
        cd ..
    }
    [ ! -d "$OPENSSL" ] && {
        tar xf openssl-1.0.0l.tar.gz
        cd openssl-1.0.0l
        ./config --prefix=$OPENSSL
        make && make install
        cd ..
    }
    [ ! -d $NGINX ] && {
        tar xf nginx-1.4.7.tar.gz
        cd nginx-1.4.7
        ./configure \
            --prefix=$NGINX \
            --user=nginx \
            --group=nginx \
            --with-pcre=../pcre-8.33 \
            --with-zlib=../zlib-1.2.8 \
            --with-openssl=../openssl-1.0.0l \
            --with-http_flv_module \
            --with-http_ssl_module \
            --with-http_mp4_module \
            --with-http_stub_status_module \
            --with-http_gzip_static_module
        make && make install
        cd ..
    }
    cat > $NGINX/conf/fastcgi_params << EOF
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx;
fastcgi_param QUERY_STRING \$query_string;
fastcgi_param REQUEST_METHOD \$request_method;
fastcgi_param CONTENT_TYPE \$content_type;
fastcgi_param CONTENT_LENGTH \$content_length;
fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
fastcgi_param SCRIPT_NAME \$fastcgi_script_name;
fastcgi_param REQUEST_URI \$request_uri;
fastcgi_param DOCUMENT_URI \$document_uri;
fastcgi_param DOCUMENT_ROOT \$document_root;
fastcgi_param SERVER_PROTOCOL \$server_protocol;
fastcgi_param REMOTE_ADDR \$remote_addr;
fastcgi_param REMOTE_PORT \$remote_port;
fastcgi_param SERVER_ADDR \$server_addr;
fastcgi_param SERVER_PORT \$server_port;
fastcgi_param SERVER_NAME \$server_name;
EOF
    cat > /etc/init.d/nginx << EOF
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /usr/local/nginx/conf/nginx.conf
# config: /usr/local/nginx/sbin/nginx
# pidfile: /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "\$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=\$(basename \$nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=\`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -\`
    options=\`\$nginx -V 2>&1 | grep 'configure arguments:'\`
    for opt in \$options; do
        if [ \`echo \$opt | grep '.*-temp-path'\` ]; then
            value=\`echo \$opt | cut -d "=" -f 2\`
            if [ ! -d "\$value" ]; then
                # echo "creating" \$value
                mkdir -p \$value && chown -R \$user \$value
            fi
        fi
    done
}

start() {
    [ -x \$nginx ] || exit 5
    [ -f \$NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n \$"Starting \$prog: "
    daemon \$nginx -c \$NGINX_CONF_FILE
    retval=\$?
    echo
    [ \$retval -eq 0 ] && touch \$lockfile
    return \$retval
}

stop() {
    echo -n \$"Stopping \$prog: "
    killproc \$prog -QUIT
    retval=\$?
    echo
    [ \$retval -eq 0 ] && rm -f \$lockfile
    return \$retval
}

restart() {
    configtest || return \$?
    stop
    sleep 1
    start
}

reload() {
    configtest || return \$?
    echo -n \$"Reloading \$prog: "
    killproc \$nginx -HUP
    RETVAL=\$?
    echo
}

force_reload() {
    restart
}

configtest() {
    \$nginx -t -c \$NGINX_CONF_FILE
}

rh_status() {
    status \$prog
}

rh_status_q() {
    rh_status > /dev/null 2>&1
}

case "\$1" in
    start)
        rh_status_q && exit 0
        \$1
        ;;
    stop)
        rh_status_q || exit 0
        \$1
        ;;
    restart|configtest)
        \$1
        ;;
    reload)
        rh_status_q || exit 7
        \$1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo \$"Usage: \$0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
EOF
    #Add nginx to the service list
    [ -x /etc/init.d/nginx ] || {
        chmod +x /etc/init.d/nginx
        chkconfig --add nginx
        chkconfig nginx on
    }
    #Delete the source packages
    rm -rf {nginx-1.4.7,openssl-1.0.0l,pcre-8.33,zlib-1.2.8}
    [ -f "$NGINX/conf/nginx.conf" ] && rm -rf $NGINX/conf/nginx.conf
    cp $pwd/nginx.conf $NGINX/conf/nginx.conf
}

function fun_sure()
{
    while true
    do
        read -p "$1" yn
        if [[ "$yn" == "y" ]]; then
            $2
            break
        elif [[ "$yn" == "n" ]]; then
            break
        else
            printf "\t Sorry, please input {y | n} \n"
            continue
        fi
    done
}

fun_sure "Are you sure you want to install nginx web(y/n):" "Install_nginx"
printf "\n"
fun_sure "If you want to start the nginx service(y/n):" "/etc/init.d/nginx start"
nginx.conf
#The main configuration file
user nginx nginx;
worker_processes auto;
error_log logs/nginx_error.log crit;
pid logs/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events
{
    use epoll;
    worker_connections 51200;
    multi_accept on;
}
http
{
    include mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 50m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 256k;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_disable "MSIE [1-6]\.";
    #limit_conn_zone $binary_remote_addr zone=perip:10m;
    ##If enable limit_conn_zone,add "limit_conn perip 10;" to server section.
    server_tokens off;
    #log format
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" $http_x_forwarded_for';
    server {
        listen 80;
        server_name _;
        root html;
        index index.html index.php;
        if ( $query_string ~* ".*[\;'\<\>].*" ){
            return 404;
        }
        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            #fastcgi_pass unix:/dev/shm/php-cgi.sock;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|ico)$ {
            expires 30d;
        }
        location ~ .*\.(js|css)?$ {
            expires 7d;
        }
        access_log logs/access_nginx.log combined;
    }
}
install keepalived
#!/bin/bash
# Author: zhengyansheng
# Blog  : http://467754239.blog.51cto.com
function Install_keepalived()
{
    #printf "
    # Installing dependencies, please wait ...... \n
    #"
    yum -y install openssl openssl-devel popt-devel
    [ ! -d "/usr/local/keepalived" ] && {
        tar xf keepalived-1.2.7.tar.gz
        cd keepalived-1.2.7
        ./configure --prefix=/usr/local/keepalived
        make && make install
        cd ..
    }
    [ ! -d "/etc/keepalived" ] && mkdir /etc/keepalived
    cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
    cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
    cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
    cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
    rm -rf keepalived-1.2.7
}

function fun_sure()
{
    while true
    do
        read -p "$1" yn
        if [[ "$yn" == "y" ]]; then
            $2
            break
        elif [[ "$yn" == "n" ]]; then
            break
        else
            printf "\t Sorry, please input {y | n} \n"
            continue
        fi
    done
}

fun_sure "Are you sure you want to install keepalived server(y/n):" "Install_keepalived"
printf "\n"
5. Configuring keepalived.conf
keepalived.conf on Nginx-node1
! Configuration File for keepalived
global_defs {
    notification_email {
        467754239@qq.com
    }
    notification_email_from root@linux.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.249
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth1
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.250
    }
}
keepalived.conf on Nginx-node2
! Configuration File for keepalived
global_defs {
    notification_email {
        467754239@qq.com
    }
    notification_email_from root@linux.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.249
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth1
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.250
    }
}
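The two files mirror each other: each node is MASTER for one virtual_router_id and BACKUP for the other, and within a VRID the node advertising the higher priority holds the VIP. A minimal sketch of that election rule, using the priorities from the configs above:

```shell
# VRRP election for one virtual_router_id, reduced to its core rule:
# the node advertising the higher priority owns the VIP.
node1_prio=100   # VI_1 priority on Nginx-node1 (state MASTER)
node2_prio=99    # VI_1 priority on Nginx-node2 (state BACKUP)
if [ "$node1_prio" -gt "$node2_prio" ]; then
    echo "VI_1: VIP 192.168.1.249 served by Nginx-node1"
else
    echo "VI_1: VIP 192.168.1.249 served by Nginx-node2"
fi
```

In VI_2 the priorities are reversed (99 vs 100), which is what keeps 192.168.1.250 on Nginx-node2 during normal operation.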
Add the health-check script on both Nginx-node1 and Nginx-node2 and run it in the background
[root@nginx_node2 ~]# cat /etc/keepalived/chk_nginx.sh
#!/bin/bash
while :
do
    nginxpid=`ps -C nginx --no-header | wc -l`
    if [ $nginxpid -eq 0 ]; then
        /usr/local/nginx/sbin/nginx
        sleep 5
        nginxpid=`ps -C nginx --no-header | wc -l`
        echo $nginxpid
        if [ $nginxpid -eq 0 ]; then
            /etc/init.d/keepalived stop
        fi
    fi
    sleep 5
done
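The liveness test at the heart of chk_nginx.sh is `ps -C nginx --no-header | wc -l`, which counts processes whose command name is exactly `nginx` (zero means nginx is not running). The same check can be tried in isolation against any process name; the sketch below uses a throwaway `sleep` process, since nginx may not be running on the machine where you try this:

```shell
# Count processes by exact command name; 0 means "not running".
# `ps -C` is a procps (Linux) option.
sleep 30 &
bgpid=$!
count=$(ps -C sleep --no-header | wc -l)
echo "sleep processes: $count"
kill "$bgpid"
```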
Run the monitoring script in the background
# nohup /etc/keepalived/chk_nginx.sh &
Start the nginx and keepalived services on both Nginx-node1 and Nginx-node2
# /etc/init.d/nginx start
# /etc/init.d/keepalived start
6. Checking Node VIP Information
Nginx-node1
[root@nginx_node1 ~]# uname -n
nginx_node1
[root@nginx_node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:56:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.205/24 brd 192.168.1.255 scope global eth1
    inet 192.168.1.249/32 scope global eth1
    inet6 fe80::20c:29ff:fe48:5616/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:56:20 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/8 brd 10.255.255.255 scope global eth2
    inet6 fe80::20c:29ff:fe48:5620/64 scope link
       valid_lft forever preferred_lft forever
Nginx-node2
[root@nginx_node2 ~]# uname -n
nginx_node2
[root@nginx_node2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7e:99:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.206/24 brd 192.168.1.255 scope global eth1
    inet 192.168.1.250/32 scope global eth1
    inet6 fe80::20c:29ff:fe7e:994a/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7e:99:54 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/8 brd 10.255.255.255 scope global eth2
    inet6 fe80::20c:29ff:fe7e:9954/64 scope link
       valid_lft forever preferred_lft forever
7. Access Testing
1. Preconditions for the test
[root@nginx_node1 ~]# service nginx status
nginx (pid 39476 39474) is running...
[root@nginx_node1 ~]# service keepalived status
keepalived (pid 39592) is running...
[root@nginx_node1 ~]# jobs
[1]+  Running    nohup ./chk_nginx.sh &  (wd: /etc/keepalived)
2. Provide a different test page on each nginx node
Nginx-node1
# echo "<h1>Nginx_node1 192.168.1.205</h1>" > /usr/local/nginx/html/index.html
Nginx-node2
# echo "<h1>Nginx_node2 192.168.1.206</h1>" > /usr/local/nginx/html/index.html
3. Access the VIP from a browser
First, browse to the vip1 address, as shown below:
Then stop the nginx and keepalived services on Nginx-node1 and browse to the vip1 address again, as shown below:
You may need to stop the nginx service several times; otherwise the VIP held by keepalived will not float away:
[root@nginx_node1 ~]# killall -9 nginx
[root@nginx_node1 ~]# killall -9 nginx
[root@nginx_node1 ~]# killall -9 nginx
nginx: no process killed
[root@nginx_node1 ~]# killall -9 nginx
nginx: no process killed
[root@nginx_node1 ~]# killall -9 nginx
While the VIP floats, the log records the following:
[root@nginx_node1 ~]# tail -f /var/log/messages
Jan 15 18:25:16 nginx_node1 Keepalived[39592]: Stopping Keepalived v1.2.7 (01/15,2015)
Jan 15 18:25:16 nginx_node1 Keepalived_vrrp[39595]: VRRP_Instance(VI_1) sending 0 priority
Jan 15 18:25:16 nginx_node1 Keepalived_vrrp[39595]: VRRP_Instance(VI_1) removing protocol VIPs.
Finally, check the VIP information on each node.
Nginx-node1
Conclusion: once the nginx service stops, the background watchdog script (chk_nginx.sh) can no longer find an nginx process, so it stops the keepalived service; the VIP then floats away and the backup node takes it over.
[root@nginx_node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:56:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.205/24 brd 192.168.1.255 scope global eth1
    inet6 fe80::20c:29ff:fe48:5616/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:56:20 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/8 brd 10.255.255.255 scope global eth2
    inet6 fe80::20c:29ff:fe48:5620/64 scope link
       valid_lft forever preferred_lft forever
Nginx-node2
Conclusion: Nginx-node2 has successfully taken over the VIP, so the master-to-backup failover succeeded.
[root@nginx_node2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7e:99:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.206/24 brd 192.168.1.255 scope global eth1
    inet 192.168.1.250/32 scope global eth1
    inet 192.168.1.249/32 scope global eth1
    inet6 fe80::20c:29ff:fe7e:994a/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7e:99:54 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/8 brd 10.255.255.255 scope global eth2
    inet6 fe80::20c:29ff:fe7e:9954/64 scope link
       valid_lft forever preferred_lft forever
Now start the keepalived service on Nginx-node1 again and watch how the VIP changes.
Start the keepalived service
[root@nginx_node1 ~]# service keepalived start
Starting keepalived: [ OK ]
Log messages
[root@nginx_node1 ~]# tail -f /var/log/messages
Jan 15 18:33:12 nginx_node1 Keepalived[42500]: Starting Keepalived v1.2.7 (01/15,2015)
Jan 15 18:33:12 nginx_node1 Keepalived[42501]: Starting Healthcheck child process, pid=42503
Jan 15 18:33:12 nginx_node1 Keepalived[42501]: Starting VRRP child process, pid=42504
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Interface queue is empty
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: No such interface, eth2
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Netlink reflector reports IP 192.168.1.205 added
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Netlink reflector reports IP 10.0.0.10 added
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Netlink reflector reports IP fe80::20c:29ff:fe48:5616 added
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Interface queue is empty
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Netlink reflector reports IP fe80::20c:29ff:fe48:5620 added
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Registering Kernel netlink reflector
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Registering Kernel netlink command channel
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: No such interface, eth2
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Netlink reflector reports IP 192.168.1.205 added
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Netlink reflector reports IP 10.0.0.10 added
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Netlink reflector reports IP fe80::20c:29ff:fe48:5616 added
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Netlink reflector reports IP fe80::20c:29ff:fe48:5620 added
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Registering Kernel netlink reflector
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Registering Kernel netlink command channel
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Registering gratuitous ARP shared channel
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Opening file '/etc/keepalived/keepalived.conf'.
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Configuration is using : 7399 Bytes
Jan 15 18:33:12 nginx_node1 Keepalived_healthcheckers[42503]: Using LinkWatch kernel netlink reflector...
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Opening file '/etc/keepalived/keepalived.conf'.
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: chk_nginx no match, ignoring...
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: chk_nginx no match, ignoring...
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Configuration is using : 70101 Bytes
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: Using LinkWatch kernel netlink reflector...
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: VRRP_Instance(VI_2) Entering BACKUP STATE
Jan 15 18:33:12 nginx_node1 Keepalived_vrrp[42504]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
Jan 15 18:33:13 nginx_node1 Keepalived_vrrp[42504]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 15 18:33:14 nginx_node1 Keepalived_vrrp[42504]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 15 18:33:14 nginx_node1 Keepalived_vrrp[42504]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 15 18:33:14 nginx_node1 Keepalived_healthcheckers[42503]: Netlink reflector reports IP 192.168.1.249 added
Jan 15 18:33:14 nginx_node1 Keepalived_vrrp[42504]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth1 for 192.168.1.249
Jan 15 18:33:19 nginx_node1 Keepalived_vrrp[42504]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth1 for 192.168.1.249
Check the VIP information on Nginx-node1
[root@nginx_node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:56:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.205/24 brd 192.168.1.255 scope global eth1
    inet 192.168.1.249/32 scope global eth1
    inet6 fe80::20c:29ff:fe48:5616/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:56:20 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/8 brd 10.255.255.255 scope global eth2
    inet6 fe80::20c:29ff:fe48:5620/64 scope link
       valid_lft forever preferred_lft forever
8. DNS Round-Robin
1. About DNS round-robin
HiChina (www.net.cn) is one of the most widely used domain registrars on the Internet; you can log in at http://www.net.cn to manage the DNS records.
2. My understanding of DNS round-robin
With the configuration above, www.nihao.com maps to two IP addresses, 192.168.1.249 and 192.168.1.250 — the two VIPs in our architecture. The DNS server hands out answers following the order of the A records, distributing resolution requests across 192.168.1.249 and 192.168.1.250 in turn; that rotation is what is meant by DNS round-robin.
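The rotation itself can be pictured with a few lines of shell: successive lookups cycle through the A records, so consecutive clients land on alternating VIPs. The domain and VIPs below are the ones used in this article; real DNS servers rotate the whole answer set rather than returning a single record, but since clients typically use the first entry, the effect is the same:

```shell
# Simulate round-robin over the two A records of www.nihao.com.
vip1=192.168.1.249
vip2=192.168.1.250
i=0
while [ "$i" -lt 4 ]; do
    if [ $(( i % 2 )) -eq 0 ]; then
        echo "lookup $i -> $vip1"
    else
        echo "lookup $i -> $vip2"
    fi
    i=$(( i + 1 ))
done
```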
9. Supplementary Notes
1. php-fpm.conf
# grep -Ev '^$|^;|^ ' /usr/local/php/etc/php-fpm.conf (8GB RAM)
[global]
pid = run/php-fpm.pid
error_log = log/php-fpm.log
log_level = notice
[www]
listen = 0.0.0.0:9000
user = www
group = www
pm = static
pm.max_children = 400
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 500
php-fpm offers two process-management styles, static and dynamic. The management logic is the same as in earlier versions; the old apache-like name was simply changed to dynamic, which is easier to understand.
With static, the number of php-fpm processes is fixed at pm.max_children from start to finish; it never grows or shrinks.
With dynamic, the process count is elastic: it starts at pm.start_servers and grows automatically under load, keeps at least pm.min_spare_servers idle processes, and reaps surplus processes so that no more than pm.max_spare_servers sit idle.
Which of the two styles to use can be tuned to the server's actual needs. The relevant parameters are pm, pm.max_children, pm.start_servers, pm.min_spare_servers, and pm.max_spare_servers:
pm: which style to use, static or dynamic (in older versions dynamic was called apache-like — check the comments in the configuration file);
pm.max_children: the number of php-fpm processes started in static mode;
pm.start_servers: the initial number of php-fpm processes in dynamic mode;
pm.min_spare_servers: the minimum number of idle php-fpm processes in dynamic mode;
pm.max_spare_servers: the maximum number of idle php-fpm processes in dynamic mode.
If pm is set to static, only pm.max_children takes effect, and the system starts exactly that many php-fpm processes. If pm is set to dynamic, pm.max_children is ignored and the other three parameters take effect: the system starts pm.start_servers processes when php-fpm launches and then adjusts the count between pm.min_spare_servers and pm.max_spare_servers as demand changes.
So which style suits a given server? As with Apache, PHP programs leak memory to some degree as they run; that is why a php-fpm process that starts at roughly 3 MB grows to 20-30 MB after running for a while.
For servers with plenty of memory (say 8 GB or more), a static max_children is usually the better choice: no extra process-count management is needed, which improves efficiency, and it avoids the latency of constantly forking and reaping php-fpm processes. The count can be derived from memory / 30 MB; with 8 GB, for example, you might set it to 100, keeping php-fpm's memory footprint around 2-3 GB.
For servers with less memory (say 1 GB), a small static process count also favors stability: php-fpm claims only the memory it needs, leaving the rest for other applications and keeping the system running smoothly.
For truly small machines — say a 256 MB VPS — even at 20 MB per process, ten php-cgi processes would already consume 200 MB, and a crash would be entirely predictable. Control the php-fpm process count tightly: once you know roughly how much memory other applications use, either pin a small static count, or use dynamic mode, which terminates surplus processes and reclaims their memory — recommended on low-memory servers and VPSes, with the maximum derived from memory / 20 MB.
For a 512 MB VPS, for instance, pm.max_spare_servers of 20 is a reasonable setting; pm.min_spare_servers should be tuned to the server's load, with a sensible value somewhere between 5 and 10.
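The memory-per-worker rule of thumb above is easy to script. A minimal sketch — the 2 GB reserve for the OS and other services is an assumption added for illustration, not a figure from the article:

```shell
# Rough static pm.max_children estimate from available RAM,
# assuming ~30 MB per php-fpm worker as described above.
mem_mb=8192        # total RAM in MB
reserve_mb=2048    # RAM kept back for the OS and other services (assumption)
per_worker_mb=30   # approximate per-worker footprint
max_children=$(( (mem_mb - reserve_mb) / per_worker_mb ))
echo "suggested pm.max_children = $max_children"
```

Treat the result as a starting point only; measure the real per-worker RSS of your application before fixing the value.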
Detailed php-fpm.conf configuration parameters
The FPM configuration file is php-fpm.conf; its syntax is similar to php.ini.
Global section ([global])
pid string
  Path to the PID file. Default: none.
error_log string
  Path to the error log. Default: #INSTALL_PREFIX#/log/php-fpm.log.
log_level string
  Error log level. Possible values: alert (must act immediately), error, warning, notice, debug. Default: notice.
emergency_restart_threshold int
  If this number of child processes exits with SIGSEGV or SIGBUS within the time set by emergency_restart_interval, FPM restarts itself. 0 disables the feature. Default: 0 (off).
emergency_restart_interval mixed
  Interval used by emergency_restart_threshold to decide on a graceful restart. Useful for working around shared-memory problems in accelerators. Units: s(econds), m(inutes), h(ours), d(ays). Default unit: seconds. Default: 0 (off).
process_control_timeout mixed
  Time limit for child processes to react to signals from the master. Units: s(econds), m(inutes), h(ours), d(ays). Default unit: seconds. Default: 0.
daemonize boolean
  Run FPM in the background. Set to 'no' to keep FPM in the foreground for debugging. Default: yes.
Pool section ([www])
In FPM you can run multiple pools of processes with different settings; the following options are configured per pool.
listen string
  Address on which to accept FastCGI requests. Valid formats: 'ip:port', 'port', '/path/to/unix/socket'. Mandatory for each pool.
listen.backlog int
  Backlog for listen(2). '-1' means unlimited. Default: -1.
listen.allowed_clients string
  Comma-separated list of IPv4 addresses of FastCGI clients allowed to connect. Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in PHP FastCGI (5.2.2+). Applies to TCP listeners only. If unset or empty, connections from any address are accepted. Default: any.
listen.owner string
  Sets the owner of the Unix socket if one is used. On Linux, read/write permissions must be set for the web server to connect; many BSD-derived systems allow connections regardless of permissions. Default: the user FPM runs as, with mode 0666.
listen.group string
  See listen.owner.
listen.mode string
  See listen.owner.
user string
  Unix user under which FPM processes run. Mandatory.
group string
  Unix group under which FPM processes run. If unset, the default group of the user is used.
pm string
  How the process manager controls child processes. Possible values: static, dynamic. Mandatory.
  static — the number of child processes is fixed (pm.max_children).
  dynamic — the number of child processes is set dynamically based on pm.max_children, pm.start_servers, pm.min_spare_servers, pm.max_spare_servers.
pm.max_children int
  Number of child processes created when pm is static, and the maximum when pm is dynamic. Mandatory. It caps the number of requests that can be served simultaneously, similar to MaxClients in Apache's mpm_prefork and to the PHP_FCGI_CHILDREN environment variable in plain PHP FastCGI.
pm.start_servers int
  Number of children created on startup. Used only when pm is dynamic. Default: min_spare_servers + (max_spare_servers - min_spare_servers) / 2.
pm.min_spare_servers int
  Minimum number of idle server processes. Used only when pm is dynamic. Mandatory.
pm.max_spare_servers int
  Maximum number of idle server processes. Used only when pm is dynamic. Mandatory.
pm.max_requests int
  Number of requests each child process serves before respawning. Very useful against memory leaks in third-party modules. '0' means requests are accepted indefinitely. Equivalent to the PHP_FCGI_MAX_REQUESTS environment variable. Default: 0.
pm.status_path string
  URI of the FPM status page. If unset, no status page is served. Default: none.
ping.path string
  Ping URI of the FPM monitoring page. If unset, no ping page is served. External tools can use it to check that FPM is alive and responding. Note that it must start with a slash (/).
ping.response string
  Response to ping requests, returned as HTTP 200 text/plain. Default: pong.
request_terminate_timeout mixed
  Timeout after which a single request is killed. Useful when the 'max_execution_time' ini setting fails to stop a script for some reason. '0' means 'Off'. Units: s(econds) (default), m(inutes), h(ours), d(ays). Default: 0.
request_slowlog_timeout mixed
  When a request exceeds this timeout, its full PHP call stack is written to the slow log. '0' means 'Off'. Units: s(econds) (default), m(inutes), h(ours), d(ays). Default: 0.
slowlog string
  Log file for slow requests. Default: #INSTALL_PREFIX#/log/php-fpm.log.slow.
rlimit_files int
  rlimit for open file descriptors. Default: system-defined value.
rlimit_core int
  Maximum rlimit for core files. Possible values: 'unlimited', 0, or a positive integer. Default: system-defined value.
chroot string
  Chroot directory at startup; must be an absolute path. If unset, chroot is not used.
chdir string
  Directory to chdir to at startup; must be an absolute path. Default: the current directory, or / when chrooting.
catch_workers_output boolean
  Redirect worker stdout and stderr into the main error log. If unset, stdout and stderr are redirected to /dev/null as the FastCGI specification requires. Default: empty (off).
2. Monitoring cluster resources with the vrrp_script block
keepalived's vrrp_script block is dedicated to monitoring cluster service resources, and is used together with the track_script block. You can plug in a monitoring script, a command pipeline, or plain shell statements, which makes it possible to watch services and ports in many ways. The track_script block invokes the vrrp_script definitions so that keepalived actually runs the resource checks.
In addition, vrrp_script lets you define the check interval, a weight, and other parameters. Combining vrrp_script with track_script, you can monitor cluster resources and adjust a node's priority based on the result, which is how keepalived triggers master/backup failover.
vrrp_script check_nginx {
    script "killall -0 nginx"
    interval 2
}
track_script {
    check_nginx
}
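`killall -0 nginx` works because signal 0 performs only the existence check: no signal is actually delivered, and the exit status tells keepalived whether any nginx process is alive. The same idea with `kill -0` against a single PID, demonstrated on a throwaway `sleep` process:

```shell
# Signal 0 probes a process without disturbing it; the exit
# status of kill/killall is the liveness verdict.
sleep 30 &
bgpid=$!
if kill -0 "$bgpid" 2>/dev/null; then
    echo "process $bgpid is alive"
fi
kill "$bgpid"
```

With `weight 2`, a succeeding check raises the node's priority by 2; when the check fails that bonus disappears, the peer's priority wins the VRRP election, and the VIP moves — no need to stop keepalived outright the way chk_nginx.sh does.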
With this capability, keepalived.conf can be improved further.
keepalived.conf on Nginx-node1
! Configuration File for keepalived
global_defs {
    notification_email {
        467754239@qq.com
    }
    notification_email_from root@linux.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2