A few days ago I came across the nginx_upstream_jvm_route project on http://code.google.com. After reading the introduction I was excited: it is a patch written by a Chinese developer that solves the session-affinity problem. It does not share or replicate sessions; instead it routes each request by the session cookie. Email exchanges with the author helped this test a great deal, since it was my first time working with JSP. My thanks here to Weibin Yao, and to brother Zhang Tao from the Cluster服务技术群2 group!
Test environment:
server1: nginx + resin installed
server2: resin only
server1 IP address: 192.168.6.121
server2 IP address: 192.168.6.162
Installation steps:
1. Install and configure nginx + nginx_upstream_jvm_route on server1
shell $> wget -c http://sysoev.ru/nginx/nginx-0.7.61.tar.gz
shell $> svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only
shell $> tar zxvf nginx-0.7.61.tar.gz
shell $> cd nginx-0.7.61
shell $> patch -p0 < ../nginx-upstream-jvm-route-read-only/jvm_route.patch
shell $> useradd www
shell $> ./configure --user=www --group=www --prefix=/usr/local/webserver/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=/root/nginx-upstream-jvm-route-read-only
shell $> make
shell $> make install
2. Install resin on both machines
### Edit the environment variables ###
shell $> vim /etc/profile
### Add the following below the "umask 022" line ###
JAVA_HOME=/usr/lib/jvm/java-6-sun
export JAVA_HOME
JRE_HOME="${JAVA_HOME}"/jre
export JRE_HOME
RESIN_HOME=/usr/local/resin
export RESIN_HOME
CLASSPATH=.:"${JAVA_HOME}"/lib/tools.jar:"${JAVA_HOME}"/lib/dt.jar:"${RESIN_HOME}"/lib/resin.jar:"${CLASSPATH}"
export CLASSPATH
PATH="${JAVA_HOME}"/bin:"${PATH}"
export PATH
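One pitfall in profiles like this: the CLASSPATH line references ${RESIN_HOME}, so RESIN_HOME must be assigned before that line, or resin.jar silently drops out of the classpath. A minimal standalone sanity check (using the same paths as above) that the expanded CLASSPATH actually contains resin.jar:

```shell
#!/bin/sh
# Reproduce the profile assignments in the correct order and print the
# expanded CLASSPATH (paths assumed from this article's setup).
JAVA_HOME=/usr/lib/jvm/java-6-sun
RESIN_HOME=/usr/local/resin
CLASSPATH=.:"${JAVA_HOME}"/lib/tools.jar:"${JAVA_HOME}"/lib/dt.jar:"${RESIN_HOME}"/lib/resin.jar
export JAVA_HOME RESIN_HOME CLASSPATH
# If RESIN_HOME were still unset here, this would show a bare "/lib/resin.jar" entry.
echo "$CLASSPATH"
```

If the printed value shows `/lib/resin.jar` with no `/usr/local/resin` prefix, the assignment order in /etc/profile is wrong.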
shell $> wget -c http://www.caucho.com/download/resin-3.1.9.tar.gz
shell $> tar zxvf resin-3.1.9.tar.gz
shell $> cd resin-3.1.9
shell $> ./configure --prefix=/usr/local/resin
shell $> make
shell $> make install
3. Configure resin on both machines
shell $> cd /usr/local/resin/conf
shell $> vim resin.conf
## Find <http address="*" port="8080"/>
## and comment it out: <!--http address="*" port="8080"/-->
## Find <server id="" address="127.0.0.1" port="6800">
## and replace it with:
<server id="a" address="192.168.6.121" port="6800">
<!-- on server2 the address is 192.168.6.162 -->
  <http id="" port="8080"/>
</server>
<server id="b" address="192.168.6.121" port="6801">
<!-- on server2 the address is 192.168.6.162 -->
  <http id="" port="8081"/>
</server>
shell $> cd /usr/local/resin/webapps/ROOT/
shell $> mv index.jsp index.jsp.bak
shell $> vim index.jsp
## Put in the following content
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
</head>
<body>
121
<!-- on server2 this reads 162 -->
<br />
<%out.print(request.getSession());%>
<!-- print the session -->
<br />
<%out.println(request.getHeader("Cookie"));%>
<!-- print the Cookie -->
</body>
</html>
### Restart the resin service ###
### on server1 ###
shell $> /usr/local/resin/bin/httpd.sh -server a start
### Note: this fails if the environment variables above were not set
### on server2 ###
shell $> /usr/local/resin/bin/httpd.sh -server b start
### Note that server2 starts only server id "b" ###
4. Integrate nginx and resin
shell $> cd /usr/local/nginx/conf
shell $> mv nginx.conf nginx.bak
shell $> vim nginx.conf
## The configuration follows ###
user www www;
worker_processes 4;
error_log logs/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events
{
use epoll;
worker_connections 2048;
}
http
{
upstream backend {
server 192.168.6.121:8080 srun_id=a;
#### srun_id=a matches server id="a" in server1's resin config
server 192.168.6.162:8081 srun_id=b;
#### srun_id=b matches server id="b" in server2's resin config
jvm_route $cookie_JSESSIONID|sessionid;
}
include mime.types;
default_type application/octet-stream;
#charset gb2312;
charset UTF-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 20m;
limit_rate 1024k;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
tcp_nodelay on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
gzip on;
#gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
#limit_zone crawler $binary_remote_addr 10m;
# log_format belongs in the http context, not inside server
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
server
{
listen 80;
server_name 192.168.6.121;
index index.html index.htm index.jsp;
root /var/www;
location ~ .*\.jsp$
{
proxy_pass http://backend;
proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
{
expires 30d;
}
location ~ .*\.(js|css)?$
{
expires 1h;
}
location /stu {
stub_status on;
access_log off;
}
# access_log off;
}
}
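At request time, jvm_route reads the JSESSIONID cookie (falling back to the sessionid URL parameter) and matches its route marker against each server's srun_id. With resin, the session id begins with the srun letter (as the test below shows: aXXXXX on server a), so the marker is simply the first character of the cookie value. A toy shell sketch of that extraction (an illustration only, not the module's actual code; the example session ids are made up):

```shell
#!/bin/sh
# Toy sketch: recovering the route marker from a resin-style session id.
# Resin puts the srun id at the front, so for "JSESSIONID=aaaXkz9qwErT"
# the marker is "a" (the first character).
resin_route() {
    printf '%s\n' "$1" | cut -c1
}

resin_route "aaaXkz9qwErT"   # prints: a
resin_route "bbbQ7LmNoPqR"   # prints: b
```

nginx then forwards the request to whichever upstream server declared that letter as its srun_id.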
5. Test: open a browser and go to
http://192.168.6.121/index.jsp
The session id starts with aXXXXX, meaning the request hit the 121 machine, i.e. server1. Since this is the first visit, no Cookie has been set yet; refresh to see whether round-robin sends you to 162 (server2).
After N refreshes it is still 121, which means the patch is working, and the cookie value has now been obtained. For a further test I opened Firefox (a different browser, to start with fresh session and cookie state) and entered the same URL:
It showed 162, with a session id starting with bXXX. After N more refreshes:
Still 162, server2! If you have doubts while testing, you can remove
srun_id=a and srun_id=b from the nginx configuration and visit again; you will see the pages are then served round-robin.
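The behaviour with and without srun_id is easy to state side by side: with no route marker to match, nginx falls back to plain round-robin; once the cookie exists, its srun letter pins every request to one backend. A toy comparison of the two selection rules (pure illustration, no real servers involved; addresses mirror the upstream block above):

```shell
#!/bin/sh
# Toy comparison: plain round-robin vs. cookie-pinned ("sticky") selection.

pick_round_robin() {
    # Alternate between the two backends by request number.
    if [ $(( $1 % 2 )) -eq 1 ]; then
        echo "192.168.6.121:8080"
    else
        echo "192.168.6.162:8081"
    fi
}

pick_sticky() {
    # $1 = request number, $2 = JSESSIONID value (empty on the first request).
    # Once the cookie exists, its leading srun letter pins the backend.
    case "$2" in
        a*) echo "192.168.6.121:8080" ;;
        b*) echo "192.168.6.162:8081" ;;
        *)  pick_round_robin "$1" ;;   # no cookie yet: fall back
    esac
}

echo "without srun_id (round-robin):"
for n in 1 2 3 4; do pick_round_robin "$n"; done
echo "with srun_id and cookie aXYZ123 (sticky):"
for n in 1 2 3 4; do pick_sticky "$n" "aXYZ123"; done
```

The round-robin run alternates 121/162 on every request, while the sticky run returns 121 four times, which is exactly the refresh behaviour observed in the browser test.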
PS: Thanks again to Weibin Yao for his guidance, and to brother Zhang Tao from the Cluster服务技术群2 group for his help with the JSP code!
Please extract the patch I uploaded on a Linux system. Since 51cto does not accept the gz format, I changed the file extension; on Linux just run
shell $> tar zxvf nginx-upstream-jvm-route-read-only.rar
and that's it.
The tomcat solution is in the README:
1. For resin
upstream backend {
server 192.168.0.100 srun_id=a;
server 192.168.0.101 srun_id=b;
server 192.168.0.102 srun_id=c;
server 192.168.0.103 srun_id=d;
jvm_route $cookie_JSESSIONID|sessionid;
}
2. For tomcat
upstream backend {
server 192.168.0.100 srun_id=a;
server 192.168.0.101 srun_id=b;
server 192.168.0.102 srun_id=c;
server 192.168.0.103 srun_id=d;
jvm_route $cookie_JSESSIONID|sessionid reverse;
}
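The `reverse` flag is needed because tomcat marks the session from the other end: with a jvmRoute configured on its Engine, tomcat appends `.route` to the session id (e.g. `JSESSIONID=0123456789ABCDEF.a`), so jvm_route has to scan the cookie value from the tail rather than the head. A toy sketch of that suffix extraction (illustration only; the example ids are made up):

```shell
#!/bin/sh
# Toy sketch: tomcat-style route extraction. The route marker is
# whatever follows the last "." in the session id, hence "reverse".
tomcat_route() {
    printf '%s\n' "$1" | awk -F. '{print $NF}'
}

tomcat_route "0123456789ABCDEF.a"   # prints: a
tomcat_route "FEDCBA9876543210.c"   # prints: c
```

The extracted letter is then matched against the srun_id values just as in the resin case.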
This post was reproduced from Deidara's 51CTO blog; original link: http://blog.51cto.com/deidara/193887. Please contact the original author yourself before republishing.