I. Configure the JDK, Java's compile environment ------ do this on both server2 and server3
The JDK (Java Development Kit) is the development and compile environment for Java: the software development kit for the Java language, providing the compiler and runtime used to build and run Java applications.
Basic JDK installation steps:
1) Unpack the JDK tarball to the target path (use -C to specify the path)
2) Enter that path and create a symbolic link to the unpacked JDK directory
3) Edit the system environment variables so the java commands can be used; afterwards run source on the file so the new variables take effect immediately
4) Write a Java test file, compile it, and run it
1. get jdk-7u79-linux-x64.tar.gz (from 老吴)
tar zxf jdk-7u79-linux-x64.tar.gz -C /usr/local/
2. ln -s /usr/local/jdk1.7.0_79/ /usr/local/java  ## a symlink makes future version upgrades painless
3. vim /etc/profile  ## system-wide environment variables, loaded for every user; ~/.bash_profile is per-user and only affects that user's logins
Append at the end of the file:
export JAVA_HOME=/usr/local/java
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin
4. source /etc/profile  ## make the change take effect in the current shell
Check that it took effect:
[root@server2 local]# echo $PATH
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/java/bin
[root@server2 local]# echo $CLASSPATH
.:/usr/local/java/lib:/usr/local/java/jre/lib
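Before touching /etc/profile you can dry-run the three export lines in a throwaway shell; this standalone sketch sets the same variables and checks they compose into exactly the values echoed above:

```shell
#!/bin/bash
# Dry-run of the /etc/profile additions (same three lines as above).
export JAVA_HOME=/usr/local/java
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin

# PATH should now end in the JDK bin directory; CLASSPATH should start with "."
# so that "java test" can find classes in the current directory.
case "$PATH" in *:/usr/local/java/bin) echo "PATH ok";; esac
case "$CLASSPATH" in .:*) echo "CLASSPATH ok";; esac
```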
5. Write a Java program to prove the environment is OK:
[root@server2 ~]# vim test.java
public class test {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
[root@server2 ~]# javac test.java
[root@server2 ~]# java test
Hello World!
II. Install Tomcat, the JSP interpreter ------ do this on both server2 and server3
Tomcat is a free, open-source web application server. It is a lightweight application server, well suited to developing and debugging JSP programs.
Tomcat grew out of the Apache project, but at runtime it is independent: a running Tomcat is a separate process from Apache.
Installing Tomcat is simple; the basic steps:
1) Download a tarball from the official site, e.g. apache-tomcat-7.0.37.tar.gz
2) Unpack it (you can pass -C and similar options to choose the target directory at extraction time)
3) Enter Tomcat's bin directory and start the service
4) Test it by writing a small test page
1. get apache-tomcat-7.0.37.tar.gz (from 老吴)
tar zxf apache-tomcat-7.0.37.tar.gz -C /usr/local/
2. ln -s /usr/local/apache-tomcat-7.0.37/ /usr/local/tomcat
3. cd /usr/local/tomcat/bin/
./startup.sh  ## start the Tomcat service
4. netstat -antlp  ## port 8080 should now be open
tcp 0 0 :::8080 :::* LISTEN 1122/java
5. cd /usr/local/tomcat/webapps/ROOT
vim test.jsp
the time is: <%=new java.util.Date() %>
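The `<%= ... %>` expression just evaluates `new java.util.Date()` and writes it into the response; in spirit it is nothing more than this shell line:

```shell
# Rough shell equivalent of what test.jsp renders.
msg="the time is: $(date)"
echo "$msg"
```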
On server3:
tar zxf apache-tomcat-7.0.37.tar.gz -C /usr/local/
ln -s /usr/local/apache-tomcat-7.0.37/ /usr/local/tomcat
cd /usr/local/tomcat/bin/
./startup.sh
netstat -antlp
Test: open 172.25.78.2:8080/test.jsp in a browser
III. The session-sharing mechanism: Tomcat backs up its session data into memcached
1. Tomcat together with nginx
[root@server1 conf]# cd /usr/local/lnmp/nginx/conf/
[root@server1 conf]# /etc/init.d/php-fpm start
Starting php-fpm done
[root@server1 conf]# nginx  ## start the service
[root@server1 conf]# vim nginx.conf
[root@server1 conf]# nginx -t
nginx: the configuration file /usr/local/lnmp/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/lnmp/nginx/conf/nginx.conf test is successful
[root@server1 conf]# nginx -s reload
(at around line 18 of nginx.conf)
http {
upstream westos{
ip_hash;  ## ip_hash pins each client host to one backend server
server 172.25.39.2:8080;
server 172.25.39.3:8080;
}
(at around line 90)
location ~ \.jsp$ {
proxy_pass http://westos;  ## every request ending in .jsp is proxied to the upstream
}
[root@server1 conf]# cat nginx.conf ----- the full file with the bulk comments stripped:
#user  nobody;
worker_processes  2;
worker_cpu_affinity 01 10;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  65535;
}

http {
    upstream westos {
        ip_hash;
        server 172.25.78.2:8080;
        server 172.25.78.3:8080;
    }

    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.php index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            root           html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            #fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
            include        fastcgi.conf;
        }

        location ~ \.jsp$ {
            proxy_pass http://westos;
        }
    }

    server {
        listen       443 ssl;
        server_name  localhost;

        ssl_certificate      cert.pem;
        ssl_certificate_key  cert.pem;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }

    server {
        listen 80;
        server_name www.westos.org;
        location / {
            proxy_pass http://westos;
        }
    }
}
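For IPv4 clients, nginx's ip_hash hashes only the first three octets of the address, so a client's whole /24 maps to one backend. The toy sketch below illustrates the idea; the `pick` function and its cksum-based hash are illustrative stand-ins, not nginx's actual algorithm:

```shell
# Toy model of ip_hash: clients in the same /24 always hit the same backend.
backends="172.25.78.2:8080 172.25.78.3:8080"

pick() {  # $1 = client IPv4 address
    key=$(echo "$1" | cut -d. -f1-3)                 # nginx keys on the first 3 octets
    n=$(printf '%s' "$key" | cksum | cut -d' ' -f1)  # stand-in hash (nginx uses its own)
    set -- $backends
    shift $((n % 2))                                 # 2 backends -> index 0 or 1
    echo "$1"
}

pick 192.168.1.10
pick 192.168.1.99   # same /24 as above -> same backend every time
```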
Writing the test page:
[root@server2 ~]# cd /usr/local/tomcat/webapps/ROOT/
[root@server2 ROOT]# vim test.jsp
(content copied from 老吴's document)
<%@ page contentType="text/html; charset=GBK" %>
<%@ page import="java.util.*" %>
<html><head><title>Cluster App Test</title></head>
<body>
Server Info:
<%
    out.println(request.getLocalAddr() + " : " + request.getLocalPort() + "<br>");
%>
<%
    out.println("<br> ID " + session.getId() + "<br>");
    String dataName = request.getParameter("dataName");
    if (dataName != null && dataName.length() > 0) {
        String dataValue = request.getParameter("dataValue");
        session.setAttribute(dataName, dataValue);
    }
    out.print("<b>Session list</b>");
    Enumeration e = session.getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String)e.nextElement();
        String value = session.getAttribute(name).toString();
        out.println(name + " = " + value + "<br>");
        System.out.println(name + " = " + value);
    }
%>
<form action="test.jsp" method="POST">
name:<input type=text size=20 name="dataName">
<br>
key:<input type=text size=20 name="dataValue">
<br>
<input type=submit>
</form>
</body>
</html>
scp test.jsp 172.25.39.3:/usr/local/tomcat/webapps/ROOT/  ## copy the same page over to server3
### nginx + tomcat + memcached: implementing session sharing ###
1.
get { --- from 老吴
asm-3.2.jar
kryo-1.04.jar
kryo-serializers-0.10.jar
memcached-session-manager-1.6.3.jar
memcached-session-manager-tc6-1.6.3.jar
memcached-session-manager-tc7-1.6.3.jar
minlog-1.2.jar
msm-kryo-serializer-1.6.3.jar
reflectasm-1.01.jar
spymemcached-2.7.3.jar }
Since the Tomcat in use is version 7, delete the tc6 jar (memcached-session-manager-tc6-1.6.3.jar).
2. Put these jars into /usr/local/tomcat/lib
3. Start memcached on server2 and server3:
/etc/init.d/memcached start
4. vim /usr/local/tomcat/conf/context.xml
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:172.25.39.2:11211,n2:172.25.39.3:11211"
failoverNodes="n1"  ## set n1 on server2 and n2 on server3: each Tomcat stores sessions on the other host's memcached and falls back to its own local node (the failoverNode) only when the regular node is down --- n1 here plays the role of the memc2 from the lecture
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
    <T1>   <T2>
      . \ / .
      .  X  .
      . / \ .
    <M1>   <M2>
Tomcat-1 (T1) stores its sessions on memcached-2 (M2). Only when M2 is unavailable does T1 store sessions on memcached-1 (M1 is T1's failoverNode). The benefit of this layout is that sessions survive even when T1 and M1 crash at the same time, avoiding a single point of failure.
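The node-selection rule that failoverNodes encodes boils down to: use any regular node that is up, and fall back to the failover node only when no regular node is left. A toy model of that rule (illustrative only, not msm's real code; node names match the context.xml above as written on server2):

```shell
# Toy model of memcached-session-manager node selection on server2,
# where context.xml lists n1 as the failoverNode and n2 as the regular node.
regular="n2"
failover="n1"

pick_node() {  # $1 = space-separated list of memcached nodes currently up
    up=" $1 "
    # Regular nodes are tried first; failover nodes only as a last resort.
    for n in $regular $failover; do
        case "$up" in *" $n "*) echo "$n"; return;; esac
    done
    echo "none"
}

pick_node "n1 n2"   # both up  -> n2 (the regular node wins)
pick_node "n1"      # n2 down  -> n1 (failover node as last resort)
```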
5. Start the Tomcat service:
/usr/local/tomcat/bin/startup.sh
## to stop the service:
/usr/local/tomcat/bin/shutdown.sh
cd /usr/local/tomcat
tail -f logs/catalina.out  ## check whether the service came up successfully
A line like (INFO: MemcachedSessionService finished initialization, sticky true, operation timeout 1000, with node ids [n1] and failover node ids [n2]) proves that it worked.
6. yum install telnet -y  ## on both nodes ----- telnet is used here for monitoring
Test:
open 172.25.78.1/test.jsp in a browser
telnet localhost 11211  ## connect to the local memcached on port 11211 to inspect what is being stored
[root@server3 lib]# telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get 48844C8F70F022944863813004730A7E-n2
VALUE 48844C8F70F022944863813004730A7E-n2 2048 136
W]hB8]h01]h)]h)#48844C8F70F022944863813004730A7E-n2user2456user3789user8888usr1123
END
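What telnet shows here is memcached's text protocol: a `get <key>` returns a header line `VALUE <key> <flags> <bytes>`, then the raw serialized session (the binary gibberish above), then `END`. The header fields simply split on spaces:

```shell
# Split a memcached "VALUE <key> <flags> <bytes>" header line into its fields.
line="VALUE 48844C8F70F022944863813004730A7E-n2 2048 136"
set -- $line
echo "key=$2 flags=$3 bytes=$4"   # 136 is the length of the data block that follows
```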
Cluster ----------------- make sure the nodes' clocks are synchronized
ricci is the cluster management agent
One client and two cluster nodes
********* make sure both servers are clean, fresh installs ***********
1. First reconfigure yum; all of the following repos must be enabled to satisfy the cluster's needs.
The yum repo file:
# high availability
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.39.250/rhel6.5/HighAvailability
gpgcheck=0

# load balancing
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.39.250/rhel6.5/LoadBalancer
gpgcheck=0

# resilient storage
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.39.250/rhel6.5/ResilientStorage
gpgcheck=0

# large file system support
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.39.250/rhel6.5/ScalableFileSystem
gpgcheck=0
yum repolist  ## refresh the yum metadata
*****************************************************
*repo id repo name status
*HighAvailability HighAvailability 56
*LoadBalancer LoadBalancer 4
*ResilientStorage ResilientStorage 62
*ScalableFileSystem ScalableFileSystem 7
*rhel6.5 Red Hat Enterprise Linux 3,690
*repolist: 3,819
*****************************************************
2. Install: yum install -y ricci
passwd ricci  ## on RHEL 6 ricci must be given a password
3. /etc/init.d/ricci start  ## start the service
chkconfig ricci on  ## enable at boot
4. [root@server4 ~]# clustat  ## check the cluster status
Cluster Status for hahaha @ Sat Jul 22 17:02:21 2017
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server1 1 Online
server4 2 Online, Local
5. Configure server1 the same way:
a.vim /etc/yum.repos.d/rhel-source.repo
yum repolist
b. yum install -y ricci
passwd ricci
/etc/init.d/ricci start
chkconfig ricci on
c. yum install -y luci  ## the graphical management UI ------- install it only on the host you manage from (you'll notice it is written entirely in Python)
/etc/init.d/luci start
chkconfig luci on  ## enable at boot
d. clustat
In a browser:
https://172.25.39.1:8084 ---- test from whichever host has luci installed
This article was reposted from yab109's 51CTO blog; original link: http://blog.51cto.com/12768057/1950223. Please contact the original author before reprinting.