HTTP is stateless. Compared with HTTP/1.0, the biggest change in HTTP/1.1 is support for persistent connections (later HTTP/1.0 implementations can apparently request keep-alive explicitly as well), but the protocol is still stateless; in other words, the connection cannot be relied upon.

If the browser or the server adds this header:

Connection: keep-alive

the TCP connection stays open after the response is sent, and the browser can continue to send requests over the same connection. Keeping the connection open saves the time needed to set up a new connection for every request and also saves bandwidth. Persistent connections require support from both the client and the server.

If the web server sees the value "Keep-Alive" here, or sees that the request uses HTTP/1.1 (which uses persistent connections by default), it can take advantage of the persistent connection and, when a page contains many elements (applets, images, and so on), significantly reduce download time. To make this work, the web server needs to send a Content-Length header (the length of the response body) back to the client. The simplest way is to write the content into a ByteArrayOutputStream first and compute its size before actually writing it out (a servlet sketch of this appears further below).

Whichever side, the client browser (Internet Explorer, say) or the web server, has the lower keep-alive timeout, that side is the limiting factor. For example, if the client's timeout is two minutes and the web server's timeout is one minute, the effective maximum is one minute. Either the client or the server can be the limiting factor.

It is enabled by adding this header:

Connection: keep-alive

Http Keep-Alive seems to be massively misunderstood. Here's a short description of how it works, under both 1.0 and 1.1.

HTTP/1.0
Under HTTP 1.0, there is no official specification for how keepalive operates. It was, in essence, tacked on to an existing protocol. If the browser supports keep-alive, it adds an additional header to the request:

Connection: Keep-Alive

Then, when the server receives this request and generates a response, it also adds a header to the response:

Connection: Keep-Alive

Following this, the connection is NOT dropped, but is instead kept open. When the client sends another request, it uses the same connection. This will continue until either the client or the server decides that the conversation is over, and one of them drops the connection.

HTTP/1.1
Under HTTP 1.1, the official keepalive method is different. All connections are kept alive, unless stated otherwise with the following header:

Connection: close

The Connection: Keep-Alive header no longer has any meaning because of this. Additionally, an optional Keep-Alive: header is described, but is so underspecified as to be meaningless. Avoid it.

Not reliable
HTTP is a stateless protocol - this means that every request is independent of every other. Keep alive doesn't change that. Additionally, there is no guarantee that the client or the server will keep the connection open. Even in 1.1, all that is promised is that you will probably get a notice that the connection is being closed. So keepalive is something you should not write your application to rely upon.

KeepAlive and POST
The HTTP 1.1 spec states that following the body of a POST, there are to be no additional characters. It also states that "certain" browsers may not follow this spec, putting a CRLF after the body of the POST. Mmm-hmm. As near as I can tell, most browsers follow a POSTed body with a CRLF. There are two ways of dealing with this: disallow keepalive in the context of a POST request, or ignore CRLF on a line by itself. Most servers deal with this in the latter way, but there's no way to know how a server will handle it without testing.
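A minimal servlet sketch of the ByteArrayOutputStream approach mentioned above (the class name and page content are placeholders, not from the original text):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BufferedLengthServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // Render the whole response into memory first...
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        buffer.write("<html><body>hello</body></html>".getBytes("UTF-8"));

        // ...so the exact body size is known before anything is sent.
        // With Content-Length set, the server can keep the connection open.
        res.setContentType("text/html");
        res.setContentLength(buffer.size());

        OutputStream out = res.getOutputStream();
        buffer.writeTo(out);
        out.flush();
    }
}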
In a Java application, the client side can use Apache commons-httpclient to execute the method.
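A hedged sketch with Commons HttpClient 3.x (the URL is a placeholder); as far as I know, its connection manager can reuse the underlying connection once the method releases it, which is how keep-alive is exploited on the client side:

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class KeepAliveClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        GetMethod method = new GetMethod("http://localhost:8080/test");
        try {
            int status = client.executeMethod(method);   // sends the request
            System.out.println(status);
            System.out.println(method.getResponseBodyAsString());
        } finally {
            method.releaseConnection();                   // return the connection for reuse
        }
    }
}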
Commonly used servers such as Apache, Resin, and Tomcat all have configuration options that control whether keep-alive is supported.
In Tomcat this can be configured:
The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.
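This description corresponds to the Connector attribute maxKeepAliveRequests. A minimal server.xml fragment (port and values are illustrative only) might look like this:

<Connector port="8080" maxKeepAliveRequests="200" connectionTimeout="20000" />

Setting maxKeepAliveRequests="1" is what turns keep-alive off, per the description above.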
Explanation 1: a "long connection" keeps the socket open once established, whether or not it is being used; its downside is weaker security.
Explanation 2: in TCP-based communication, a long connection is one that stays connected regardless of whether data is currently being sent or received.
Explanation 3: the terms "long connection" and "short connection" seem to appear mainly in China Mobile's CMPP protocol; I have not seen them elsewhere.
Explanation 4: a short connection, as in HTTP, is just connect, request, close; the whole exchange is brief, and the server can close the connection if it receives no request for a while.

Recently I have been reading about "server push": in a B/S (browser/server) setup, some magic lets the client receive the latest information from the server (stock prices, for example) without polling, which can save a great deal of bandwidth.
Traditional polling puts heavy pressure on the server and wastes a lot of bandwidth. Switching to Ajax polling lowers the bandwidth cost (the server no longer returns a complete page), but it does not noticeably reduce the load on the server.
Push technology can improve on this, but the nature of an HTTP connection (short-lived, and it must be initiated by the client) makes push hard to implement. The usual approach is to extend the lifetime of the HTTP connection and build push on top of that.

The next question, naturally, is how to extend the lifetime of the HTTP connection. The simplest way is an endless loop:
[Servlet code fragment]
public void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    PrintWriter out = res.getWriter();
    // ... render the normal page first ...
    out.flush();
    while (true) {
        out.print("latest update");   // push the newest content to the client
        out.flush();
        try {
            Thread.sleep(3000);       // pause before the next push
        } catch (InterruptedException e) {
            return;                   // stop pushing if the thread is interrupted
        }
    }
}
Using the observer pattern would improve performance further; a sketch follows.
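A minimal sketch of the observer idea, with a hypothetical UpdateListener callback (how the listener gets attached to the data source is not from the original text and is left out). Instead of sleeping for a fixed 3 seconds, the servlet thread blocks on a queue that the observer fills only when new data actually arrives:

import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ObserverPushServlet extends HttpServlet {

    // Hypothetical observer callback: whatever produces updates (a price feed,
    // a message bus, ...) calls onUpdate() when something new arrives.
    public interface UpdateListener {
        void onUpdate(String data);
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        PrintWriter out = res.getWriter();
        final BlockingQueue<String> updates = new LinkedBlockingQueue<String>();

        // The observer simply hands each update to this request's queue.
        UpdateListener listener = new UpdateListener() {
            public void onUpdate(String data) {
                updates.offer(data);
            }
        };
        // Registering "listener" with the data source is application specific
        // and omitted here.

        try {
            while (true) {
                out.print(updates.take());  // blocks until the source pushes something
                out.flush();
            }
        } catch (InterruptedException e) {
            // stop pushing when the thread is interrupted
        }
    }
}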
Either way, the drawback of this approach is that once a client has requested this servlet, the web server starts a thread to run the servlet code, and because the servlet never finishes, that thread is never released. One client therefore ties up one thread, and as the number of clients grows the server is still under heavy load.
Changing this fundamentally is more involved. The current trend is to work inside the web server itself: rewrite the request/response implementation on top of NIO (the java.nio package introduced in JDK 1.4) and then use a thread pool to make better use of server resources. Servers that already support this technique (which is not part of official J2EE) include Glassfish and Jetty (the latter I have only heard about, not used).
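For a rough idea of why NIO helps, here is a minimal sketch (not how any particular server actually implements it) of a single thread watching many connections with a java.nio Selector, so idle connections do not each pin a thread:

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                         // one thread waits on all connections
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // hand the ready channel to a worker from a thread pool;
                    // connections that are just waiting cost no thread here
                }
            }
        }
    }
}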
There are also frameworks and tools that can help you implement push, such as Pushlets, though I have not looked into them closely.
Over the next couple of days I plan to look at Glassfish's support for Comet (the name someone gave to server push technology), heh.