Nginx + Resin load balancing on Linux: a worked solution to the session problem
Reposted from: http://blog.chinaunix.net/uid-14007440-id-3150269.html
https://guying1028.iteye.com/blog/1746685
This article routes requests by the session cookie, solving the problem of sessions not being shared across Nginx backends.
The nginx_upstream_jvm_route module (applying its patch may fail; you need to find a compatible nginx version)
This module achieves session stickiness via the session cookie. If there is no session in either the cookie or the URL, it falls back to simple round-robin load balancing.
1. When a request first arrives with no session information, jvm_route dispatches it round-robin to one of the Resin servers.
2. That Resin instance attaches session information and returns the response to the client.
3. When the user submits the next request, jvm_route sees the backend server's name in the session id and forwards the request to the corresponding server.
For now jvm_route does not support the fair mode; its working model conflicts with fair. For a given user, when the Resin instance that has been serving them goes down,
jvm_route by default retries max_fails times; if that still fails, it falls back to round-robin, and in that case the user's session is lost.
In short, jvm_route implements session stickiness via the session cookie, pinning a given session to a specific Resin instance and thereby avoiding out-of-sync sessions,
but it cannot migrate a session to another server after a crash.
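The three steps above can be sketched as a routing decision in shell. This is only an illustration: the cookie value is made up, and it assumes Resin's convention of prefixing the session id with the owning server's id character, which jvm_route matches against the srun_id marks:

```shell
#!/bin/sh
# Simulate jvm_route's lookup: pull JSESSIONID out of the Cookie header, then
# route by the server-id character at the front of the session id.
# (The cookie value below is a made-up example.)
COOKIE='JSESSIONID=baaXLhmIFcGZ0ryQfTZv; theme=dark'
SID=$(printf '%s' "$COOKIE" | sed -n 's/.*JSESSIONID=\([^;]*\).*/\1/p')
case "$(printf '%s' "$SID" | cut -c1)" in
  a) echo "route to 192.168.179.200 (srun_id=a)" ;;
  b) echo "route to 192.168.179.201 (srun_id=b)" ;;
  *) echo "no route id found: fall back to round-robin" ;;
esac
```

A request with no Cookie header yields an empty SID and falls into the default branch, which mirrors step 1 above.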
Test environment:
server1: nginx + resin installed
server2: resin only
server1 IP address: 192.168.179.200
server2 IP address: 192.168.179.201
Install and configure nginx + nginx_upstream_jvm_route on server1
Download nginx-0.7.59.tar.gz from:
http://nginx.org
Use the nginx-upstream-jvm-route-read-only module (the download site may be blocked in some regions):
https://code.google.com/archive/p/nginx-upstream-jvm-route/downloads
Download resin from:
http://caucho.com/download/resin-4.0.53.tar.gz
Install nginx
First install the missing prerequisites (pcre, zlib, openssl, gcc):
yum -y install pcre-devel
yum -y install openssl openssl-devel
yum install -y zlib-devel
yum -y install gcc
# the nginx source tree lives under /usr/local
cd nginx
# apply the jvm_route patch from inside the nginx source directory
patch -p0 < ../nginx_upstream_jvm_route/jvm_route.patch
Add the www user:
useradd www
Build and install nginx:
# configure
./configure --user=www --group=www --with-http_ssl_module --add-module=/usr/local/nginx_upstream_jvm_route
# compile
make
# install
make install
Applying the patch may fail with:
bash: patch: command not found...
Install patch and retry:
yum -y install patch
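Before letting patch modify the nginx sources it is worth a dry run: GNU patch's --dry-run flag reports whether the patch would apply, without changing any files. A self-contained toy sketch (demo.txt and demo.patch are throwaway examples, not the real jvm_route.patch):

```shell
#!/bin/sh
# Demonstrate `patch --dry-run`: it checks applicability but leaves the
# target file untouched. All files here are throwaway examples.
cd "$(mktemp -d)"
printf 'old line\n' > demo.txt
cat > demo.patch <<'EOF'
--- demo.txt
+++ demo.txt
@@ -1 +1 @@
-old line
+new line
EOF
patch -p0 --dry-run < demo.patch && echo "patch would apply cleanly"
grep 'old line' demo.txt   # the file still holds its original content
```

The same idea against the real tree, `patch -p0 --dry-run < ../nginx_upstream_jvm_route/jvm_route.patch`, fails fast on an incompatible nginx version instead of leaving half-patched sources behind.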
Running the patch command again fails, this time because the module and nginx versions are incompatible:
[root@mylinuxclone nginx]# patch -p0 < ../nginx_upstream_jvm_route/jvm_route.patch
patching file src/http/ngx_http_upstream.c
Hunk #1 succeeded at 5670 with fuzz 1 (offset 1933 lines).
Hunk #2 succeeded at 5768 with fuzz 2 (offset 1939 lines).
Hunk #3 succeeded at 5787 with fuzz 1 (offset 1918 lines).
Hunk #4 FAILED at 3922.
Hunk #5 succeeded at 5901 with fuzz 1 (offset 1949 lines).
1 out of 5 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream.c.rej
patching file src/http/ngx_http_upstream.h
Hunk #1 FAILED at 85.
Hunk #2 FAILED at 97.
Hunk #3 FAILED at 111.
3 out of 3 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream.h.rej
Check the project's wiki:
nginx-upstream-jvm-route - nginx_with_resin.wiki
at: https://code.google.com/archive/p/nginx-upstream-jvm-route/wikis/nginx_with_resin.wiki
Nginx's configuration:

```
upstream backend {
    server 192.168.0.100 srun_id=a;
    server 192.168.0.101 srun_id=b;
    jvm_route $cookie_JSESSIONID;
}
```
Resin's configuration, for all resin servers:

```
<server id="a" address="192.168.0.100" port="8080">
    <http id="" port="80"/>
</server>
<server id="b" address="192.168.0.101" port="8080">
    <http id="" port="80"/>
</server>
```

And start each resin instance like this:
server a:
/usr/local/resin/bin/httpd.sh -server a start
server b:
/usr/local/resin/bin/httpd.sh -server b start
Install resin on both machines:
cd resin
./configure --prefix=/usr/local/resin
make
make install
Configure resin on both machines:
cd /usr/local/resin/conf
vim resin.conf
## find <http address="*" port="8080"/>
## comment it out: <!--http address="*" port="8080"/-->
## find <server id="" address="127.0.0.1" port="6800">
## and replace it with:
On server1:
<server id="a" address="192.168.179.200" port="8080">
    <http id="" port="80"/>
</server>
On server2:
<server id="b" address="192.168.179.201" port="8080">
    <http id="" port="80"/>
</server>
Write index.jsp:
cd /usr/local/resin/webapps/ROOT/
# back up the original
mv index.jsp index.jsp.bak
vim index.jsp
index.jsp:
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
</head>
<body>
121
<!-- on server2 this line reads 162 instead -->
<br />
<%out.print(request.getSession());%>
<!-- print the session -->
<br />
<%out.println(request.getHeader("Cookie"));%>
<!-- print the Cookie request header -->
</body>
</html>
Restart resin.
4. Integrate nginx and resin
cd /usr/local/nginx/conf
mv nginx.conf nginx.bak
vim nginx.conf
## the configuration follows ##
user www www;
worker_processes 4;
error_log logs/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events
{
use epoll;
worker_connections 2048;
}
http
{
upstream backend {
server 192.168.179.200:8080 srun_id=a;
#### srun_id=a here matches server id="a" in server1's resin config
server 192.168.179.201:8080 srun_id=b;
#### srun_id=b here matches server id="b" in server2's resin config
jvm_route $cookie_JSESSIONID|sessionid;
}
include mime.types;
default_type application/octet-stream;
#charset gb2312;
charset UTF-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 20m;
limit_rate 1024k;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
tcp_nodelay on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
gzip on;
#gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
#limit_zone crawler $binary_remote_addr 10m;
server
{
listen 80;
server_name 192.168.179.200;
index index.html index.htm index.jsp;
root /var/www;
location ~ .*\.jsp$
{
proxy_pass http://backend ;
proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
{
expires 30d;
}
location ~ .*\.(js|css)?$
{
expires 1h;
}
location /stu {
stub_status on;
access_log off;
}
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
# access_log off;
}
}
Restart nginx.
5. Test: open a browser and visit http://192.168.179.200/index.jsp
The session id begins with aXXXXX, so the request went to the .200 server, i.e. server1. Since this is the first visit, no Cookie header is present yet. Refresh to see whether round-robin moves you over to .201 (server2).
After N refreshes the page still comes from .200, which means the patch is working, and the cookie value now shows up too. As a second test I opened Firefox (a different browser, so it starts with a fresh session and fresh cookies) and visited:
http://192.168.179.200/index.jsp
This time the page comes from .201, with a session id beginning with bXXX; after N more refreshes:
it is still served by .201, server2! If you have any doubts while testing, remove
srun_id=a and srun_id=b from the nginx config and visit again; you will see the pages alternate in round-robin fashion.
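For that comparison, the upstream block reduces to plain round-robin once the srun_id marks (and, in this sketch, the jvm_route line as well) are taken out:

```nginx
# Without srun_id marks or a jvm_route directive, nginx simply alternates
# requests between the two backends, so repeated refreshes flip between
# server1 and server2, and the session changes on every flip.
upstream backend {
    server 192.168.179.200:8080;
    server 192.168.179.201:8080;
}
```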