Nginx+keepalived+tomcat: highly available load balancing for Tomcat

Test environment:
CentOS 5.4, pcre-8.12, nginx-upstream-jvm-route-0.1, nginx-1.0.10, apache-tomcat-7.0.23, keepalived-1.1.17, jdk-7u2-linux-x64
Primary nginx server: 10.29.9.200
Secondary nginx server: 10.29.9.201
tomcat1: 10.29.9.202
tomcat2: 10.29.9.203
VIP: 10.29.9.188

The topology is as follows (diagram not reproduced here):

1. Install nginx on both 10.29.9.200 and 10.29.9.201

tar zxf pcre-8.12.tar.gz
cd pcre-8.12
./configure
make && make install
Download and install the patch below first; without it nginx cannot recognize the jvmRoute suffix in Tomcat session IDs, so requests cannot be kept sticky to the Tomcat that owns the session.

wget http://friendly.sinaapp.com/LinuxSoft/nginx-upstream-jvm-route-0.1.tar.gz
tar xzf nginx-upstream-jvm-route-0.1.tar.gz
tar xzf nginx-1.0.10.tar.gz
cd nginx-1.0.10
patch -p0 <../nginx_upstream_jvm_route/jvm_route.patch
./configure --prefix=/usr/local/nginx --with-http_stub_status_module \
  --with-pcre=/root/pcre-8.12 --add-module=../nginx_upstream_jvm_route/
# --with-pcre= points at the pcre source directory
make && make install

2. Configure nginx

vim /usr/local/nginx/conf/nginx.conf

user www www;
worker_processes 4;
error_log /home/wwwlogs/nginx_error.log crit;
pid /usr/local/nginx/logs/nginx.pid;

# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;

events {
    use epoll;
    worker_connections 51200;
}

http {
    upstream backend {
        server 10.29.9.202:8080 srun_id=tomcat1;
        server 10.29.9.203:8080 srun_id=tomcat2;
        jvm_route $cookie_JSESSIONID|sessionid reverse;
    }

    include mime.types;
    default_type application/octet-stream;
    charset UTF-8;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 50m;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 256k;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    #limit_zone crawler $binary_remote_addr 10m;

    # log_format is only valid in the http context, so it sits outside server{}.
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    server {
        listen 80;
        server_name www.8090u.com;
        index index.jsp index.htm index.html;
        root /home/wwwroot/;

        location / {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 30d;
        }

        location ~ .*\.(js|css)?$ {
            expires 1h;
        }

        location /Nginxstatus {
            stub_status on;
            access_log off;
        }

        access_log /home/wwwlogs/access.log access;
    }

    include vhost/*.conf;
}
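The srun_id values in the upstream must match the jvmRoute names configured later in each Tomcat's server.xml, because the jvm_route module routes on the suffix Tomcat appends to the session ID. A small illustration of that matching rule (the cookie value below is made up):

```shell
# Tomcat issues session IDs of the form <id>.<jvmRoute>; nginx's jvm_route
# module sends the request to the upstream whose srun_id equals that suffix.
JSESSIONID="0AAB6C8DE415E2FBB31E0C93AA0676C3.tomcat1"   # hypothetical cookie value
ROUTE="${JSESSIONID##*.}"   # everything after the last '.'
echo "$ROUTE"               # prints: tomcat1
```

If the suffix matches no srun_id (or the cookie is absent), the module falls back to balancing across the listed servers.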
3. Install keepalived on both nginx servers

tar zxvf keepalived-1.1.17.tar.gz
cd keepalived-1.1.17
./configure --prefix=/usr/local/keepalived
make && make install
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cd /etc/keepalived/

Master keepalived configuration (vim keepalived.conf):

vrrp_script chk_http_port {
    script "/opt/nginx_pid.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    mcast_src_ip 10.29.9.200
    priority 150
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        10.29.9.188
    }
}
Backup keepalived configuration:

vrrp_script chk_http_port {
    script "/opt/nginx_pid.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    mcast_src_ip 10.29.9.201
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        10.29.9.188
    }
}

Start keepalived and check that the virtual IP is bound on the master:

[root@xenvps0 ~]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@xenvps0 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:36:68:a4:fc brd ff:ff:ff:ff:ff:ff
    inet 10.29.9.200/24 brd 10.29.9.255 scope global eth0
    inet 10.29.9.188/32 scope global eth0

The eth0 output shows that the virtual IP 10.29.9.188 has been bound successfully.

4. Install Tomcat
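Both vrrp_script blocks call /opt/nginx_pid.sh, which the original article never shows. A minimal sketch of a common implementation, with paths assumed from this article's install locations: if nginx has died, try to restart it once; if it stays down, stop keepalived so the VIP fails over to the backup node.

```shell
#!/bin/sh
# Hypothetical /opt/nginx_pid.sh, run by keepalived every 2 seconds
# (see vrrp_script chk_http_port). Adjust paths to your layout.

nginx_procs() {
    # number of running nginx processes (0 means nginx is down)
    ps -C nginx --no-headers 2>/dev/null | wc -l
}

if [ "$(nginx_procs)" -eq 0 ]; then
    # nginx is down: attempt a single restart
    /usr/local/nginx/sbin/nginx 2>/dev/null || true
    sleep 2
    if [ "$(nginx_procs)" -eq 0 ]; then
        # still down: stop keepalived so the VIP moves to the backup node
        /etc/init.d/keepalived stop 2>/dev/null || true
    fi
fi
```

Place the script on both nginx servers and make it executable (chmod +x /opt/nginx_pid.sh), or keepalived will silently skip the check.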
1) Install tomcat_1
tar zxvf apache-tomcat-7.0.23.tar.gz
mv apache-tomcat-7.0.23 /usr/local/tomcat
2) Install tomcat_2 the same way as in 1).

5. Install the JDK on both Tomcat servers

tar zxvf jdk-7u2-linux-x64.tar.gz
mv jdk1.7.0_02 /usr/local/jdk1.7.0_02
cat >>/etc/profile <<'EOF'
export JAVA_HOME=/usr/local/jdk1.7.0_02
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
EOF
source /etc/profile   # make the environment variables take effect immediately

6. Configure the Tomcat cluster
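When appending such a block to /etc/profile, quote the here-document delimiter (<<'EOF'): with an unquoted <<EOF the shell expands $JAVA_HOME, $CLASSPATH and $PATH while writing the file, baking in their current (possibly empty) values instead of the literal text. A quick demonstration with throwaway files:

```shell
# Unquoted delimiter: $HOME is expanded at write time.
cat > /tmp/profile_expanded <<EOF
export DEMO_PATH=$HOME/bin
EOF

# Quoted delimiter: the text is written literally; expansion happens
# later, when the profile is actually sourced.
cat > /tmp/profile_literal <<'EOF'
export DEMO_PATH=$HOME/bin
EOF

grep DEMO_PATH /tmp/profile_expanded /tmp/profile_literal
```

The first file contains the expanded path, the second the literal string $HOME/bin, which is what a profile entry should hold.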
tomcat1 configuration:

Edit conf/server.xml:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="224.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <!-- address is the IP of the tomcat1 host; port is its replication port -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="10.29.9.202"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Inside <Host>…</Host> add the following line:

<Context path="" docBase="/opt/project" reloadable="false" crossContext="true"/>

tomcat2 configuration:
Edit conf/server.xml:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="224.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <!-- address is the IP of the tomcat2 host; the receiver port (4001)
         must not duplicate tomcat1's -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="10.29.9.203"
              port="4001"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

Inside <Host>…</Host> add the following line:

<Context path="" docBase="/opt/project" reloadable="false" crossContext="true"/>

7. Session configuration

Edit the web.xml under WEB-INF of the web application and add the tag
<distributable/>
placed directly before the closing </web-app> tag.
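If several applications need this change, the tag can be inserted mechanically. A sketch using GNU sed against a throwaway stand-in file (the path and contents are made up for the demonstration; try it on a copy before touching a real web.xml):

```shell
# Create a minimal stand-in web.xml for the demonstration.
cat > /tmp/web.xml <<'EOF'
<web-app>
  <display-name>demo</display-name>
</web-app>
EOF

# Insert <distributable/> immediately before </web-app> (GNU sed syntax).
sed -i 's|</web-app>|  <distributable/>\n</web-app>|' /tmp/web.xml
cat /tmp/web.xml
```

Without this flag Tomcat will not replicate the application's sessions, no matter how the Cluster element is configured.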
Enable multicast on the network interface:

route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

8. Create JSP test pages
On tomcat1:

mkdir /opt/project
cd /opt/project
vi index.jsp
<html>
<title>
tomcat1 jsp
</title>
<%
String showMessage="Hello,This is 10.29.9.202 server";
out.print(showMessage);
%>
</html>
----------------------------
On tomcat2:

mkdir /opt/project
cd /opt/project
vi index.jsp
<html>
<title>
tomcat2 jsp
</title>
<%
String showMessage="Hello,This is 10.29.9.203 server";
out.print(showMessage);
%>
</html>