nginx is built on a Master+Slave (worker) multi-process model and ships with very stable child-process management. In this model the Master process never handles business traffic itself; it only dispatches tasks, which keeps the Master process highly reliable. All business signals for the Slave (worker) processes are issued by the Master, and any worker task that times out is terminated by the Master, so the whole design is a non-blocking task model.
Keepalived is a high-availability tool for Linux that implements VRRP backup routing. A service built on Keepalived can achieve a virtually seamless, instant IP takeover when either the master or the backup fails. Combining the two gives you a fairly stable software load-balancing (LB) solution.
We have two virtual machines, ServerA and ServerB, and two virtual IPs (VIPs), 192.168.200.100 and 192.168.200.200, that expose the web services to the outside. The VIPs are used only in the keepalived configuration; the physical network interfaces carry the internal IPs.
ServerA: eth0: 192.168.200.128, VIP: 192.168.200.100 (www.srt.com.cn)
ServerB: eth0: 192.168.200.129, VIP: 192.168.200.200 (www.srtedu.com)
When both servers are healthy, requests sent to 192.168.200.100 are handled by ServerA and requests sent to 192.168.200.200 are handled by ServerB. If only ServerB fails, all requests are handled by ServerA, and the same applies in reverse if only ServerA fails.
Software environment: RHEL 5.2 with keepalived-1.1.19 and nginx-0.7.64.
Install Keepalived
wget 
tar zxvf keepalived-1.1.19.tar.gz
cd keepalived-1.1.19
./configure --prefix=/usr/local/keepalived
make
make install
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cd /etc/keepalived/
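Since the init script has been copied into /etc/init.d/, you can optionally register keepalived to start at boot on RHEL. A minimal sketch using chkconfig (an extra step, not part of the original procedure):

chkconfig --add keepalived    # register the init script as a system service
chkconfig keepalived on       # enable it for the default runlevels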
Configure Keepalived
Next comes the most important step: editing the configuration file /etc/keepalived/keepalived.conf.
global_defs {
    notification_email {
    }
    notification_email_from 
    smtp_server 211.155.225.210
    smtp_connect_timeout 30
    router_id srtweb
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    smtp_alert
    authentication {
        auth_type PASS
        auth_pass srt_L7switch
    }
    virtual_ipaddress {
        192.168.200.100
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 50
    advert_int 1
    smtp_alert
    authentication {
        auth_type PASS
        auth_pass srt_L7switch
    }
    virtual_ipaddress {
        192.168.200.200
    }
}
Start keepalived with /etc/rc.d/init.d/keepalived start, then run ip a and you will see output similar to this:
[root@node1 ~]# ip a
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:01:11:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.128/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.100/32 scope global eth0
    inet 192.168.200.200/32 scope global eth0
    inet6 fe80::20c:29ff:fe01:112a/64 scope link
As you can see, ServerA's interface is now bound to both virtual IPs, 192.168.200.100 and 192.168.200.200, and at this point both IPs can be pinged from a third machine.
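For example, a quick check from that third machine might look like this:

ping -c 3 192.168.200.100
ping -c 3 192.168.200.200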
Next, install ServerB following the same steps. ServerB's keepalived.conf is almost identical to ServerA's, except that state MASTER and state BACKUP are swapped, priority 100 and priority 50 are swapped, and router_id is changed to srtedu. After starting ServerB, run ip a on ServerA again and you will see:
[root@node1 ~]# ip a
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:01:11:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.128/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.100/32 scope global eth0
    inet6 fe80::20c:29ff:fe01:112a/64 scope link
192.168.200.200 has disappeared: it is now held by ServerB, which is the MASTER for that IP and therefore has priority over it. Traffic for the two IPs is now handled by the two servers separately. Next, verify what happens when ServerB fails: unplug ServerB's network cable, run ip a on ServerA again, and you will see:
[root@node1 ~]# ip a
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:01:11:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.128/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.100/32 scope global eth0
    inet 192.168.200.200/32 scope global eth0
    inet6 fe80::20c:29ff:fe01:112a/64 scope link
192.168.200.200 is once again held by ServerA.
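For reference, after the swaps described above the two vrrp_instance blocks on ServerB look roughly like this (a sketch reconstructed from those changes, not copied verbatim from ServerB; the omitted lines are identical to ServerA's):

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    ...
    virtual_ipaddress {
        192.168.200.100
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    ...
    virtual_ipaddress {
        192.168.200.200
    }
}

While running the unplug test you can typically also watch keepalived's VRRP state transitions in the system log, for example with grep VRRP /var/log/messages.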
Install Nginx
1. Create the group and user that Nginx will run as:
/usr/sbin/groupadd www -g 48
/usr/sbin/useradd -u 48 -g www www

2. Compile and install the support library for the rewrite module (PCRE)
wget 
tar zxvf pcre-7.7.tar.gz
cd pcre-7.7/
./configure
make && make install
cd ../

3. Compile and install Nginx
wget 
tar zxvf nginx-0.7.64.tar.gz
cd nginx-0.7.64/
./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module --with-http_flv_module
make && make install
cd ../

4. Back up the default nginx.conf
mv /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx.old

5. Create the Nginx configuration file

#vi /usr/local/nginx/conf/nginx.conf
user www www;
worker_processes 8;
pid /var/run/nginx.pid;
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 51200;
}
http {
    include mime.types;
    default_type application/octet-stream;
    charset gb2312;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    upstream srtweb {
        server 192.168.200.201:80;
        server 192.168.200.202:80;
        server 192.168.200.203:80;
    }
    upstream srtedu {
        server 192.168.200.211:80;
        server 192.168.200.212:80;
        server 192.168.200.213:80;
    }
    server {
        listen 80;
        server_name srt.com.cn;

        location / {
            proxy_pass http://srtweb;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        access_log /var/log/access_srtweb.log combined;
    }
    server {
        listen 80;
        server_name srtedu.com;

        location / {
            proxy_pass http://srtedu;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        access_log /var/log/access_srtedu.log combined;
    }
}
# The first server block also doubles as the default virtual host.
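If you prefer to make the default explicit instead of relying on block order, nginx also accepts a default parameter on listen; a minimal sketch (only the listen line changes, verify the parameter against your nginx version):

    server {
        listen 80 default;    # explicitly mark this as the default virtual host
        server_name srt.com.cn;
        ...
    }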
6. Start Nginx
ulimit -SHn 102400
/usr/local/nginx/sbin/nginx
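It can also be worth checking the configuration syntax before starting; nginx's -t switch does exactly that (an optional extra step, not part of the original procedure):

/usr/local/nginx/sbin/nginx -t    # parse and test the configuration file, then exit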
7. Repeat steps 1-6 to install and configure Nginx on ServerB in the same way.
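At this point you can check the whole chain from a third machine; a quick sketch using curl, where the Host header selects the virtual host in case DNS for the two domains does not point at the VIPs yet:

curl -H "Host: srt.com.cn" http://192.168.200.100/
curl -H "Host: srtedu.com" http://192.168.200.200/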
Postscript: For small and medium-sized companies that cannot afford expensive layer-4/layer-7 load-balancing switches, Nginx is a good choice for layer-7 load balancing, and Nginx + Keepalived lets the two Nginx load balancers back each other up: if either machine fails, the other takes over its virtual IP.
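Note that keepalived as configured above only fails a VIP over when the whole machine (or keepalived itself) goes down, not when only the nginx process dies. Keepalived's vrrp_script/track_script mechanism can cover that case; a minimal sketch (the check command, interval and weight are assumptions to adapt, and you should verify that your keepalived build supports vrrp_script):

vrrp_script chk_nginx {
    script "killall -0 nginx"    # assumed health check: non-zero exit if no nginx process exists
    interval 2                   # run the check every 2 seconds
    weight -20                   # lower this node's priority while the check is failing
}

vrrp_instance VI_1 {
    ...                          # existing settings unchanged
    track_script {
        chk_nginx
    }
}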
This environment was stress-tested with webbench at 10,000 concurrent connections.
Download address:
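Once downloaded, webbench 1.5 is a small C program that normally builds with a plain make (a sketch; adjust the tarball name to whatever you downloaded):

tar zxvf webbench-1.5.tar.gz
cd webbench-1.5
make && make install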
[root@lab webbench-1.5]# webbench -c 10000 -t 10 http://www.srt.com.cn/
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://www.srt.com.cn/
10000 clients, running 10 sec.

Speed=746202 pages/min, 2785675 bytes/sec.
Requests: 124331 susceed, 36 failed.

Because one of the nginx hosts has only 64 MB of RAM, I did not dare to test above 10,000 concurrent connections. If you intend to use this in production, consider testing at higher concurrency (webbench supports up to 30,000 concurrent clients).