
Setting Up Clustering, Load Balancing, and a Cache Cluster for an SSH-Framework Project (Part 1)

I recently set up a cluster for an SSH project and ran into quite a few problems. The material I found online was scattered and incomplete, so I am sharing my notes here to save newcomers some detours. Experienced readers can skip ahead.

First, the framework stack:

Spring 2.5 + Struts 2.2.1 + Hibernate 3.2, with Ehcache 1.6.2 as the Hibernate second-level cache provider.

Before reading on, make sure the relevant entity classes in your project are serializable, i.e. that they implement the java.io.Serializable interface.

The details are easy to Google, so they are not explained here.
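As a minimal sketch (the class and field names are made up for illustration), an entity that is replicated across the cluster or stored in the second-level cache should look like this:

import java.io.Serializable;

public class User implements Serializable {

    // a fixed serialVersionUID keeps serialization compatible across the cluster nodes
    private static final long serialVersionUID = 1L;

    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}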

Environment: JDK 1.6, Tomcat 5.5.30, Apache 2.2.17.

The configuration for other versions is largely the same; differences are pointed out where they matter. Downloading and installing the software is not covered.

I. Configuring Apache + Tomcat load balancing and clustering

1. Edit httpd.conf under Apache's conf directory and add the following:

#Load mod_jk (must be downloaded separately)
LoadModule jk_module modules/mod_jk-1.2.31-httpd-2.2.3.so


mod_jk-1.2.31-httpd-2.2.3.so can be downloaded from the Apache website; make sure the build matches your Apache version, and place it in Apache's modules directory.

#Location of the mod_jk log file
JkLogFile "logs/mod_jk.log"


It is worth configuring the log file; it makes troubleshooting much faster when something goes wrong.

#Include the mod_jk configuration file, conf/mod_jk.conf
Include conf/mod_jk.conf


This pulls in the mod_jk configuration file, which is covered in detail below.

All of the snippets above go into Apache's httpd.conf; where exactly to insert them can be judged from the existing layout of httpd.conf. The consolidated block is shown below.
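Putting the three directives together, the complete block added to httpd.conf looks like this (adjust the .so file name to match the build you actually downloaded):

# mod_jk integration
LoadModule jk_module modules/mod_jk-1.2.31-httpd-2.2.3.so
JkLogFile "logs/mod_jk.log"
Include conf/mod_jk.conf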

2. Create a new file mod_jk.conf in the conf directory with the following content:

#mod_jk module configuration
#Path to the workers.properties file
JkWorkersFile conf/workers.properties
#Load-balancing controller (specific patterns, e.g. *.jpg or *.html, can be mapped to any worker defined below)
#Forward matching requests to Tomcat; add further mappings to suit your project
#The line below forwards all requests to Tomcat
JkMount /* controller
#Further examples; several JkMount lines may be used
#Forward JSP requests to Tomcat
#JkMount /*.jsp controller
#Forward Action (*.do) requests to Tomcat
#JkMount /*.do controller


These mappings determine which requests are dispatched to Tomcat, which makes it possible to separate dynamic from static content: for example, forward only JSP and Action requests to Tomcat and let Apache serve static files such as js, html, jpg and css itself. (The static files then need to be copied into Apache's document root, or the configuration must point at the directory where they live.) A sketch of such a split is shown below.
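A minimal sketch of such a split, assuming Apache 2.2 syntax and that the static files have been copied to a directory such as /var/www/static (the path and alias are only examples):

# mod_jk.conf: forward only dynamic requests to Tomcat
JkMount /*.jsp controller
JkMount /*.do controller

# httpd.conf: Apache serves the static files itself
Alias /static "/var/www/static"
<Directory "/var/www/static">
    Order allow,deny
    Allow from all
</Directory>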

3. Create a workers.properties file in the conf directory:

worker.list = controller,tomcat1,tomcat2  #list of workers
#========tomcat1========
worker.tomcat1.port=9009         #AJP13 port, set in Tomcat's server.xml (default 8009)
worker.tomcat1.host=localhost  #Tomcat host; use the IP address if it is not the local machine
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor = 1   #load-balancing weight; the higher the value, the more requests this worker gets
#========tomcat2========
worker.tomcat2.port=8009       #AJP13 port, set in Tomcat's server.xml (default 8009)
worker.tomcat2.host=localhost  #Tomcat host; use the IP address if it is not the local machine
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor = 1   #load-balancing weight; the higher the value, the more requests this worker gets
#========controller, the load balancer========
worker.controller.type=lb
worker.controller.balance_workers=tomcat1,tomcat2  #Tomcat workers that share the requests
#session stickiness
worker.controller.sticky_session=false
worker.controller.sticky_session_force=false


Both Tomcat instances here run on the same machine, which is why their ports had to be changed; when they run on different machines the defaults can stay as they are. The details are covered in the Tomcat configuration below.

Pay particular attention to the session-stickiness settings, i.e. the sticky_session and sticky_session_force attributes:

worker.controller.sticky_session=true
This decides whether session replication is needed. Set to true, sessions are sticky and are not replicated: once a user's first request has been dispatched to a particular Tomcat, all of that user's subsequent requests keep going to the same Tomcat. Set to false (the setup used in this article), session replication is required.
worker.controller.sticky_session_force=true
Only relevant when sticky_session is true, and in that case it is advisable to set it to true as well. It decides what happens when the Tomcat that owns a session stops responding: with true the request is rejected instead of being forwarded to another Tomcat; with false the request is failed over to another Tomcat, which does not have the original session, so any code that reads attributes from the session there will run into null values and NullPointerExceptions.
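For comparison, a sticky-session setup without session replication, the simpler alternative when losing sessions on a node failure is acceptable, would look roughly like this (mod_jk routes by the ".tomcat1"/".tomcat2" suffix appended to the session id, so jvmRoute in each server.xml must match the worker names):

worker.controller.type=lb
worker.controller.balance_workers=tomcat1,tomcat2
#stick each session to the node that created it
worker.controller.sticky_session=true
#do not fail over to another node; the session would be lost there anyway
worker.controller.sticky_session_force=true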

4. Tomcat configuration

Tomcat 5.5 is used as the example; for 6.0, 7.0 or later, refer to the Tomcat documentation:

/docs/cluster-howto.html (simple how to)

/docs/config/cluster.html (reference documentation)

The complete server.xml follows. If several Tomcat instances run on the same machine, change the relevant ports to avoid conflicts, consistent with the workers.properties file above.

<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements.  See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License.  You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Example Server Configuration File -->
<!-- Note that component elements are nested corresponding to their
parent-child relationships with each other -->
<!-- A "Server" is a singleton element that represents the entire JVM,
which may contain one or more "Service" instances.  The Server
listens for a shutdown command on the indicated port.
Note:  A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" or "Loggers" at this level.
-->
<Server port="8005" shutdown="SHUTDOWN">
<!-- Comment these entries out to disable JMX MBeans support used for the
administration web application -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
<!-- Global JNDI resources -->
<GlobalNamingResources>
<!-- Test entry for demonstration purposes -->
<Environment name="simpleValue" type="java.lang.Integer" value="30"/>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users -->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" (and therefore the web applications visible
within that Container).  Normally, that Container is an "Engine",
but this is not required.
Note:  A "Service" is not itself a "Container", so you may not
define subcomponents such as "Valves" or "Loggers" at this level.
-->
<!-- Define the Tomcat Stand-Alone Service -->
<Service name="Catalina">
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned.  Each Connector passes requests on to the
associated "Container" (normally an Engine) for processing.
By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
You can also enable an SSL HTTP/1.1 Connector on port 8443 by
following the instructions below and uncommenting the second Connector
entry.  SSL support requires the following steps (see the SSL Config
HOWTO in the Tomcat 5 documentation bundle for more detailed
instructions):
* If your JDK version 1.3 or prior, download and install JSSE 1.0.2 or
later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
* Execute:
%JAVA_HOME%/bin/keytool -genkey -alias tomcat -keyalg RSA (Windows)
$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA  (Unix)
with a password value of "changeit" for both the certificate and
the keystore itself.
By default, DNS lookups are enabled when a web application calls
request.getRemoteHost().  This can have an adverse impact on
performance, so you can disable it by setting the
"enableLookups" attribute to "false".  When DNS lookups are disabled,
request.getRemoteHost() will return the String version of the
IP address of the remote client.
-->
<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true" />
<!-- Note : To disable connection timeouts, set connectionTimeout value
to 0 -->

<!-- Note : To use gzip compression you could set the following properties :

compression="on"
compressionMinSize="2048"
noCompressionUserAgents="gozilla, traviata"
compressableMimeType="text/html,text/xml"
-->
<!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
<!--
<Connector port="8443" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" disableUploadTimeout="true"
acceptCount="100" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
<!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
<!-- See proxy documentation for more information about using this. -->
<!--
<Connector port="8082"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" acceptCount="100" connectionTimeout="20000"
proxyPort="80" disableUploadTimeout="true" />
-->
<!-- An Engine represents the entry point (within Catalina) that processes
every request.  The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host). -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2"/>
<!-- Define the top level container in our container hierarchy -->
<Engine name="Catalina" defaultHost="localhost">
<!-- The request dumper valve dumps useful debugging information about
the request headers and cookies that were received, and the response
headers and cookies that were sent, for all requests received by
this instance of Tomcat.  If you care only about requests to a
particular virtual host, or a particular application, nest this
element inside the corresponding <Host> or <Context> entry instead.
For a similar mechanism that is portable to all Servlet 2.4
containers, check out the "RequestDumperFilter" Filter in the
example application (the source for this filter may be found in
"$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
Note that this Valve uses the platform's default character encoding.
This may cause problems for developers in another encoding, e.g.
UTF-8.  Use the RequestDumperFilter instead.
Also note that enabling this Valve will write a ton of stuff to your
logs.  They are likely to grow quite large.  This extensive log writing
will definitely slow down your server.
Request dumping is disabled by default.  Uncomment the following
element to enable it. -->
<!--
<Valve className="org.apache.catalina.valves.RequestDumperValve"/>
-->
<!-- Because this Realm is here, an instance will be shared globally -->
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase".  Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm.  -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
<!-- Comment out the old realm but leave here for now in case we
need to go back quickly -->
<!--
<Realm className="org.apache.catalina.realm.MemoryRealm" />
-->
<!-- Replace the above Realm with one of the following to get a Realm
stored in a database and accessed via JDBC -->
<!--
<Realm  className="org.apache.catalina.realm.JDBCRealm"
driverName="org.gjt.mm.mysql.Driver"
connectionURL="jdbc:mysql://localhost/authority"
connectionName="test" connectionPassword="test"
userTable="users" userNameCol="user_name" userCredCol="user_pass"
userRoleTable="user_roles" roleNameCol="role_name" />
-->
<!--
<Realm  className="org.apache.catalina.realm.JDBCRealm"
driverName="oracle.jdbc.driver.OracleDriver"
connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
connectionName="scott" connectionPassword="tiger"
userTable="users" userNameCol="user_name" userCredCol="user_pass"
userRoleTable="user_roles" roleNameCol="role_name" />
-->
<!--
<Realm  className="org.apache.catalina.realm.JDBCRealm"
driverName="sun.jdbc.odbc.JdbcOdbcDriver"
connectionURL="jdbc:odbc:CATALINA"
userTable="users" userNameCol="user_name" userCredCol="user_pass"
userRoleTable="user_roles" roleNameCol="role_name" />
-->
<!-- Define the default virtual host
Note: XML Schema validation will not work with Xerces 2.2.
-->
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true"
xmlValidation="false" xmlNamespaceAware="false">
<!-- Defines a cluster for this node,
By defining this element, means that every manager will be changed.
So when running a cluster, only make sure that you have webapps in there
that need to be clustered and remove the other ones.
A cluster has the following parameters:
className = the fully qualified name of the cluster class
clusterName = a descriptive name for your cluster, can be anything
mcastAddr = the multicast address, has to be the same for all the nodes
mcastPort = the multicast port, has to be the same for all the nodes

mcastBindAddress = bind the multicast socket to a specific address

mcastTTL = the multicast TTL if you want to limit your broadcast

mcastSoTimeout = the multicast readtimeout
mcastFrequency = the number of milliseconds in between sending a "I'm alive" heartbeat
mcastDropTime = the number a milliseconds before a node is considered "dead" if no heartbeat is received
tcpThreadCount = the number of threads to handle incoming replication requests, optimal would be the same amount of threads as nodes
tcpListenAddress = the listen address (bind address) for TCP cluster request on this host,
in case of multiple ethernet cards.
auto means that address becomes
InetAddress.getLocalHost().getHostAddress()
tcpListenPort = the tcp listen port
tcpSelectorTimeout = the timeout (ms) for the Selector.select() method in case the OS
has a wakup bug in java.nio. Set to 0 for no timeout
printToScreen = true means that managers will also print to std.out
expireSessionsOnShutdown = true means that
useDirtyFlag = true means that we only replicate a session after setAttribute,removeAttribute has been called.
false means to replicate the session after each request.
false means that replication would work for the following piece of code: (only for SimpleTcpReplicationManager)
<%
HashMap map = (HashMap)session.getAttribute("map");
map.put("key","value");
%>
replicationMode = can be either 'pooled', 'synchronous' or 'asynchronous'.
* Pooled means that the replication happens using several sockets in a synchronous way. Ie, the data gets replicated, then the request return. This is the same as the 'synchronous' setting except it uses a pool of sockets, hence it is multithreaded. This is the fastest and safest configuration. To use this, also increase the nr of tcp threads that you have dealing with replication.
* Synchronous means that the thread that executes the request, is also the
thread the replicates the data to the other nodes, and will not return until all
nodes have received the information.
* Asynchronous means that there is a specific 'sender' thread for each cluster node,
so the request thread will queue the replication request into a "smart" queue,
and then return to the client.
The "smart" queue is a queue where when a session is added to the queue, and the same session
already exists in the queue from a previous request, that session will be replaced
in the queue instead of replicating two requests. This almost never happens, unless there is a
large network delay.
-->
<!--
When configuring for clustering, you also add in a valve to catch all the requests
coming in, at the end of the request, the session may or may not be replicated.
A session is replicated if and only if all the conditions are met:
1. useDirtyFlag is true or setAttribute or removeAttribute has been called AND
2. a session exists (has been created)
3. the request is not trapped by the "filter" attribute
The filter attribute is to filter out requests that could not modify the session,
hence we don't replicate the session after the end of this request.
The filter is negative, ie, anything you put in the filter, you mean to filter out,
ie, no replication will be done on requests that match one of the filters.
The filter attribute is delimited by ;, so you can't escape out ; even if you wanted to.
filter=".*/.gif;.*/.js;" means that we will not replicate the session after requests with the URI
ending with .gif and .js are intercepted.

The deployer element can be used to deploy apps cluster wide.
Currently the deployment only deploys/undeploys to working members in the cluster
so no WARs are copied upons startup of a broken node.
The deployer watches a directory (watchDir) for WAR files when watchEnabled="true"
When a new war file is added the war gets deployed to the local instance,
and then deployed to the other instances in the cluster.
When a war file is deleted from the watchDir the war is undeployed locally
and cluster wide
-->

<!-- Cluster configuration: in the stock server.xml this element is commented out, simply enable it as shown here. If several Tomcat instances run on the same machine, also change the tcpListenPort. -->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
managerClassName="org.apache.catalina.cluster.session.DeltaManager"
expireSessionsOnShutdown="false"
useDirtyFlag="true"
notifyListenersOnReplication="true">
<Membership
className="org.apache.catalina.cluster.mcast.McastService"
mcastAddr="228.0.0.4"
mcastPort="45564"
mcastFrequency="500"
mcastDropTime="3000"/>
<Receiver
className="org.apache.catalina.cluster.tcp.ReplicationListener"
tcpListenAddress="auto"
tcpListenPort="4001"
tcpSelectorTimeout="100"
tcpThreadCount="6"/>
<Sender
className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
replicationMode="pooled"
ackTimeout="15000"
waitForAck="true"/>
<Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
filter=".*/.gif;.*/.js;.*/.jpg;.*/.png;.*/.htm;.*/.html;.*/.css;.*/.txt;"/>

<Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>

<ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
</Cluster>

<!-- Normally, users must authenticate themselves to each web app
individually.  Uncomment the following entry if you would like
a user to be authenticated the first time they encounter a
resource protected by a security constraint, and then have that
user identity maintained across *all* web applications contained
in this virtual host. -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all requests for this virtual host.  By
default, log files are created in the "logs" directory relative to
$CATALINA_HOME.  If you wish, you can specify a different
directory with the "directory" attribute.  Specify either a relative
(to $CATALINA_HOME) or absolute path to the desired directory.
-->
<!--
<Valve className="org.apache.catalina.valves.AccessLogValve"
directory="logs"  prefix="localhost_access_log." suffix=".txt"
pattern="common" resolveHosts="false"/>
-->
<!-- Access log processes all requests for this virtual host.  By
default, log files are created in the "logs" directory relative to
$CATALINA_HOME.  If you wish, you can specify a different
directory with the "directory" attribute.  Specify either a relative
(to $CATALINA_HOME) or absolute path to the desired directory.
This access log implementation is optimized for maximum performance,
but is hardcoded to support only the "common" and "combined" patterns.
-->
<!--
<Valve className="org.apache.catalina.valves.FastCommonAccessLogValve"
directory="logs"  prefix="localhost_access_log." suffix=".txt"
pattern="common" resolveHosts="false"/>
-->
</Host>
</Engine>
</Service>
</Server>


For testing you can either copy the whole Tomcat installation several times, or run several instances defined in a single server.xml; copying the installation is recommended because it is easier to keep track of. A sketch of the values that have to differ between the two local copies follows.
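In this sketch only the AJP port 9009 is dictated by the workers.properties above; every other number is just an example, and any value that does not clash with the first instance will do. The second copy (tomcat1) would differ from the server.xml shown above roughly like this:

<!-- tomcat1: values that must differ from the tomcat2 server.xml shown above -->
<Server port="8006" shutdown="SHUTDOWN">          <!-- shutdown port, example value -->
  <Service name="Catalina">
    <!-- HTTP connector, example value -->
    <Connector port="8081" enableLookups="false" redirectPort="8443" />
    <!-- AJP connector: must match worker.tomcat1.port=9009 -->
    <Connector port="9009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
    <!-- jvmRoute must match the worker name -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <!-- in the Cluster element, give the Receiver a different tcpListenPort, e.g. 4002 -->
      </Host>
    </Engine>
  </Service>
</Server>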

To turn on session replication for all web applications, edit Tomcat's context.xml:

<Context distributable="true">
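Equivalently, instead of switching it on globally in context.xml, you can mark just the application that needs replication as distributable in its own WEB-INF/web.xml (standard Servlet 2.4 syntax):

<web-app xmlns="http://java.sun.com/xml/ns/j2ee" version="2.4">
    <distributable/>
    <!-- servlet, filter and other definitions ... -->
</web-app>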