Author |
|
manishsharma
Joined: 08 Jun 2012 Posts: 3 Location: india
|
Posted: Mon 11 Jun '12 9:04 Post subject: Apache Tomcat integration issue |
|
|
Hi,
I integrated Apache with two Tomcat servers, tomcatA as active and tomcatB as passive, following the procedure and instructions on the official Apache website. If the active server is down, requests are forwarded to the passive server. The problem is that once the active server comes back up, requests are still forwarded to the passive server instead of the active one.
workers.properties
Code: | workers.tomcat_home=/var/lib/apache-tomcat-6.0.35
workers.java_home=$JAVA_HOME
ps=/
worker.list=router
worker.router.type=lb
worker.router.balance_workers=tomcatA,tomcatB
worker.router.sticky_session=false
# Define the first member worker
worker.tomcatA.type=ajp13
worker.tomcatA.host=myhost1
worker.tomcatA.port=8009
worker.tomcatA.lbfactor=500
# Define the second member worker
worker.tomcatB.type=ajp13
worker.tomcatB.host=myhost2
worker.tomcatB.port=8109
worker.tomcatB.lbfactor=200 |
httpd.conf
Code: | JkMount /DEMO router
JkMount /DEMO/* router |
|
|
Back to top |
|
James Blond Moderator
Joined: 19 Jan 2006 Posts: 7371 Location: Germany, Next to Hamburg
|
Posted: Mon 11 Jun '12 11:36 Post subject: |
|
|
As far as I know, that is exactly what sticky sessions are for, and you disabled them.
Also, as I remember it, the balancer should look like the following, but I'm not a Tomcat person, so 'router' may well be correct.
Code: |
worker.list=tomcatA,tomcatB,loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcatA,tomcatB
worker.loadbalancer.sticky_session=true
|
Server1: conf/server.xml
Code: |
<Engine jvmRoute="tomcatA" name="Standalone" defaultHost="localhost" debug="0">
|
Server2: conf/server.xml
Code: |
<Engine jvmRoute="tomcatB" name="Standalone" defaultHost="localhost" debug="0">
|
The jvmRoute value has to be the same name as the worker in the worker.properties file.
All that alone is not enough. You also need in-memory session replication.
server.xml
Code: |
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
managerClassName="org.apache.catalina.cluster.session.DeltaManager"
expireSessionsOnShutdown="false"
useDirtyFlag="true"
notifyListenersOnReplication="true">
<Membership
className="org.apache.catalina.cluster.mcast.McastService"
mcastAddr="228.0.0.4"
mcastPort="45564"
mcastFrequency="500"
mcastDropTime="3000"/>
<Receiver
className="org.apache.catalina.cluster.tcp.ReplicationListener"
tcpListenAddress="auto"
tcpListenPort="4001"
tcpSelectorTimeout="100"
tcpThreadCount="6"/>
<Sender
className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
replicationMode="pooled"
ackTimeout="15000"/>
<Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
<Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
</Cluster>
|
This is only from RTFM, not from experience. Please RTFM yourself as well. |
|
Back to top |
|
mwu
Joined: 25 Mar 2012 Posts: 13
|
Posted: Mon 11 Jun '12 13:15 Post subject: |
|
|
You can configure a "failover node" and define the "failover node" as not active on startup:
Code: |
worker.maintain=60
worker.list=lb1and2
worker.lbworker1.port=8009
worker.lbworker1.host=192.168.99.1
worker.lbworker1.type=ajp13
worker.lbworker1.lbfactor=1
worker.lbworker1.socket_keepalive=True
# Define preferred failover node for lbworker1
worker.lbworker1.redirect=lbworker2
worker.lbworker2.port=8009
worker.lbworker2.host=192.168.99.2
worker.lbworker2.type=ajp13
worker.lbworker2.lbfactor=1
worker.lbworker2.socket_keepalive=True
# Disable lbworker2 for all requests except failover
worker.lbworker2.activation=d
# Define the LB worker
worker.lb1and2.type=lb
worker.lb1and2.balance_workers=lbworker1,lbworker2
|
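To watch the failover and recovery happen, mod_jk's status worker can help; a minimal sketch (the /jkstatus path and placement are my choice, and access to it should be restricted in production):
Code: |
# workers.properties: add a status worker alongside the lb worker
worker.list=lb1and2,jkstatus
worker.jkstatus.type=status
# httpd.conf: expose the status page
# JkMount /jkstatus jkstatus
|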
Michael |
|
Back to top |
|
manishsharma
Joined: 08 Jun 2012 Posts: 3 Location: india
|
Posted: Mon 18 Jun '12 10:37 Post subject: |
|
|
Thanks for the reply guys.
@James Blond
Sticky sessions remember which server your session was on last time; I disabled them to check whether it makes any difference.
I have the same configuration as you mentioned except the server.xml part, and before I make changes to that file, I would like to ask about the significance of the code inside the Cluster tag.
I think it's related to moving the session from one server to another if the former goes down. Am I right?
@Michael
I have already tried what you mentioned, but the problem is still there. It works fine in the failover case, same as the configuration I posted, but the problem occurs when the user tries to access the server again once both servers are up: that request is again handled by the secondary server rather than the primary.
Any more suggestions or changes that I can make to handle this are welcome.
Manish |
|
Back to top |
|
James Blond Moderator
Joined: 19 Jan 2006 Posts: 7371 Location: Germany, Next to Hamburg
|
|
Back to top |
|
manishsharma
Joined: 08 Jun 2012 Posts: 3 Location: india
|
Posted: Mon 18 Jun '12 18:31 Post subject: |
|
|
@James Blond
I checked both links:
the first is the same as the one on the Apache website;
the second is useful background but has nothing to do with my issue.
Since you say you are good at googling, can you try to find a way to always send requests to the primary Tomcat first, forward them to the backup only if the primary is down, and, once the primary is back up, route requests back to the primary and NOT to the backup?
|
|
Back to top |
|
James Blond Moderator
Joined: 19 Jan 2006 Posts: 7371 Location: Germany, Next to Hamburg
|
Posted: Mon 18 Jun '12 22:43 Post subject: |
|
|
I don't know how it works with mod_jk, but with mod_proxy_ajp you can use Apache as the load balancer. Then Apache will handle session management and the rest.
It needs mod_proxy and mod_proxy_ajp. This is a config I run with Jenkins, and it works fine for me.
Code: |
# map to cluster with session affinity (sticky sessions)
ProxyStatus On
ProxyPass /balancer !
ProxyPass / balancer://mycluster/ stickysession=jsessionid nofailover=On
ProxyPassReverse / balancer://mycluster stickysession=jsessionid nofailover=On
<Proxy balancer://mycluster>
BalancerMember ajp://tomcat1:8009 route=tomcat1
BalancerMember ajp://tomcat2:8009 route=tomcat2
BalancerMember ajp://tomcat3:8009 route=tomcat3
</Proxy>
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
|
Proxy balancer:// - defines the nodes (workers) in the cluster. Each member may be an http:// or ajp:// URL, or another balancer:// URL for a cascaded load-balancing configuration.
If the worker name is not set for the Tomcat servers, session affinity (sticky sessions) will not work. The JSESSIONID cookie must have the format <sessionID>.<worker name>, where the worker name has the same value as the route specified in the BalancerMember above (in this case "tomcat1", "tomcat2" and "tomcat3"). See this article for details. The following can be added to jetty-web.xml in the WEB-INF directory to set the worker name.
Code: |
<Configure class="org.mortbay.jetty.webapp.WebAppContext">
<Get name="sessionHandler">
<Get name="sessionManager">
<Call name="setIdManager">
<Arg>
<New class="org.mortbay.jetty.servlet.HashSessionIdManager">
<Set name="WorkerName">jetty1</Set>
</New>
</Arg>
</Call>
</Get>
</Get>
</Configure>
|
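The snippet above is for Jetty. For Tomcat, as used in this thread, the equivalent is the jvmRoute attribute on the Engine element in conf/server.xml; its value must match the route of the corresponding BalancerMember line (example values):
Code: |
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
|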
Due to a bug, this sometimes only works if Tomcat is not running on the same machine. See bug 52402 |
|
Back to top |
|
Qmpeltaty
Joined: 06 Feb 2008 Posts: 182 Location: Poland
|
Posted: Wed 20 Jun '12 15:12 Post subject: |
|
|
Your load balancer behaves strictly according to your mod_jk configuration, which says "divide requests between the two nodes evenly" - the lbfactor=1 parameter is responsible for that.
Code: | Only used for a member worker of a load balancer.
The integer lbfactor (load-balancing factor) is how much we expect this worker to work, i.e. the worker's work quota. The load-balancing factor is compared with the other workers that make up the load balancer. For example, if one worker has an lbfactor 5 times higher than another worker, it will receive five times more requests.
|
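As an illustration of that quota, a sketch with a 5:1 weighting (worker names taken from the original post):
Code: |
worker.tomcatA.lbfactor=5
worker.tomcatB.lbfactor=1
# tomcatA now receives roughly five times as many requests as tomcatB
|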
I'm only 5 years experienced with mod_jk, but I believe your failover expectation can't be fulfilled by mod_jk. |
|
Back to top |
|
mwu
Joined: 25 Mar 2012 Posts: 13
|
Posted: Wed 20 Jun '12 21:35 Post subject: |
|
|
Hi Manish
I had a maintenance window, so I was able to test my production mod_jk configuration with a disabled failover node (worker.lbworker2.activation=d).
Everything is working as expected:
- shutdown tomcat1
- mod_jk is connecting to failover tomcat2
- user has to relogin
- start tomcat1
- after the worker.maintain time (default 60s), mod_jk is connecting to tomcat1
- user has to relogin
- the dead session on tomcat2 will timeout after 30min
I'm using the 32Bit VC10 Apache 2.4.2 with VC10 mod_jk from apachelounge on a Windows Web Server 2008 R2:
Apache/2.4.2 (Win32) OpenSSL/1.0.1c PHP/5.3.14 mod_jk/1.2.37
Tomcat1 is running on a Windows Server Web 2008 (no R2):
Apache Tomcat/7.0.28 64Bit with Java 1.7.0_05-b05 64Bit
Tomcat2 is running on a Windows Web Server 2008 R2:
Apache Tomcat/7.0.28 64Bit with 1.7.0_05-b05 64Bit
Michael |
|
Back to top |
|