Saturday, March 24, 2012

ATG ServerLockManager


You should also configure one or more ATG servers to start the /atg/dynamo/service/ServerLockManager component on application startup. To do this, add the ServerLockManager to the initialServices property of /atg/dynamo/service/Initial in the server-specific configuration layer of the server you have chosen to run a ServerLockManager in. For example, to run the ServerLockManager in an ATG server instance named derrida, you could add this properties file at

<ATG2007.3dir>/home/servers/derrida/localconfig/atg/dynamo/service/Initial.properties:

# /servers/derrida/localconfig/atg/dynamo/service/Initial.properties

initialServices+=ServerLockManager

ServerLockManager Failover

You can configure more than one ServerLockManager. One acts as the primary lock server, while the other acts as a backup. If the primary ServerLockManager fails, the backup takes over and clients begin sending their lock requests to it. If both ServerLockManagers fail, caching is simply disabled. The site still functions under that condition, just more slowly, since it must go to the database more often rather than using the cache. Cache mode also switches to disabled for any transaction that is unable to obtain a lock. Once a ServerLockManager is restored, caching resumes.


For example, if you have two ServerLockManager components named tartini and corelli, each listening on port 9010, you could configure them like this:

# tartini:9010
$class=atg.service.lockmanager.ServerLockManager
handlerCount=0
port=9010
otherLockServerAddress=corelli
otherLockServerPort=9010
otherServerPollInterval=2000
waitTimeBeforeSwitchingFromBackup=10000
# corelli:9010
$class=atg.service.lockmanager.ServerLockManager
handlerCount=0
port=9010
otherLockServerAddress=tartini
otherLockServerPort=9010
otherServerPollInterval=2000
waitTimeBeforeSwitchingFromBackup=10000



It is best if the primary ServerLockManager runs in an ATG instance that does not also handle user sessions by running a DrpServer. Not only does this keep the load on the ServerLockManager from affecting user sessions, it also lets you stop and restart the DrpServer without restarting the ServerLockManager. If there is enough lock contention on your site that the lock server itself becomes a bottleneck, you can create separate lock servers for different repositories to distribute the load. Note that in this setup, a lock server cannot detect deadlocks that span lock servers, and each ATG instance needs a separate ClientLockManager instance referring to each ServerLockManager.
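
As a sketch of the client side, a ClientLockManager that knows about both lock servers from the failover example might be configured like this (the $class and the lockServerAddress, lockServerPort, and useLockServer properties are the standard ATG ClientLockManager ones; the hostnames tartini and corelli are the hypothetical servers from above):

# /atg/dynamo/service/ClientLockManager.properties
$class=atg.service.lockmanager.ClientLockManager
useLockServer=true
lockServerAddress=tartini,corelli
lockServerPort=9010,9010

The lockServerAddress and lockServerPort arrays list the lock servers in parallel, so the client can fail over from one ServerLockManager to the other when requesting locks.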
