Monday, March 26, 2012

ATG Paging Search Results


An individual search query can return a large number of results. Rather than displaying all of the results on
a single page, you will typically want to break them up into multiple pages, with a certain number of
items per page. Therefore, the query includes properties that you can use to specify the number of items
to include per page and which page to display.
This chapter describes these pagination properties and their effects. It includes the following topics:


Specifying the Page Size
Handling Page Requests
Types of Paging
Modifying and Resubmitting the Request


Specifying the Page Size


You use the pageSize request attribute to specify the number of items per page. For example, if
pageSize=10, the results will include ten items per page.
The following JSP fragment creates a drop-down for selecting a value for the pageSize attribute:

Page Size
<dsp:select bean="${FH}.searchRequest.pageSize"> 
  <dsp:option value="100">100</dsp:option> 
  <c:forEach begin="5" end="35" step="5" var="pageSize"> 
    <dsp:option value="${pageSize}"> 
      <c:out value="${pageSize}"/> 
    </dsp:option> 
  </c:forEach> 
</dsp:select> 


Handling Page Requests



When you render the initial page of results, you typically want to render links to other pages of results.
Each of these links actually issues a new search query that in most respects is identical to the original
query, but which specifies a different page of results. So paging involves a sequence of connected
requests and responses.
To display a specific page of results, you set the form handler’s goToPage property to a 1-based page
number. The goToPage property has an associated handler method, handleGoToPage. When a user
clicks a link that sets the value of goToPage, the handleGoToPage method issues the search for the
specified page.
To ensure that all of the requests and responses in a sequence of page requests are associated with each
other, the initial query generates a unique String identifier called a request chain token. This identifier is
included in each query in the sequence of requests, and is returned in each response.
There are various paging options available, and the ones you use depend on the needs of your site. The
following options are explained below:

· The type of paging to use: normal paging or fast paging
· Whether to save the request in the search session or not


Types of Paging 



ATG Search supports two types of paging, normal paging and fast paging. The key differences between
them relate to the information you get back from the search engine about the number of pages of results,
and the navigation you can build into your pages:

· Normal paging is the default. In this mode, the search engine returns (in the form
handler’s pagesAvailable property) the total number of pages of results. You can
create links that enable the customer to go directly to any page.

· Fast paging is specified by setting the fastPaging property of the search request to
true. In this mode, the search engine does not return information about the total
number of pages of results; pagesAvailable is set to the highest-numbered page
that has been rendered so far. You can enable customers to go to the next page or to
any page previously rendered.

For a single-partition index, normal paging is always enabled (the fastPaging property is ignored). For a
multi-partition index, you can choose between normal paging and fast paging, but fast paging is
recommended. Fast paging is much less resource-intensive than normal paging. On multi-partition
indexes, normal paging can be very memory- and CPU-intensive, because results from the partitions must
be merged.

Example of Normal Paging


The following example renders a list of the page numbers of all pages of results. Each page number is a
link to the corresponding results page, except for the current page number, which is displayed without a
link. For example, if pagesAvailable is 43 and the current page is page 10, this code will render the
integers from 1 to 43, and all of these except 10 will be links. If the user clicks 22, for example, a request to
display page 22 will be issued.


<!-- Indicate that the request should be saved in the search 
     session so that initial request data, such as the question 
     text, is available to subsequent paged requests. --> 
<dsp:input bean="QueryFormHandler.searchRequest.saveRequest" 
           value="true" type="hidden"/> 
<!-- Display page numbers with links to take user to specified 
     page --> 
Go to Page: 
<c:forEach var="page" begin="1" end="${formHandler.pagesAvailable}"> 
  <c:choose> 
    <c:when test="${page == (1+formHandler.searchResponse.pageNum)}"> 
      ${page} <!-- The current page, don't display a link --> 
    </c:when> 
    <c:otherwise> 
      <dsp:a href="normal-paging.jsp"> 
        ${page} 
        <dsp:property bean="QueryFormHandler.searchRequest.requestChainToken" 
                      value="${formHandler.searchResponse.requestChainToken}"/> 
        <dsp:property bean="QueryFormHandler.searchRequest.saveRequest" 
                      value="true"/> 
        <dsp:property bean="QueryFormHandler.goToPage" value="${page}"/> 
      </dsp:a> 
    </c:otherwise> 
  </c:choose> 
</c:forEach> 


Example of Fast Paging 



The following example renders a list of the page numbers of the results pages that have been rendered so
far. Each page number is a link to the corresponding results page, except for the current page number,
which is displayed without a link. In addition, the example renders the word “more” as a link to the page
following the current one.

<!-- Turn on fast paging --> 
<dsp:input type="hidden" value="true" 
           bean="QueryFormHandler.searchRequest.fastPaging"/> 
<!-- Indicate that the request should be saved in the search 
     session so that initial request data, such as the question 
     text, is available to subsequent paged requests. --> 
<dsp:input bean="QueryFormHandler.searchRequest.saveRequest" 
           value="true" type="hidden"/> 
<!-- Shortcut to the response object, which may be null --> 
<c:set var="response" value="${formHandler.searchResponse}"/>


<!-- Display page numbers with links to take user to specified 
     page --> 
<c:if test="${response != null}"> 
  on page: ${1+response.pageNum}<br/> 
  Go to Page: 
  <c:forEach var="page" begin="1" end="${1+formHandler.pagesAvailable}"> 
    <c:choose> 
      <c:when test="${page == (1+response.pageNum)}"> 
        ${page} <!-- current page --> 
      </c:when> 
      <c:otherwise> 
        <dsp:a href="fast-paging.jsp"> 
          <c:choose> 
            <c:when test="${page == (1+formHandler.pagesAvailable) && 
                            response.multiPartitionSearch && 
                            formHandler.searchRequest.fastPaging}"> 
              more 
            </c:when> 
            <c:otherwise> 
              ${page} 
            </c:otherwise> 
          </c:choose> 
          <dsp:property bean="QueryFormHandler.searchRequest.requestChainToken" 
                        value="${formHandler.searchResponse.requestChainToken}"/> 
          <dsp:property bean="QueryFormHandler.searchRequest.saveRequest" 
                        value="true"/> 
          <dsp:property bean="QueryFormHandler.goToPage" value="${page}"/> 
        </dsp:a> 
      </c:otherwise> 
    </c:choose> 
  </c:forEach> 
</c:if>


Modifying and Resubmitting the Request



Since subsequent requests differ only in the requested page of results, it is most efficient just to retrieve
the most recent search request, change the value of the goToPage property, and resubmit the request.
There are two ways to do this:

· Modify properties on the form, and resubmit it. This avoids the memory use required
to save the request in the SearchSession. The downside is that resubmitting the
form is difficult if you are creating your links through anchor tags. In that case, it is
generally easiest to write a JavaScript function that makes the necessary changes and
submits the form.

· Save the request in the SearchSession. This allows you to retrieve the request,
modify it, and reissue it; no JavaScript is necessary. The downside is that this approach can
use a lot of memory, especially if there are many users at your site issuing search
queries.

Note that resubmitting a modified request is useful for faceted search as well as for paging.


Example of Resubmitting the Form


If you do not want to save the request in the SearchSession, you will need to resubmit the form. Create
a JavaScript function like this:

function nextPage(pageNum, requestChainToken) { 
  document.searchForm.requestChainToken.value = requestChainToken; 
  document.searchForm.goToPage.value = pageNum; 
  document.searchForm.submit(); 
  return false; 
} 


You can then invoke the function when the user clicks on a link for a specific page:

<a href="#" onclick="return nextPage('<%=pageValue.toString()%>', 
 '${formHandler.searchResponse.requestChainToken}');"> 
  <dsp:valueof param="count"/> 
</a> 

When the link is clicked, the page number associated with the link and the requestChainToken of the
current search response are passed to the function. The function uses these values to set the goToPage
property and the requestChainToken property of the form, which it then submits. In addition to
specifying the results page to display, this ensures that the same requestChainToken value is associated
with each subsequent search request.

Example of Saving the Request in the SearchSession


If you save the request in the SearchSession, you can avoid the use of JavaScript. Instead, when a user
clicks on a link for a page, you set the necessary properties (including the saveRequest property) on the
saved request through dsp:property tags, and then resubmit the request:

<dsp:a href="queryExampleFastSave.jsp#Paging"> 
  <dsp:valueof param="count"/> 
  <dsp:property bean="QueryFormHandler.goToPage" paramvalue="count" 
    name="fh_gtp" priority="29"/> 
  <dsp:property bean="QueryFormHandler.searchRequest.saveRequest" 
    value="true" name="fh_sr" priority="30"/> 
  <dsp:property 
    bean="QueryFormHandler.searchRequest.requestChainToken" 
value="${formHandler.searchResponse.requestChainToken}" 
    name="fh_rct" priority="30"/> 
</dsp:a> 


Sunday, March 25, 2012

ATG Cache Flushing


You can flush (invalidate) the caches for an item descriptor or an entire SQL repository, using the following methods. Note that first, you must cast your atg.repository.RepositoryItemDescriptor to an atg.repository.ItemDescriptorImpl. If you are using distributed cache mode, use the Cache Invalidator, as described in the Cache Invalidation Service section below.

The methods in atg.repository.ItemDescriptorImpl are:

removeItemFromCache(id)
invalidateCaches()
invalidateItemCache()

These methods also have versions that accept a boolean parameter indicating whether the cache should be invalidated globally or just in the local cache. These methods are:

removeItemFromCache(id, boolean pGlobal)
invalidateCaches(boolean pGlobal)
invalidateItemCache(boolean pGlobal)

If this global parameter is true, the invalidation occurs across the cluster. Otherwise, the invalidation occurs only in the local ATG instance.


The removeItemFromCache method, when given a true value, will use one of two mechanisms to distribute the invalidation event:

1. If the item descriptor uses distributed cache mode, it uses the event server to send the invalidation event.

2. Otherwise, it uses the GSAInvalidatorService to send the event.

The invalidateCaches and invalidateItemCache methods, when given true for the global parameter, will always use the GSAInvalidatorService. If this service is not enabled, a warning is logged and the cache is only invalidated locally.
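
For example, here is a minimal sketch of flushing a single item across the cluster with the global flag, using the removeItemFromCache method listed above. The class name, the "user" item descriptor name, and the item ID parameter are hypothetical, and exception handling is kept to a throws clause:

import atg.repository.ItemDescriptorImpl;
import atg.repository.RepositoryException;
import atg.repository.RepositoryImpl;

public class CacheFlushExample {

  // Flushes one item from the "user" item descriptor's caches on every
  // ATG instance in the cluster; the boolean argument requests a global
  // invalidation, as described above.
  public static void flushUserItem(RepositoryImpl pRepository, String pItemId)
      throws RepositoryException {
    ItemDescriptorImpl desc =
        (ItemDescriptorImpl) pRepository.getItemDescriptor("user");
    desc.removeItemFromCache(pItemId, true);
  }
}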

This method in atg.repository.RepositoryImpl affects all caches in the repository:


invalidateCaches()
Invalidates all caches in this repository.



You can cast your repository to these classes and call these methods directly. With these methods you can flush the items of a specific type, the items and queries of a specific type, or a single specific item.

For example, here is how you might use the invalidateItemCache() method to invalidate the item caches for every item descriptor in a repository:

RepositoryImpl rep = getRepository();
String[] descriptorNames = rep.getItemDescriptorNames();
// iterate over all the item descriptors and invalidate each item cache
// (getItemDescriptor can throw RepositoryException, which is not handled here)
for (int i = 0; i < descriptorNames.length; i++) {
    String name = descriptorNames[i];
    ItemDescriptorImpl d = (ItemDescriptorImpl) rep.getItemDescriptor(name);
    d.invalidateItemCache();
}

Saturday, March 24, 2012

ATG ServerLockManager


You should configure one or more ATG servers to start the /atg/dynamo/service/ServerLockManager on application startup. To do this, add the ServerLockManager to the initialServices property of /atg/dynamo/service/Initial in the server-specific configuration layer for the server in which you’ve chosen to run a ServerLockManager. For example, if you wanted to run the ServerLockManager in an ATG server instance named derrida, you could add this properties file at

<ATG2007.3dir>/home/servers/derrida/localconfig/atg/dynamo/service/Initial.properties:

#server/derrida
#/localconfig/atg/dynamo/service/Initial.properties:


initialServices+=ServerLockManager

ServerLockManager Failover

You can configure more than one ServerLockManager. One ServerLockManager acts as the primary lock server while the other acts as a backup. If the primary ServerLockManager fails, the backup ServerLockManager takes over and clients begin to send lock requests to it. If both ServerLockManagers fail, caching is simply disabled. Under that condition, the site still functions, but more slowly, since it must access the database more frequently rather than using the cache. The cache mode also switches to disabled for all transactions that are unable to obtain the lock. Once a ServerLockManager is restored, caching resumes.


For example, if you have two ServerLockManager components named tartini and corelli, each running on port 9010, they could be configured like this:

# tartini:9010
$class=atg.service.lockmanager.ServerLockManager
handlerCount=0
port=9010
otherLockServerAddress=corelli
otherLockServerPort=9010
otherServerPollInterval=2000
waitTimeBeforeSwitchingFromBackup=10000
# corelli:9010
$class=atg.service.lockmanager.ServerLockManager
handlerCount=0
port=9010
otherLockServerAddress=tartini
otherLockServerPort=9010
otherServerPollInterval=2000
waitTimeBeforeSwitchingFromBackup=10000



It is best if the primary ServerLockManager runs in an ATG instance that does not also handle user sessions by running a DrpServer. Not only does this prevent the load on the ServerLockManager from affecting user sessions, but it also lets you stop and restart the DrpServer without restarting the ServerLockManager. If you find that there is enough lock contention on your site that the lock server itself becomes a bottleneck, you might choose to create separate lock servers for different repositories to distribute the load. Note that in this configuration the lock servers cannot detect deadlocks that span lock servers, and you will need a separate ClientLockManager instance in each ATG instance to refer to each ServerLockManager.

ATG ClientLockManager


For each SQL repository that contains any item descriptors with cache-mode="locked", you must set the lockManager property of the Repository component to refer to a ClientLockManager. ATG comes configured with a default client lock manager, which you can use for most purposes:

lockManager=/atg/dynamo/service/ClientLockManager

When you first install the ATG platform, the ClientLockManager component has its useLockServer property set to false, which disables use of the lock server. In order to use locked mode repository caching, you must set this property to true. This setting is included in the ATG platform liveconfig configuration layer, so you can set the useLockServer property by adding the liveconfig configuration layer to the environment for all your ATG servers. You must also set the lockServerAddress and lockServerPort properties to match the host and port of your ServerLockManager components. For example, suppose you have two ServerLockManagers, one running on host tartini and port 9010 and the other running on host corelli and port 9010. You would configure the ClientLockManager like this:


$class=atg.service.lockmanager.ClientLockManager
lockServerAddress=tartini,corelli
lockServerPort=9010,9010
useLockServer=true

Wednesday, March 21, 2012

Defining and Detecting Abandoned Orders : ATG Commerce

Defining Abandoned and Lost Orders

By default you can define what constitutes an abandoned and lost order using the following criteria:

number of idle days

minimum amount (optional)

You set these criteria for abandoned and lost orders in the following properties of the /atg/commerce/order/abandoned/AbandonedOrderService component:

idleDaysUntilAbandoned

idleDaysUntilLost

minimumAmount

For the default values of these properties see Configuring AbandonedOrderService later in this chapter. Note that an amount specified in the AbandonedOrderService.minimumAmount property is used as a criterion when detecting both abandoned and lost orders.

You may want to define different types of abandoned or lost orders. For example, you may want to differentiate between high-priced and low-priced abandoned orders in order to respond differently to each type. For information on this type of customization, see Customizations and Extensions in this chapter.



Detecting Abandoned and Lost Orders

The /atg/commerce/order/abandoned/AbandonedOrderService not only defines what constitutes an abandoned or lost order, but also queries the order repository for these types of orders according to the schedule that you specify in its schedule property. The default schedule is “every day at 3:00 AM.”

When an AbandonedOrderService job is run, the service queries the order repository for both abandoned and lost orders. Orders must meet the following criteria to be identified as abandoned or lost:

Criteria for identification as “Abandoned”:

The order’s state matches one in the AbandonedOrderTools.abandonableOrderStates property.

The order has no associated abandonmentInfo item, or its abandonment state is REANIMATED. In other words, the order is either newly abandoned or re-abandoned.

The order has been idle for the number of days specified in the AbandonedOrderService.idleDaysUntilAbandoned property.

The order’s subtotal is greater than or equal to the amount specified in the AbandonedOrderService.minimumAmount property, if set.

Criteria for identification as “Lost”:

The order’s state matches one in the AbandonedOrderTools.abandonableOrderStates property.

The order has no associated abandonmentInfo item, or its abandonment state is not LOST. In other words, the order is newly lost.

The order has been idle for the number of days specified in the AbandonedOrderService.idleDaysUntilLost property.

The order’s subtotal is greater than or equal to the amount specified in the AbandonedOrderService.minimumAmount property, if set.






See Configuring AbandonedOrderTools and Configuring AbandonedOrderService for information on setting the properties referenced above.

For each abandoned order found, the AbandonedOrderService does the following:

Adds the order to the list of abandoned orders in the user’s abandonedOrders profile property.

If necessary, creates an abandonmentInfo item for the order; then updates the item with the relevant information:

The state property is set to ABANDONED.

The abandonmentDate property is set to the current date and time.

If the abandonmentInfo item is new, the abandonmentCount property is set to 1. Otherwise, it is incremented.

Fires an OrderAbandoned message if the AbandonedOrderTools.sendOrderAbandonedMessage property is set to true.

For each lost order found, the AbandonedOrderService does the following:

Removes the order from the list of abandoned orders in the user’s abandonedOrders profile property.

If the AbandonedOrderTools.deleteLostOrders property is set to true, the lost order is deleted from the order repository.

If the AbandonedOrderTools.leaveAbandonmentInfoForDeletedOrders property is set to true, the abandonmentInfo item for the order is updated with the relevant information:

The state property is set to LOST.

The lostDate property is set to the current date and time.

Fires an OrderLost message if the AbandonedOrderTools.sendOrderLostMessage property is set to true.

As previously mentioned, the AbandonedOrderService is a configured instance of class atg.commerce.order.abandoned.AbandonedOrderService. This class extends atg.service.scheduler.SingletonSchedulableService, which uses locking to enable multiple servers to run the same scheduled service while ensuring that only one instance performs the scheduled task at a given time.
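
For illustration, a service built on the same base class might look like the following sketch. The class name is hypothetical, and it assumes the standard doScheduledTask callback that SingletonSchedulableService invokes only on the instance that obtains the lock:

import atg.service.scheduler.ScheduledJob;
import atg.service.scheduler.Scheduler;
import atg.service.scheduler.SingletonSchedulableService;

public class OrderSweepService extends SingletonSchedulableService {

  // Runs only on the one cluster member that acquires the shared lock,
  // so the scheduled sweep is not duplicated across servers.
  public void doScheduledTask(Scheduler pScheduler, ScheduledJob pJob) {
    if (isLoggingInfo())
      logInfo("Starting scheduled order sweep");
    // ... query the order repository and update abandonment state here ...
  }
}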

An Overview of Abandoned Orders : ATG Commerce

Examine the following process flow diagram, which illustrates the various paths an order can take once created by a customer.

As mentioned in the introduction to this chapter, the Abandoned Order Services module contains a collection of services and tools that enable you to detect, respond to, and report on abandoned orders and related activity, that is, activity that falls within the shaded area of the diagram above. As the diagram implies, there are several general types of orders that fall within this area:

Abandoned orders – Incomplete orders that have not been checked out by customers and instead have remained idle for a duration of time.

Reanimated orders – Previously abandoned orders that have since been modified by the customer in some way, such as adding items or changing item quantities.

Converted orders – Previously abandoned orders that have been successfully checked out by the customer.

Lost orders – Abandoned orders that have been abandoned for so long that reanimation of the order is no longer considered realistic.

Note in the diagram that the process flow is not always linear. For example, an order can be abandoned, then reanimated, then abandoned again.

The subsections that follow describe the various abandonment states, repository extensions, and repositories that are required to support these orders and the tracking of related order abandonment activity:

Abandonment States

Order Repository Extensions

Profile Repository Extensions

The ConvertedOrderRepository

Saturday, March 17, 2012

OptOutFormHandler : ATG


Use the /atg/campaign/servlet/OptOutFormHandler to give users a global opt-out option for e-mail communications. This form handler sets a profile’s receiveEmail property to true (yes) or false (no), and sends an opt-out message that is used for reporting.

For detailed information on using form handlers in JSP pages, refer to the ATG Page Developer’s Guide.


OptOutFormHandler Example Test Page:

<%@ taglib uri="http://www.atg.com/taglibs/daf/dspjspTaglib1_0" prefix="dsp" %>
<%@ taglib uri="http://www.atg.com/taglibs/daf/dspjspELTaglib1_0"
    prefix="dspel" %>
<%@ page import="atg.servlet.*"%>


<dsp:page>
<dsp:importbean bean="/atg/campaign/servlet/OptOutFormHandler"/>
<dsp:importbean bean="/atg/userprofiling/Profile"/>
<html>
<head>
  <title>OptOutFormHandler test page</title>
</head>


<body>
<h3>OptOutFormHandler test page</h3>
<dsp:form action="OptOutFormHandler_test_page.jsp" method="post">
<p>Current profile: <dsp:valueof bean="Profile.firstName"/>
<dsp:valueof bean="Profile.lastName"/> (<code>receiveEmail</code> property
currently set to <strong><dsp:valueof
    bean="OptOutFormHandler.receiveEmail"/></strong>)
<p><dsp:input bean="OptOutFormHandler.receiveEmail"
    type="checkbox"/> Yes, send me e-mail!
<br>
<br>
<dsp:input bean="OptOutFormHandler.submit" type="Submit" value="Submit"/>
</dsp:form>
</body>


</html>
</dsp:page>

ATG Scheduling Data Loading


The loading of data for reports occurs on a set schedule, which is determined through each /atg/reporting/datacollection/campaign/*loader component on the data loading server. These components have a scheduler property that points to an instance of /atg/dynamo/service/Scheduler. They also have a runSchedule property that specifies when the loading should start. If you want to change the default schedule, edit the appropriate settings in the loader component.

The following example shows the relevant properties for the CampaignLoader component:


scheduler=/atg/dynamo/service/Scheduler
runSchedule=calendar * * * 21 0
stopSchedule=calendar * * * 6 0


In this case, the runSchedule setting indicates that loading should start at 9 PM every night. The loader checks repeatedly for new work to process until the stopSchedule time is reached (here, 6 AM). For more information on the values in the runSchedule and stopSchedule properties, refer to Configuring a Schedulable Component in the ATG Programming Guide.

Be aware that the loading of campaign data needs to happen before any order- or e-mail-related data is loaded so that the order and e-mail data can be attributed to the correct campaign. By default, the loader components other than CampaignLoader are scheduled to start loading data at 10 PM (one hour after the campaign data is loaded). If you change the schedule on which these activities occur, make sure you maintain this distinction.

Note also that the production database is referenced during the data loading process. For this reason, you should avoid scheduling loading activities to occur during your Web site’s busiest time of day.

Thursday, March 15, 2012

BCC Snapshot Mismatches - Another Type of Issue : ATG BCC

In the BCC Admin Console (open BCC home > Admin Console > MySiteName), you can see the current state of deployments to that site.  A snapshot mismatch is easy to spot – the deployment is halted and the following error message is displayed:

“Cannot perform a incremental deployment on target,  Check for snapshots mismatched on agents.”
This occurs when something has interrupted the publishing process and the DeploymentTarget (on the target we’re trying to publish to) has failed to update its latest snapshot id.  The snapshot id allows the DeploymentTarget to know what version of content it has received, which in turn means the publishing process can verify whether a project has already been published. Each snapshot id is mapped to a target and a project in the BCC server’s database schema.

So enough background, time to follow the steps to resolve the problem!  Firstly, we need that snapshot id and ideally, the target id so we can confirm it’s the right one.  A wise man once showed me an effective cheat to get this simply and quickly.  The links to previous projects in the site admin screen we opened contain the ids of our projects.  Therefore, copy the link for the previously deployed project and search through the params until you spot the id of the project as below:

http://<your_host>:<bcc_port>/atg/bcc/process?paf_portalId=default&paf_communityId=100001&paf_pageId=100004&paf_dm=shared&paf_gear_id=1000006&paf_gm=content&projectView=1&project=prjXXXXX&processPortlet=200002

Now, open your DB and connect to the BCC schema and execute the following statement, inserting the project id at the appropriate spot:

select * from EPUB_PRJ_TG_SNSHT where PROJECT_ID = '<id>';

You can now open the BCC dyn/admin page for the DeploymentServer component, located at the following path:

http://<your_host>:<bcc_port>/dyn/admin/nucleus/atg/epub/DeploymentServer

You should see one or more DeploymentTargets with various properties indicating their status. Below each one is an input field where you can type a snapshot id and click the “Init” button to force the snapshot.  Find the target that has a null in the snapshot field and use the form to input your snapshot id.  Assuming this works, you should now be able to return to the Production site page in the BCC Admin Console and Resume deployments.

Saturday, March 10, 2012

ATG CA: BCC UI customization for repository properties


We have a requirement to change the UI display of some of the fields from the default BCC display.
We have Commentary and CommentaryAuthors. Each commentary can have multiple Authors.
We have CommentaryTypes, and each commentary is of a specific commentary type (e.g., Perspective, Insight). Below are their repository definitions in the repository XML file. Also attached is an image (BCC UI Customization.jpg) showing the default display of these relations in the BCC UI.
 
<item-descriptor query-cache-size="10" item-cache-size="100">
  <property display-name="Commentary Type" name="commentaryType"
column-name="commentary_type_id" item-type="CommentaryTypes" >
    <attribute value="4"/>
  </property>
[snip]
    <table type="multi" id-column-names="commentary_id">
      <property column-names="author_id" data-type="set"
component-item-type="CommentaryAuthor" cascade="update">
        <attribute value="5"/>
      </property>
    </table>
</item-descriptor> 
<item-descriptor query-cache-size="10" item-cache-size="100">
    <table id-column-names="author_id">
      <property display-name="First Name" data-type="string"
column-name="first_name" >
        <attribute value="0"/>
      </property>
[snip]
    </table>
</item-descriptor> 
<item-descriptor query-cache-size="10" item-cache-size="100">
  <table id-column-names="commentary_type_id">
    <property column-name="commentary_type_id"/>
    <property display-name="Commentary Type" data-type="string"
column-name="commentary_type"/>
  </table>
</item-descriptor> 

We need to change the display of CommentaryAuthors so that we display two list boxes. In one list box we want to display all the available Authors, and the user should be able to choose multiple Authors from this list box and move them to the second (target) list box.
Or we can have a single multi-select list box that displays all the available Authors and lets the user select multiple Authors from this list.
For CommentaryTypes, we should be able to display all the available commentaryTypes in a dropdown option/select list instead of the default BCC display shown in the attached screenshot.
Is there any straightforward way of doing these UI customizations of item display in the BCC? I know we can customize this using ViewMappings in the ACC, but there are a lot of propertyViews available in the ACC View Mapping repository and there is no proper documentation we can find for them. Attached are some propertyViews (ACC Property Types.jpg) that we can see in the ACC, but there is no proper documentation available to know which property type should be used for a specific UI element.

ACC Property Types.jpg



BCC UI Customization.jpg

Friday, March 9, 2012

Resolving BCC SMI : ATG BCC

SMI : Snapshot Mismatch Issue

Anyone who has worked with the BCC has experienced the dreaded "snapshot mismatch" error when deploying a project or performing a full deployment in the BCC.
If you are facing a snapshot mismatch issue while deploying assets to Staging or Production, run the below query against the CA schema:

select epub_target.display_name as "Target",
epub_prj_tg_snsht.snapshot_id as "Snapshot",
epub_project.display_name as "Project" from
epub_target, epub_project, epub_prj_tg_snsht
where
epub_prj_tg_snsht.project_id in
(
select project_id from epub_project where
workspace is not null and
checked_in = '1'
) and
epub_target.target_id = epub_prj_tg_snsht.target_id and
epub_prj_tg_snsht.project_id = epub_project.project_id
order by epub_project.checkin_date desc;

You will get the latest snapshot ID for Staging and Production.

In the CA server, go to the Component Browser and browse to the path:

http://hostname:port/dyn/admin/nucleus/atg/epub/DeploymentServer

Enter the snapshot ID in the Force snapshot ID field and click the “init” button.
Do the same for the Staging and Production environments.
This will resolve the issue.

Wednesday, March 7, 2012

Composite Repositories : ATG


All ATG repositories provide a means for representing information in a data store as Java objects. The composite repository lets you use more than one data store as the source for a single repository. The composite repository consolidates all data sources in a single data model, making the data model flexible enough to support the addition of new data sources. Additionally, the composite repository allows all properties in each composite repository item to be queryable. Thus, from the point of view of your ATG application, the composite repository presents a consistent view of your data, regardless of which underlying data store the data may reside in.

The composite repository is a repository that unifies multiple data sources. Its purpose is to make any number of repositories appear in an ATG application as a single repository. The composite repository defines a mapping between item descriptors and properties as they appear to facilities that use the composite repository and item descriptors and properties of the data models that comprise the composite data model. A composite repository is composed of any number of composite item descriptors. Each item descriptor can draw on different data models from different repositories, and map underlying data model attributes in different ways.


Use Example

Suppose you maintain profile data both in an SQL database and an LDAP directory. ATG’s profile repository ships with a user composite item descriptor comprised of just one primary item descriptor and no contributing item descriptors. The primary item descriptor is the user item descriptor. You can add to the composite item descriptor the user item descriptor from the LDAP repository as a contributing item descriptor. If there are any property name collisions between the SQL repository and the LDAP repository, you can resolve them by mapping the properties explicitly to different names in the composite repository configuration. After you’ve done this, your ATG applications can view both LDAP profile information and SQL database profile information as properties of composite items in the composite user item descriptor.

Primary and Contributing Item Descriptors

Each composite item descriptor is composed of any number of contributing item descriptors. One of these contributing item descriptors must be designated as the primary item descriptor. The primary item descriptor’s main purpose is to provide the ID space for the composite item descriptor. The composite item descriptor can incorporate any number of contributing item descriptors, which contribute properties to the composite repository items.

Each contributing item has one or more relationships to the primary item. These relationships are defined in the contributing item descriptor. Each relationship defines a unique ID attribute in the primary item descriptor, as well as a unique ID attribute in the contributing item descriptor. The attribute can be the repository item ID or a unique property. A contributing item is linked to a primary item if the value of its unique ID attribute matches the value of the primary item’s unique ID attribute. If multiple relationships are defined, they are AND’d together.

For example, suppose you have a contributing item descriptor that defines two relationships to the primary item descriptor. One says that a primary item’s firstName property must match the contributing item’s userFirstName property, and the other says that the primary item’s lastName property must match the contributing item’s userLastName property. These two relationships together mean that a user’s first name and last name must both match for two items to be related. This is useful in situations where no one property uniquely identifies a user. See link-via-property for an example of defining a relationship with two or more properties.

Item Inheritance and Composite Repositories

A composite repository can handle item descriptor inheritance only for its primary item descriptors. For example, suppose you have a user composite item descriptor. Its primary item descriptor is named person and is part of an LDAP repository. The contributing item descriptor is named user and is part of an SQL repository. The user item descriptor has a subtype named broker. The composite items have access to the properties of the person item descriptor and the user item descriptor, but not to properties that exist only in the broker item descriptor.

Transient Properties and Composite Repositories

An LDAP repository does not support transient properties. Therefore, if you want to use transient properties in your composite item descriptor, the transient properties must be derived from an SQL repository or other repository that does support transient properties.

Non-Serializable Items and Composite Repositories

An LDAP repository item is not serializable. Therefore, if you have a property that derives from an LDAP repository item, you should mark the property as not serializable by setting the serialize attribute to false:

<property name="propName" >
   ...
    <attribute name="serialize" value="false"/>
   ...
</property>

Property Derivation

The properties in a composite item descriptor are determined as follows:

1. If configured to do so, all properties from the primary and contributing item descriptors are combined into the composite item descriptor, with each property retaining its property name and property type.

2. Any properties marked as excluded are removed from the composite item descriptor. See Excluding Properties.

3. All property mappings are performed. This means that a primary or contributing property that is to be mapped gets renamed in the composite item descriptor. See Property Mappings.

4. If there are any two properties in the composite item descriptor that have the same name, an error results. The composite repository requires that all composite property names map explicitly to only one primary or contributing property.

Configuring a Composite Repository

1. Design the composite repository. Pick which item types you want to represent in your composite repository’s composite item descriptors.

2. Specify the primary item descriptor. This is where the composite repository item’s repository item IDs come from.

3. Specify any contributing item descriptors you need to supplement the primary item descriptor.

4. Resolve any property name collisions between properties in the primary item descriptor and the contributing item descriptors. See Property Mappings.

5. Determine whether you want to use static or dynamic linking for properties whose types are repository items. See Link Methods.

6. Determine what item creation policy you want the composite repository to implement. See Creating Composite and Contributing Items.

7. Determine whether there are any properties in your primary or contributing item descriptors that you want to exclude from the composite item descriptor. See Excluding Properties.

8. Create and configure a CompositeRepository component. See Configuring the Composite Repository Component.

Property Mappings

The composite repository requires that all composite property names map explicitly to only one primary or contributing property. If primary or contributing item descriptors contain one or more properties with the same name, you must exclude one of the properties (see Excluding Properties) or map it to another name.

You can map a property with the mapped-property-name attribute in an item descriptor’s property tag. For example, given two contributing item descriptors, where each has a login property, you can map one of the properties to a different name like this:

<property name="ldapLogin" ... mapped-property-name="login"/>

In this example, the name attribute specifies the property name in the composite item descriptor and the mapped-property-name attribute specifies the name of the property in the primary or contributing item descriptor to which this property maps.

Excluding Properties

Sometimes you may not want to expose absolutely every property from the underlying primary and contributing item descriptors in the composite item descriptor. You can configure the item descriptor to exclude those contributing properties that are not desired. You do this by setting a property tag’s exclude attribute to true:

<property name="password ... exclude="true"/>

Link Methods

The link-method attribute determines what happens when the composite repository needs to get a property value that belongs to a contributing repository item. For example, a process might call:

CompositeItem.getPropertyValue("ldapFirstName");

where ldapFirstName is a property of a contributing repository item in an LDAP repository. The CompositeItem that is being asked for the property needs to look for this contributing item. If it can find it, it retrieves the property value and acts according to the value of the link-method attribute: static or dynamic.
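
In application code, such a lookup usually goes through the standard repository API. Here is a minimal sketch, assuming the compositeUser item descriptor and ldapFirstName property from the sample definition later in this post; the class name and the profile ID parameter are hypothetical:

import atg.repository.Repository;
import atg.repository.RepositoryException;
import atg.repository.RepositoryItem;

public class CompositeLookupExample {

  // Reads a property contributed by the LDAP repository through the
  // composite item, exactly as if it were a native property.
  public static String getLdapFirstName(Repository pCompositeRepository,
                                         String pProfileId)
      throws RepositoryException {
    RepositoryItem user =
        pCompositeRepository.getItem(pProfileId, "compositeUser");
    return (String) user.getPropertyValue("ldapFirstName");
  }
}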

Static link method

If link-method is set to static, the contributing item is stored in a member variable of that composite repository item. The next time a property is requested from that same item, it retrieves it from this variable instead of finding it again from the underlying contributing repository. This saves some computational effort and results in faster property retrieval.

If the value of the property or properties used to link to the underlying contributing item changes, the data in the member variable is stale. This occurs only if a linking property in the underlying data store changes. For example, if you link to a contributing item descriptor using a login property, static linking can result in stale data only if the login property changes in an underlying repository.

Dynamic link method

If link-method attribute is set to dynamic, the composite repository queries the underlying repository for the contributing item every time a property is requested from it. This might result in slower performance, but it also means that data is never out of sync at the repository level.

Methods compared

Dynamic link mode might seem like the most technically correct implementation, because the data model is guaranteed to reflect the latest information. However, because dynamic link mode requires a query each time information is needed from a composite item, it can impair performance. The information that links items rarely changes, so static linking generally provides correct data model linking.



composite-repository-template

The composite-repository-template tag encloses the whole composite repository definition. It contains a <header> tag and one or more <item-descriptor> tags:

header (Composite Repository)

item-descriptor (Composite Repository)

Example

<composite-repository-template>
  <header>
...
  </header>
  <item-descriptor name="...">
...
  </item-descriptor>
</composite-repository-template>

header (Composite Repository)

The <header> tag provides information that can help you manage the creation and modification of repository definition files.

For example, the header of your template might look like this:

<header>
  <name>Catalog Template</name>
  <author>Neal Stephenson</author>
  <author>Emily Dickinson</author>
  <version>$Id: catalog.xml,v 1.10 2000/12/24 03:34:26 hm Exp $</version>
  <description>Template for the store catalog</description>
</header>


item-descriptor (Composite Repository)

The item-descriptor tag defines a composite item descriptor.


<item-descriptor name="compositeUser" default="true"
         display-property="fooProperty"
         display-name-resource="itemDescriptorUser">
   <attribute name="resourceBundle"
              value="atg.userprofiling.CompositeProfileTemplateResources"
              data-type="string"/>
   <primary-item-descriptor.../>
   <contributing-item-descriptor.../>
...
</item-descriptor>

1. A primary-item-descriptor tag can enclose one or more property tags.
2. The contributing-item-descriptor tag has the same attributes as the primary-item-descriptor tag. See primary-item-descriptor and contributing-item-descriptor attributes.



Sample Composite Repository Definition File

<?xml version="1.0" encoding="ISO-8859-1" ?>

<!DOCTYPE composite-repository-template
     PUBLIC "-//Art Technology Group, Inc.//DTD Composite Repository//EN"
     'http://www.atg.com/dtds/composite-repository/composite-repository_1.0.dtd'>


<!-- composite repository definition -->
<composite-repository-template>


  <!-- Header similar to GSA DTD -->
  <header>
    <!-- name of this document -->
    <name>A sample Composite Repository template</name>
    <!-- author of this document -->
    <author>Graham Mather</author>
    <!-- version of this document -->
    <version>$Change: 226591 $$DateTime: 2002/01/22 15:50:56 $$Author: gm $
    </version>
  </header>

  <!-- composite item descriptor definition -->
  <!-- name: name of the composite item descriptor -->
  <!-- default: is this the composite repository's default item descriptor? -->
  <!-- display-property: the property used when displaying items of this type -->
  <!-- display-name-resource: resource which defines the display name -->
  <item-descriptor name="compositeUser" default="true"
    display-property="fooProperty"
    display-name-resource="itemDescriptorUser">


    <!-- resource bundle from whence this item descriptor's resources come -->
    <attribute name="resourceBundle"
               value="atg.userprofiling.CompositeProfileTemplateResources"
               data-type="string"/>
    <!-- icon for items of this type -->
    <attribute name="icon" value="userIcon" data-type="string"/>
    <!-- "basics" category sort priority -->
    <attribute name="categoryBasicsPriority" value="10" data-type="int"/>


    <!-- primary view definition -->
      <!-- name: the name of the primary view, as it appears internally to the
      composite repository.  The primary view and all composite views must have
      unique internal view names -->
      <!-- repository-nucleus-name: the nucleus path of the repository in which
      the primary view resides -->
      <!-- repository-item-descriptor-name: the name of the view in the given
      repository which acts as the primary item descriptor for this composite item
      descriptor -->
      <!-- all-properties-propagate: if true, composite repository attempts to
      make all properties in the primary item descriptor available in the
      composite item descriptor.  Default is false  -->
      <!-- all-properties-queryable: if true, all properties in the view are
      queryable unless otherwise specified.  If false, all properties are not
      queryable unless otherwise specified. default is true -->
    <primary-item-descriptor name="user"
      repository-nucleus-name="/atg/userprofiling/ProfileAdapterRepository"
      repository-item-descriptor-name="user"
      all-properties-propagate="true"
      all-properties-queryable="true">


      <!--
      Can also contain explicit property mappings and explicit property exclusions
      -->


      <property mapped-property-name="lastName" exclude="true"/>
      <property mapped-property-name="email" exclude="true"/>


    </primary-item-descriptor>


    <!-- contributing view definition -->
    <!-- name: the name of this contributing view, as it appears to the composite
    repository -->
    <!-- repository-nucleus-name: the nucleus path of the repository in which the
    primary view resides -->
    <!-- repository-item-descriptor-name: the name of the view in the given
    repository which acts as the primary item descriptor for this composite item
    descriptor -->
    <!-- all-properties-propagate: if true, composite repository attempts to make
    all properties in the primary item descriptor available in the composite item
    descriptor.  Default is false  -->
    <!-- all-properties-queryable: if true, all properties in the view are
    queryable unless otherwise specified.  If false, all properties are not
    queryable unless otherwise specified. default is true -->


    <contributing-item-descriptor name="UserProfile-LDAP"
      repository-nucleus-name="/atg/adapter/ldap/LDAPRepository"
      repository-item-descriptor-name="user"
      all-properties-propagate="true"
      all-properties-queryable="true">




    <!-- explicit property mapping
    sometimes it's advantageous to explicitly map a property in a composite view
    to a particular property in either the primary or a contributing view.
    For example, perhaps two contributing views have properties with the same
    name. This gets around the "no contributing views with same property names"
    rule.
    -->


    <!-- name: name of this composite property -->
    <!-- mapped-property-name: the property to which this property maps --> 
    <!-- queryable: property queryable flag -->
    <!-- required:  property required flag-->
    <!-- expert: property expert flag -->
    <!-- hidden: property hidden flag -->
    <!-- readable: property readable flag -->
    <!-- writable: property writable flag -->
    <!-- category-resource: resource for category name -->
    <!-- display-name-resource: resource for display name -->
    <property name="ldapFirstName" mapped-property-name="firstName"
    queryable="false" required="false" expert="false"
    hidden="false" readable="true" writable="true"
    category-resource="categoryBasics"
    display-name-resource="ldapFirstName">


      <!-- bundle for this property's resources -->
      <attribute name="resourceBundle"
      value="atg.userprofiling.CompositeProfileTemplateResources"
      data-type="string"/>
      <!-- flag for ui being able to write this property -->
      <attribute name="uiwritable" value="true" data-type="boolean"/>
      <!-- maximum length for this property -->
      <attribute name="maxLength" value="32" data-type="int"/>
      <!-- does this property's value have to be unique? -->
      <attribute name="unique" value="true" data-type="boolean"/>
      <!-- sort priority -->
      <attribute name="propertySortPriority" value="10" data-type="int"/>


    </property>


    <!-- explicit property exclusion
    Sometimes users will not want to expose absolutely every property from
    the underlying primary and contributing views in the composite view. An
    explicit property removal allows the user to make the composite view
    contain only those contributing properties that are desired.
    -->
    <property mapped-property-name="login" exclude="true"/>
    <property mapped-property-name="password" exclude="true"/>
    <property mapped-property-name="id" exclude="true"/>


    <!--
    2) a composite view's property names are determined thusly:

       a) If all-properties-propagate is true, all properties from the primary and
       contributing views are combined into the composite view, retaining their
       property names, property types, and any metadata they may have defined.

       b) All property exclusions are performed.  This means that any properties
       to be excluded are removed from the composite view.

       c) All property mappings are performed.  This means that a primary or
       contributing property that is to be mapped gets renamed in the composite
       view.

       d) If there are any two properties in the composite view that have the same
       name, error.  The composite repository requires that all composite property
       names map explicitly to only one primary or contributing property.

      -->

      <!-- the primary view link describes how items in the contributing view are
       linked to items in the primary view.  For each primary-contributing
       relationship, the user picks a unique id attribute for the primary and the
       contributing view. The attribute can be either the repository id of the
      item or a uniquely-valued property of the item (e.g. login).  A primary item
      is linked to a contributing item if its unique id attribute value matches
      the unique id attribute value of the contributing item. There must be at
      least one primary view link, but there is no limit on the number of links.  -->

      <!-- example: this primary view link defines a relationship where an item in
      the primary view is linked to an item in this contributing view if the
      contributing item has a repository id which is the same as the primary
      item's id.
-->

<!--
      <primary-item-descriptor-link>
        <link-via-id/>
      </primary-item-descriptor-link>
-->

      <!-- OR:

      This primary view link defines a relationship where a primary view item is
      linked to an item in this contributing view if the value of the primary
      item's "login" property matches the value of the contributing item's
      "userLoginName" property.
      -->

      <primary-item-descriptor-link>
        <link-via-property primary="login" contributing="login"/>
      </primary-item-descriptor-link>

      <!-- OR:

      This primary view link defines a relationship where a primary view item is
      linked to an item in this contributing view if the value of the primary
      item's "firstName" property matches the value of the contributing item's
      "userFirstName" property AND the value of the primary item's "lastName"
       property matches the value of the contributing item's "userLastName"
       property.  This is useful in the case where no one property in the primary
       view or the contributing view is uniquely valued. The relationships are
       ANDed together

      <primary-item-descriptor-link>
        <link-via-property primary="firstName" contributing="userFirstName"/>
        <link-via-property primary="lastName" contributing="userLastName"/>
      </primary-item-descriptor-link>

      -->


    </contributing-item-descriptor>


  </item-descriptor>

</composite-repository-template>