Suspense Mapping – You Can But Should You?

I was recently working with a client who asked me if I could set up FDM to map dimension members that aren’t in HFM to a suspense member.  Before we dive into this, let’s take a moment to level set.  This particular client uses Oracle eBS as its general ledger.  The dimensionality in HFM exactly mirrors a subset of the segments that make up the general ledger corporate chart of accounts.  The client is using ERPi in conjunction with FDM.  A completely lights-out process has been designed that allows an entire grouping of eBS sets of books to be loaded at specific intervals.  The intersection validation report is disabled because the client does not want to stop the load of an entire set of books over a few errant records.

OK, so we have a general idea of the landscape.  Since the eBS Chart of Accounts is the basis for the input-level members of each HFM dimension, we are utilizing pass-through Like maps (* to *) where the eBS codes simply map to themselves in HFM.  For example, GL account 10000 in eBS equals HFM account 10000.  This mapping is used across all of the mappable dimensions – Entity, Account, ICP, and Custom1-4.

Given that background, you tell me: can this client use Suspense mapping?  I imagine one of three answers to this virtual question:

  1. No
  2. Possibly
  3. Hey wise guy, I’m reading your blog to learn more.  You tell me!

Well, there are two “right” answers: No and Possibly.  Let’s start with the easier of the two – No.  Before we get to why the answer is No, let’s talk about the Suspense mapping functionality that FDM offers.

The FDM batch loader (for lights-out data processing) includes functionality called Batch Auto Correct.  Conceptually, this feature will map any unmapped dimension member to a designated “suspense” member when the Validate workflow step fails due to unmapped items.  With interactive processing, these unmapped items are what appear in the upper grid of the Workflow screen, showing which source/GL codes still need to be mapped.

This feature must be explicitly enabled in the batch load script.

lngProcessLevel = 50
strDelimiter = "~"
blnAutoMapCorrect = True    ' enable Batch Auto Correct
Set BATCHENG.PcolFiles = BATCHENG.fFileCollectionCreate(CStr(strDelimiter))
BATCHENG.mFileCollectionProcess BATCHENG.PcolFiles, CLng(lngProcessLevel), , CBool(blnAutoMapCorrect)

Once this change is made, a few additional steps need to be completed.  A new data load location needs to be created and named AutoMapCorrect.  In each dimension an explicit map needs to be created.  The map must have a source value of AutoCorrect and the target member should be the HFM (or Essbase/Enterprise/HSF) member to which unmapped records should be loaded.

With such a map in place – say, with a Holding account as the target – any unmapped GL account would be loaded to the Holding HFM account when data is loaded using the batch loader.
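To make the mechanics concrete, here is a minimal Python sketch of the concept – this is illustrative pseudologic only, not FDM script code, and the map entries and member names are hypothetical:

```python
# Conceptual sketch of Batch Auto Correct: any source code without a
# map entry falls through to the "AutoCorrect" entry and lands in the
# designated suspense member. Entries and member names are hypothetical.

account_map = {
    "10000": "10000",         # explicit maps: source -> HFM target
    "20000": "20000",
    "AutoCorrect": "Holding"  # suspense target for unmapped codes
}

def resolve(source_code, dim_map):
    """Return the mapped target, falling back to the suspense member."""
    return dim_map.get(source_code, dim_map["AutoCorrect"])

print(resolve("10000", account_map))  # 10000
print(resolve("99999", account_map))  # Holding -- unmapped, suspended
```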

Well, now we know that the batch loader can use “Suspense” mapping.  So why is the answer No for this client?  Very simply, the use of pass-through (* to *) mappings means that there will never be an unmapped source code.  Every source code is mapped.  It may not be mapped to a valid HFM member, but it is indeed mapped.  The autocorrect functionality would never be triggered.
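A quick sketch shows why the wildcard defeats autocorrect – again, this is conceptual Python, not FDM script code:

```python
# Why pass-through (* to *) maps defeat autocorrect: the wildcard
# matches every source code, so nothing is ever "unmapped" -- even
# codes that don't exist in HFM still map (to themselves).

def resolve(source_code, explicit_map, wildcard=True):
    if source_code in explicit_map:
        return explicit_map[source_code]
    if wildcard:                 # "* to *": source maps to itself
        return source_code
    return None                  # only here would autocorrect fire

# With the wildcard in place, an errant code still "maps" -- to itself.
print(resolve("BADCODE", {}))   # BADCODE
```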

If you’ve read my blog for a while, skip the next sentence.  If you are a new visitor, the consulting answer is very rarely Yes or No.  It’s always about creative ways of addressing a need.  So in this case, how do I make the answer change from No to Possibly?

In the spirit of the consulting frame of mind, there are a couple of options.  Let’s take a look at the out-of-the-box solution first.  As outlined above, batch autocorrect (not to be confused with Apple autocorrect, which could end a perfectly good friendship) gives us the “suspense” mapping capability.  So if we wanted to use it with this client, the “fix” would be to make each of our maps Explicit instead of a pass-through Like map: GL account 10000 maps to HFM account 10000.

The main benefit of this approach is that you would never have load errors related to unmapped/invalid members.  However, invalid intersections could still exist, resulting in load errors.  Invalid intersections could result from improperly defined metadata within HFM, particularly invalid Custom Top member settings for accounts.  Likewise, errant postings in the GL could create invalid intersections – for example, a trading partner assigned to a GL record where the account is third-party.  While each of these individual segments would be mapped to a valid member in HFM, the intersection would be invalid and would therefore fail to load.  This reduces the value of the benefit.

Next, let’s examine the drawbacks associated with this Explicit mapping approach.  First and foremost, this would be a very maintenance-intensive application.  Every single segment value added to eBS would need to be mapped in FDM.  This is unnecessary overhead for the administrator with little to no value added.  I’ll expand on this more in a moment, but before I do, let’s discuss another important drawback.

Any data loaded to the “suspense” entity would need to be manually cleared from HFM.  This is because of how HFM clears data during a reload: only the entities that are part of the current data set are cleared.  Once the mapping is added to FDM for the eBS company, the “suspense” entity is no longer part of the data set and would therefore not be cleared.  While custom FDM processes can be added to an application to address this situation, the pure out-of-the-box solution needs to account for this drawback.
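The stale-suspense problem can be sketched in a few lines of Python – a conceptual model of per-entity clearing, with hypothetical entity names, not actual HFM behavior code:

```python
# Sketch of replace-style clearing: only entities present in the
# incoming data set are cleared and reloaded. Once the errant company
# is properly mapped, "Suspense" drops out of the data set, so the
# rows parked there are never cleared automatically.

previously_loaded = {"Suspense": 500.0, "Company_A": 1000.0}
incoming = {"Company_A": 1200.0}   # Suspense no longer in the load

for entity in incoming:            # clear, then reload, per entity
    previously_loaded[entity] = incoming[entity]

print(previously_loaded)  # the 500.0 parked in Suspense survives
```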

Finally, the suspense/autocorrect process only executes during lights out executions.  Any time a user would interactively load the data, this functionality would not be invoked.  While these ad hoc executions are expected to be infrequent, it still needs to be considered.

So let’s close the loop on this.  The client wants to map any unmapped segment values to a suspense member.  So you have to ask: what do you gain by using this approach?  This is the exact question I asked, and the answer I received was: we don’t want the good data to fail to load because of the unmapped items in the data set.  And now we really get to the crux of the issue.  We need to share a bit more knowledge about FDM and HFM.

Let’s quickly revisit the main points of our configuration:

  • Pass through maps for each dimension
  • Intersection Validation is disabled

What does this mean in terms of making sure we don’t prevent the good records from loading to HFM?  The answer is that the current configuration already meets this requirement.  By default, HFM loads all of the valid records in a data file; it does not stop the load when it encounters an error.  So if I attempt to load 100 records to HFM and record 65 is invalid, 99 records are still loaded, not just the 64 that preceded the error.  The errors are collected in a comprehensive error log that can be reviewed by the administrator and/or end user.  The added benefit is that the error log shows us the exact intersection of data that failed to load.
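Here is that behavior as a small Python sketch – a conceptual model of skip-and-log loading with made-up account names, not HFM code:

```python
# Sketch of HFM's load behavior: an invalid row is skipped and logged,
# but the load does not stop at the first error.

records = [("Acct%05d" % i, 100.0) for i in range(1, 101)]
valid_accounts = {acct for acct, _ in records} - {"Acct00065"}  # pretend row 65 is bad

loaded, errors = [], []
for acct, amount in records:
    (loaded if acct in valid_accounts else errors).append((acct, amount))

print(len(loaded))   # 99 -- everything but the bad row
print(errors)        # the "error log" names the exact failed record
```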

Conversely, if we mapped to a suspense member, we would need to run a report in FDM that shows all of the source dimension members that were mapped to the suspense members.  While this can be done fairly easily, it’s another unnecessary step.

Alright, so the out-of-the-box option is likely not viable.  Let’s briefly look at a custom solution that I considered and presented as an option.  A custom process could be created that allows pass-through maps to be used.  Prior to exporting data from FDM, the mapped values could be compared against the HFM metadata.  Any mapped dimension member not found in HFM could be changed to load to a suspense member.  This would prevent data load errors related to invalid members.  Another benefit is that this process could run during lights-out batch loads as well as ad hoc interactive executions.
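The core of that custom check could look something like the following Python sketch – purely illustrative, with a hypothetical member list and suspense target, not an actual FDM event script:

```python
# Sketch of the custom pre-export check: compare each mapped target
# against the HFM member list and reroute anything missing to a
# suspense member. Member names here are hypothetical.

hfm_members = {"10000", "20000", "30000"}
SUSPENSE = "Holding"

def pre_export_fixup(mapped_rows):
    """Swap any target that doesn't exist in HFM for the suspense member."""
    return [
        (target if target in hfm_members else SUSPENSE, amount)
        for target, amount in mapped_rows
    ]

rows = [("10000", 50.0), ("99999", 25.0)]   # 99999 passed through but isn't in HFM
print(pre_export_fixup(rows))               # second row rerouted to Holding
```

In a real implementation, the member list would be pulled from HFM (or the target application’s metadata) at run time rather than hard-coded.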

Great, there’s our answer!  Not so fast.  Let’s talk about the drawbacks.  First, this is a custom process, which means the client would need to support it throughout the application’s life.  With the natural turnover that exists in any organization, this can become a risk.  Second, the aforementioned problem with data in a suspense entity would still need to be considered.  Third, loading to an invalid intersection could still occur, which would result in a load error.  Finally, and not to be underestimated, this process adds processing time to the workflow steps.  It’s hard to quantify without building and testing, but for organizations with large data volumes, this is an important variable to consider.

So the punch line is: yes, you can map to a suspense member, but you really need to understand whether there is a valid reason to do so.  In this example, I do not feel there is a strong argument for using this functionality.  As I mentioned in my last post, I try to use the phrase “best practice” sparingly.  When I do apply that term as a counterargument, I strive to quantify the reasons why I think the proposed solution is outside the bounds of a “best practice” so that the client can make an educated decision.  I hope the above has illustrated this approach.

Let me close by asking you, what would be the right answer for you?
