Thursday, September 10, 2009

Dynamics AX AIF Adapter Progress

Well, after some initial struggles with locked channels (I never resolved this; I just built a new VPC!) I have both the AIF tutorial and some PoCs working.

Just to clarify: in a previous post I mentioned the latest issue of BizTalk HotRod, which has an article on the adapter.  While most of the configuration is the same, it's important to note that the article uses Dynamics AX 4.0, not 2009.

A particular challenge I recently had to overcome concerned security on the send port.  I had set up and tested the AIF tutorials using a Proxy User (providing an AX user account and password directly in the send port), and this worked great.  However, once I wanted to enable another document service (namely LedgerPurchaseInvoiceService) and followed an identical approach (correctly assigning a data policy, verifying my endpoints), I kept getting errors in the event log indicating permission was denied.  I was using the administrator account as both the BC service account and the gateway user.  I looked at a few other reports of similar issues but couldn't get it working.

Some colleagues had mentioned that security configuration with AIF can be a challenge, and in particular that anything other than the Host User configuration can sometimes just not work.  I remain convinced that it should work and that I'm simply not doing something right.  However, I didn't have a lot of time to debug it or open a PSS ticket, so I changed the send port to use the identity of the Host User (ensuring the service account was a user in DAX with the right permissions), and it worked.

I am still concerned about the cause of the issue with the Proxy User configuration, but the reality is that it likely makes more sense to use the host instance account for authentication regardless: it simplifies deployment (no password to maintain), and you can keep whatever degree of account isolation you need (one account for all host instances, or one per endpoint/service).

OutOfMemoryException on MemoryStream in Pipeline Component

On one of my projects, I was using a custom pipeline component to decompress a file in a send pipeline (*see side note on this design decision at the end of the post).  In the production environment, we began to see pipeline failures caused by mscorlib OutOfMemoryExceptions in the pipeline component.  It was happening sporadically, so at first it was not clear what was going on, though we knew the issue was occurring with growing frequency, roughly in correlation with the size of the file (the compressed file was a statement of account balances, which grew as the number of accounts grew).

As many BizTalkers do, I use the SharpZipLib library for Zip compression (see Pro BizTalk 2006 by Dunphy and Metwally for a great example of this) and was taking the approach of loading the zip stream into a MemoryStream object.  The exception was being thrown in my loop, which copied 4K segments into the stream.
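To make the failure mode concrete, here is roughly the shape of that copy loop (a sketch, not my actual component; I've used the BCL's GZipStream as a stand-in for SharpZipLib's ZipInputStream so the snippet is self-contained, but the MemoryStream behavior is identical):

```csharp
using System;
using System.IO;
using System.IO.Compression;

class ChunkCopy
{
    // Copy the decompressed bytes into a MemoryStream in 4K chunks,
    // without declaring the stream's capacity up front.
    public static MemoryStream Decompress(Stream compressed)
    {
        var zipStream = new GZipStream(compressed, CompressionMode.Decompress);
        var output = new MemoryStream();   // no capacity declared up front
        var buffer = new byte[4096];
        int read;
        while ((read = zipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // As output grows, its internal byte[] repeatedly doubles,
            // and every doubling needs a fresh *contiguous* allocation --
            // which is what eventually fails in a fragmented 32-bit
            // address space.
            output.Write(buffer, 0, read);
        }
        output.Position = 0;
        return output;
    }
}
```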

After consulting my trusted advisor, I saw a few discussions about MemoryStream needing a contiguous segment of memory to work.

I quickly inferred the likely culprit: I was loading data into a MemoryStream without telling the CLR how much memory to allocate, so the stream's internal buffer kept doubling as it filled, and each doubling required a new contiguous allocation.  Eventually a contiguous block large enough for the uncompressed file simply couldn't be found.

Short term fix:

For the time being, I have simply updated my pipeline component to declare the size of the MemoryStream up front, based on the zip entry's Size property, which is the uncompressed size of the file in bytes.
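The change amounts to something like this (a sketch; in the real component the size comes from SharpZipLib's ZipEntry.Size for the entry being extracted):

```csharp
using System;
using System.IO;

class Preallocate
{
    // Short-term fix: size the MemoryStream from the entry's uncompressed
    // size (a long in SharpZipLib) so its buffer is allocated once,
    // up front, instead of doubling repeatedly during the copy loop.
    public static MemoryStream CreateSized(long uncompressedSize)
    {
        // MemoryStream's capacity constructor only takes an Int32, so the
        // long has to be squeezed down; at least fail loudly rather than
        // silently wrapping if the file ever outgrows int.MaxValue.
        if (uncompressedSize > int.MaxValue)
            throw new InvalidDataException(
                "Entry too large for an in-memory buffer: " + uncompressedSize);
        return new MemoryStream(checked((int)uncompressedSize));
    }
}
```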

DANGER: the Size property is a long, whereas you can only instantiate a MemoryStream with an Int32 capacity (which of course ties back to the 2GB memory limit for 32-bit processes).  Knowing the file size will not grow beyond 1GB uncompressed, I have squished the long into an int, which of course is terrible.  Hence, a long-term fix:

Long term fix:

Though I have yet to implement this, the sensible approach is instead to use something like VirtualStream, which offloads data to the file system once a stream exceeds a configured size, sparing your poor BTSNTSvc.exe.
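To illustrate the idea, here is a minimal sketch of the spill-to-disk concept (my own simplified stream, not VirtualStream's actual implementation): buffer writes in memory up to a threshold, then transparently move everything to a temp file.

```csharp
using System;
using System.IO;

// Buffers in memory until a threshold is exceeded, then spills the
// whole stream to a temp file so a huge payload can't exhaust the
// process's address space.
class OverflowStream : Stream
{
    private Stream _inner = new MemoryStream();
    private readonly int _threshold;
    private string _tempFile;

    public OverflowStream(int thresholdBytes) { _threshold = thresholdBytes; }

    public bool SpilledToDisk { get { return _tempFile != null; } }

    public override void Write(byte[] buffer, int offset, int count)
    {
        if (_tempFile == null && _inner.Length + count > _threshold)
        {
            // Spill: copy the in-memory bytes into a temp file and keep
            // writing there from now on.
            _tempFile = Path.GetTempFileName();
            var file = new FileStream(_tempFile, FileMode.Create,
                                      FileAccess.ReadWrite);
            _inner.Position = 0;
            var chunk = new byte[4096];
            int read;
            while ((read = _inner.Read(chunk, 0, chunk.Length)) > 0)
                file.Write(chunk, 0, read);
            _inner.Dispose();
            _inner = file;
        }
        _inner.Write(buffer, offset, count);
    }

    public override int Read(byte[] buffer, int offset, int count)
    { return _inner.Read(buffer, offset, count); }

    public override long Seek(long offset, SeekOrigin origin)
    { return _inner.Seek(offset, origin); }

    public override void SetLength(long value) { _inner.SetLength(value); }
    public override void Flush() { _inner.Flush(); }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    { get { return _inner.Position; } set { _inner.Position = value; } }
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanWrite { get { return _inner.CanWrite; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _inner.Dispose();
            if (_tempFile != null) File.Delete(_tempFile);
        }
        base.Dispose(disposing);
    }
}
```

The real VirtualStream is more sophisticated, but the principle is the same: the pipeline code keeps working against a plain Stream, and memory use stays bounded no matter how large the decompressed file gets.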

Indeed, hardware should never be used to mask bad code, but it's interesting to consider that this issue would likely not have arisen on a 64-bit OS, which we can hopefully encourage all clients to move to in the future.

 

* As a side note, the unzip was done in a send pipeline, as opposed to the usual approach of decompressing a file in a receive pipeline, because the contents of the file were not needed for routing.  All we wanted to do was route on the file name, so by delaying decompression we only had to load a 16MB compressed file into the MessageBox instead of a 700MB uncompressed one.