Tuesday, December 8, 2015

Using the XMLTransmit pipeline to write your own custom XML declaration

A question in the MSDN BizTalk Forums caught my eye a few days ago. It was about wanting to set the attribute standalone="yes" in the XML declaration on an outbound message.

I have never needed to set this attribute before, so I wasn't sure whether the map properties or any other setting could produce it. The other responses in the thread told me that it couldn't be done. It was then I remembered an old post I wrote about using the XMLTransmit pipeline properties to write an xml-stylesheet reference into a message. Surely the same trick could write a custom XML declaration instead. And it could.

What you basically do is set the property AddXmlDeclaration to False and then write your full custom XML declaration in the property XmlAsmProcessingInstructions. The pipeline will then emit your custom string where the declaration would have been. Strictly speaking it is not written as the declaration, but since we omit the real one, it takes its place.
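As a sketch, the XMLTransmit pipeline configuration on the send port could then look like this (the declaration string itself is just an example; adjust it to match your document):

AddXmlDeclaration: False
XmlAsmProcessingInstructions: <?xml version="1.0" encoding="utf-8" standalone="yes"?>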

Just make sure the other properties (such as the charset) are set to match what you declare, or the declaration may contradict the document itself, which may or may not cause havoc on the receiving end.

Friday, October 9, 2015

Faster filtering in PowerShell than using Where-Object

In a project I'm using a PowerShell script to read in a lot of .csv files and then do some lookups between them in order to get the output I want. This has all worked well, but it has become a bit slow lately due to a lot more data in the files.

I narrowed the speed issue down to a few lines where I iterate over some files in order to find a few rows that I need. Basically, the Where-Object cmdlet is the culprit.

A simplified example is below:

$mycsvfile = Import-Csv .\mydata.csv
$dataiwant = $mycsvfile | Where-Object {$_.idno -eq 5}

I had a few similar lines in the script, making the time to select the data add up to nine seconds, which was far too long in this case.

I found this post that shows a few variants on filtering collections, and by simply using the PowerShell v4 .Where() notation to do the same thing, I could bring this down to a single second.

$mycsvfile = Import-Csv .\mydata.csv
$dataiwant = $mycsvfile.Where({$_.idno -eq 5})
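If you want to verify the gain on your own data, Measure-Command gives a quick comparison (same file and filter as in the example above):

Measure-Command { $mycsvfile | Where-Object {$_.idno -eq 5} }
Measure-Command { $mycsvfile.Where({$_.idno -eq 5}) }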

So lesson learned: PowerShell is evolving quickly and what I thought was a nice way to do something might very well be just fine, but there might also be a quicker way just around the corner. In this example, I'm sacrificing the streaming capabilities, but gain a lot of performance, just by changing a few characters.

Tuesday, September 29, 2015

XSD in Visual Studio and the warning message "Request for the permission of type 'System.Security.Permissions.FileIOPermission"

When working in a project in Visual Studio that had a schema repository added, the warning message "Request for the permission of type 'System.Security.Permissions.FileIOPermission" showed up in various places where I had schema imports. The warning did not show up everywhere though and the project built just fine.

It turned out that the reason was that some of the xsd files were downloaded or copied from a network share and were therefore blocked by Windows, leading to the warning message.

The warnings could be removed by simply unblocking the offending files, and restarting Visual Studio.

This also led me to a quick way to unblock a lot of files at once. Simply use PowerShell to iterate over the files and pipe them to the Unblock-File cmdlet:
gci -r | Unblock-File
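If you only want to touch the schema files, a filtered variant works just as well (assuming the standard .xsd extension):

gci -r -Filter *.xsd | Unblock-File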

Tuesday, November 4, 2014

Raspberry Pi, kill the tvservice if you are not using HDMI

I'm running a Raspberry Pi at home for various tasks and access it remotely over SSH for all maintenance. This means that the HDMI circuitry in the Pi is unused and can be switched off; maybe not for better performance, but at least for a lower working temperature.

When logged in, you can switch the HDMI off by running
tvservice -o
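Should you need the display back without rebooting, it can be powered on again with the preferred display mode:
tvservice -p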

The off setting will not survive a reboot though, so one way to handle it is to simply add the command to the boot scripts. I did it the quick way by creating a small init script:
vi /etc/init.d/customboot.sh

#!/bin/sh
### BEGIN INIT INFO
# Provides:          customboot
# Required-Start:    networking
# Required-Stop:     networking
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Custom boot commands
# Description:       Custom boot commands
### END INIT INFO

echo "Turning off tvservice"
/opt/vc/bin/tvservice -o

Then make the script executable and register it in the boot sequence:
sudo chmod +x /etc/init.d/customboot.sh
sudo update-rc.d customboot.sh defaults

By doing this, I lowered the idle temperature of my Pi by about two degrees Celsius.

Thursday, July 3, 2014

Slow performance in PowerShell Get-ChildItem over UNC paths

I got a ticket submitted to me that proved to be quite interesting. The basics were that a specific maintenance script was running for ages on a server, causing some issues.

The script itself was quite simple: it did a recursive gci on a path and performed an action on each item matching a where clause that identified empty folders. Nothing wrong there. But even though the specified path contained a massive amount of folders and files, it still shouldn't have been running for days (literally), which was what caused the ticket to be created.

When looking into the matter, I found this blog post on the Windows PowerShell blog that goes into detail on why Get-ChildItem has slow performance at times. The main culprit is the .NET APIs, which are too chatty and cause a lot of overhead traffic over the network when querying for files. This was fixed in PowerShell 3.0, which uses new APIs.

A quick test using a folder with 700 subfolders and a total of 40,000 files, where I execute the script line

(gci d:\temp -Recurse | Where-Object {$_.PSIsContainer}) | Where-Object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} | Out-File d:\dir.txt

reveals the following execution time numbers:

Using PowerShell 2.0 and accessing the path as a UNC path: 100s
Using PowerShell 3.0 and accessing the path as a UNC path: 33s
Using PowerShell 2.0 and accessing the path locally: 6s
Using PowerShell 3.0 and accessing the path locally: 5s

In my case, the server was both running PowerShell 2.0 and accessing the path as a UNC path, causing a major performance hit. The solution is both to run the script locally on the server where the task has to be performed and to upgrade to at least PowerShell 3.0. As can be seen from my quick test, the biggest gain clearly comes from running the script locally, giving about 17 times better performance.

While it is no rocket science that accessing tons of files locally has to be faster than doing it over the network, it is still fairly common to see scripts executed on application servers performing tasks on file shares.

This also proves that it is important in our business to stay on top of new versions of tools and frameworks in order to catch these types of improvements. I can be certain that there are quite a number of people out there bashing gci heavily for its slow performance when in fact it has improved a lot between PowerShell 2.0 and 3.0.
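As a footnote to the version point: PowerShell 3.0 also added the -Directory switch to Get-ChildItem, so the container filtering in my test line can be pushed into the cmdlet itself. A sketch using the same path as above:

(gci d:\temp -Recurse -Directory) | Where-Object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} | Out-File d:\dir.txt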

Tuesday, June 24, 2014

Base64 encode/decode part of messages in map - part 2

Back in November I wrote a post about encoding messages as base64 strings in a BizTalk map. I never added the explicit implementation of how to decode the message though. However, I was asked to provide an example of it, so here is part 2: how to decode a base64 string from an XML message and output it in BizTalk.

Consider the scenario to be the opposite of what we did in part 1.

We have a message on the input side that has two fields: one with an id as a string, and another string element, EncodedData, that holds the entire base64 encoded string. In skeleton form (the Id element name is illustrative; only EncodedData is used later):

<ns0:Root xmlns:ns0="http://BizTalk_Server_Project6.Schema1">
  <Id>1</Id>
  <EncodedData>...the base64 encoded message...</EncodedData>
</ns0:Root>
On the output side, we want to have a message that conforms to the schema we have defined. The message is the entire contents of the base64 encoded string in the input.

If we decode the string, we get a message of the form:

<ns0:Root xmlns:ns0="http://BizTalk_Server_Project6.Schema2">
  ...
</ns0:Root>

which conforms fully to the schema defined.

So, the task is to extract the contents from the EncodedData element, decode them, and use the full string as the output message. All in a single BizTalk map.

To begin with, the map simply has the two schemas chosen and no links between them.

Similar to when we encoded, we first add a scripting functoid that has no connections and paste the code for the decode function into it:

public string Base64DecodeData(string param1)
{
    // Decode the base64 string back to bytes, then interpret the bytes as UTF-8 text
    byte[] decodedBytes = Convert.FromBase64String(param1);
    return System.Text.Encoding.UTF8.GetString(decodedBytes);
}

We simply take a string as input, decode it, and return the decoded string.

Then we create a new scripting functoid, set it to Inline XSLT, and paste this code into it:

<xsl:variable name="data">
  <xsl:value-of select="//EncodedData" />
</xsl:variable>
<xsl:value-of select="userCSharp:Base64DecodeData($data)" disable-output-escaping="yes" />

This functoid is then connected to the output root node.

When executing this map with the input message above, we will get the output message properly formatted.

Two tricks are used here. First, we use the same pattern as before: a predefined C# function placed in one scripting functoid is called from the XSL code in another. Second, in the XSL we first extract the string from our input message and store it in a variable, which is then passed into the C# function to get the decoded string back. However, if we did not specify disable-output-escaping="yes" in our value-of select, the string would come out entity encoded. With this attribute set, the string is output as it is, the way we want it.

The same technique can of course easily be used to output just part of a message, by connecting the scripting functoid to the node you want to populate (if, for instance, you have a schema with a node defined as xs:any that you want to use).

Monday, June 2, 2014

Error 5644 when trying to enable a SQL Server notification receive location in BizTalk Server

When using the SQL Server broker functionality to have SQL Server push notifications to BizTalk instead of polling a table, I've found the following error quite common to encounter:

The Messaging Engine failed to add a receive location "Event1" with URL "mssql://localhost//EventDB" to the adapter "WCF-Custom". Reason: "Microsoft.ServiceModel.Channels.Common.TargetSystemException: The notification callback returned an error. Info=Invalid. Source=Statement. Type=Subscribe.

The error message tells you that the statement entered is invalid, but not in which way. There are a lot of rules to comply with, all listed on MSDN: http://msdn.microsoft.com/en-us/library/ms181122.aspx

Besides the "normal" ones, such as "do not use aggregate functions" and "do not use an asterisk to define the resultset", the one that constantly haunts me is "table names must be qualified with two-part names", meaning that you have to prefix all tables with the schema name, such as "dbo.EventTable". Just using

select blah from EventTable where processed=0

will generate the error above, while select blah from dbo.EventTable where processed=0 will not.

Thursday, May 29, 2014

SQL Server: Invalid prefix or suffix characters error message

When trying to do different operations in SQL Server Management Studio such as "Edit Top 200 Rows", you might get the error message "Invalid prefix or suffix characters. (MS Visual Database Tools)".

This is most likely due to the fact that the Management Studio tool is an older version than the database you are connected to and trying to perform the operation on. For instance, using SQL Server Management Studio for SQL Server 2008R2 and connecting to a SQL Server 2012 database will render the above error message when trying to perform the "Edit Top 200 Rows" operation on a table or view.

The solution is to use the same version of Management Studio as the database in question.

Friday, April 4, 2014

BizTalk with codeplex SFTP adapter, doing a file move on AfterGet

A client has a setup with the Codeplex SFTP adapter, reading files from a site and then deleting them once read, using the Delete parameter in the AfterGet property of the receive location. A need arose for the files to be moved to an archive folder on the SFTP site instead of being deleted.

There is no mention of this in the documentation of the adapter; however, by checking the underlying code one can see how to accomplish the move.

The OnBatchComplete method, located in BatchHandler.cs, is called when a message has been sent successfully to BizTalk, at which point the file is removed or renamed on the SFTP server. The line

string renameFileName = CommonFunctions.CombinePath(Path.GetDirectoryName(fileName), batchMessage.AfterGetFilename);

is the interesting part. We can see that when the Rename option is chosen in the adapter properties, the CombinePath method is called to create the new full path for the file, based on the directory where it currently resides and a new filename. This can be utilized by setting the new filename to include a relative path in front of the filename, thereby turning the rename into a move. And since the macro replacements are done after the full path is created, you can still keep the original name via %SourceFileName%, making the actual renaming optional.

Suppose we read files located in /home/marcus/btstest/ and after reading these should be moved to /home/marcus/btstest/archive/. In this case, you would set the AfterGet File Name property to read archive/%SourceFileName%

If you instead want to move the files to a directory outside of your current path, you can do so by specifying the parent by the usual two dots (..) like for instance ../btstestarchive/%SourceFileName% for the directory located at /home/marcus/btstestarchive/.

One issue can occur though if you use this technique to move files to an archive folder in order to keep a log, on the SFTP server side, of all files that have been processed. If the filename already exists in the archive folder and the adapter tries to move yet another file with the same name into it, the move will fail. It will be logged in the event log as

Method: Blogical.Shared.Adapters.Sftp.SharpSsh.Sftp.Rename
Error: Unable to rename /home/marcus/btstest/testfile.xml to /home/marcus/btstest/archive/testfile.xml

and the original file will be left in the pickup folder on the SFTP site.

BizTalk will not try to process this file again, until you restart the host that is. Then it will be picked up once more, sent to the messagebox, and the rename will fail again. You can make it a bit safer by using the %DateTime% or %UniversalDateTime% macros that are also available besides %SourceFileName%. Yet another option is to extend BatchHandler.cs with a %MessageID% macro and create a GUID to add to the filename in order to make it truly unique.
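For example, setting the AfterGet File Name to something like archive/%DateTime%_%SourceFileName% (the exact layout is just a suggestion) makes name collisions in the archive folder far less likely, while keeping the original filename visible.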

Monday, March 31, 2014

Fix for jerky mouse movements in remote desktop/Citrix sessions to Windows Server 2012

When connecting over Citrix to a Windows Server 2012 machine, you might experience that the mouse pointer lags severely, causing jerky movement and making it nearly impossible to get any proper work done on the server.

If this is the case, make sure that the Enable pointer shadow checkbox in the Mouse Properties dialog is unchecked. This feature has been the issue on many machines I've encountered lately, and unchecking it has made the UI much more enjoyable.