I have posted over the last couple of months about an adapter class that I have been using with Azure table storage. The adapter has been really useful as a bridge between Azure table entities and application domain models, and a few interesting scenarios have come up while using this technique. Here is the history so far:
I posted earlier this year about an adapter class to be the bridge between ITableEntity and a domain model class when using Azure table storage. I hit a problem with this today when I was dealing with a model class that had a Timestamp property.
While the adapter class is intended to encapsulate ITableEntity to prevent it leaking from the data layer, this particular model actually wanted to expose the Timestamp value from ITableEntity. This didn’t go down too well.
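As a rough illustration of what I was after (not the actual fix, just a sketch that assumes the EntityAdapter<T> from that post exposes the wrapped model via a Value property, implements ITableEntity.Timestamp as a regular property and marks ReadEntity as virtual), the adapter could copy the storage timestamp onto the model when the entity is read back:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical domain model that wants to surface the storage timestamp.
public class AuditEntry
{
    public string Message { get; set; }
    public DateTimeOffset Timestamp { get; set; }
}

// Hypothetical adapter for that model.
public class AuditEntryAdapter : EntityAdapter<AuditEntry>
{
    public override void ReadEntity(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        base.ReadEntity(properties, operationContext);

        // Surface the ITableEntity timestamp on the domain model so the
        // data layer does not need to leak ITableEntity to callers.
        Value.Timestamp = Timestamp;
    }
}
```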
I’ve just hit a StorageException with Azure Table Services that does not occur in the local emulator.
Unexpected response code for operation : 5
The only hit on the net for this error is here. That post indicates that the cause is invalid characters in either the PartitionKey or RowKey values. I know that this is not the case for my data set. It turns out this failure also occurs for invalid data in the entity fields themselves. In my scenario a null value was pushed into a DateTime property, leaving it as default(DateTime), which is a value that ATS will not accept.
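The sting is that default(DateTime) sits well below the minimum date ATS supports (1601-01-01 UTC), so nothing complains until the write hits the real service. A defensive sketch (a hypothetical helper, not part of the adapter) looks something like this:

```csharp
using System;

public static class TableStorageDates
{
    // ATS will not accept DateTime values earlier than 1601-01-01 (UTC),
    // which is exactly where a null coerced into default(DateTime) ends up.
    private static readonly DateTime Minimum =
        new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public static DateTime EnsureStorable(DateTime value)
    {
        // Clamp to the minimum supported value rather than letting the
        // write fail with "Unexpected response code for operation".
        return value < Minimum ? Minimum : value;
    }
}
```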
I got a request for an example of how to use the EntityAdapter class I previously posted about. Here is an example of a PersonAdapter.
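The sketch below assumes the EntityAdapter<T> base class from the earlier post exposes the wrapped model through a Value property and asks derived classes for the keys via BuildPartitionKey and BuildRowKey; the Person model and the key choices are hypothetical.

```csharp
// Hypothetical domain model.
public class Person
{
    public string Email { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonAdapter : EntityAdapter<Person>
{
    public PersonAdapter()
    {
    }

    public PersonAdapter(Person person)
        : base(person)
    {
    }

    protected override string BuildPartitionKey()
    {
        return Value.Email;
    }

    protected override string BuildRowKey()
    {
        // PartitionKey + RowKey must uniquely identify the entity.
        return Value.LastName + "_" + Value.FirstName;
    }
}
```

The adapter is what gets handed to the table operation, while the rest of the application only ever sees Person.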
I recently posted about an EntityAdapter class that can be the bridge between the ITableEntity that Azure table services requires and the domain model class that you actually want to use. I found an issue with this implementation: the TableEntity.ReadUserObject and TableEntity.WriteUserObject methods that the EntityAdapter relies on only support mapping properties of types that are intrinsically supported by ATS. This means your domain model will end up with default values for any property that is not String, Binary, Boolean, DateTime, DateTimeOffset, Double, Guid, Int32 or Int64.
I hit this issue because I started working with a model class that exposes an enum property. The integration tests failed because the read of the entity using the adapter returned the default enum value for the property rather than the one I attempted to write to the table. I have updated the EntityAdapter class to cater for this by using reflection and type converters to fill in the gaps.
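As a rough illustration of the idea (hypothetical helper names, not the actual updated class): any property that WriteUserObject skipped gets serialised through its TypeConverter as a string, and converted back again on read.

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Reflection;
using Microsoft.WindowsAzure.Storage.Table;

public static class AdditionalPropertyMapper
{
    public static void WriteAdditionalProperty(
        IDictionary<string, EntityProperty> properties,
        object entity,
        PropertyInfo property)
    {
        if (properties.ContainsKey(property.Name))
        {
            // Already handled by TableEntity.WriteUserObject.
            return;
        }

        var converter = TypeDescriptor.GetConverter(property.PropertyType);
        var value = property.GetValue(entity, null);

        // Store the unsupported type (an enum for example) as a string.
        properties[property.Name] =
            EntityProperty.GeneratePropertyForString(
                converter.ConvertToInvariantString(value));
    }

    public static void ReadAdditionalProperty(
        IDictionary<string, EntityProperty> properties,
        object entity,
        PropertyInfo property)
    {
        EntityProperty storedValue;

        if (properties.TryGetValue(property.Name, out storedValue) == false)
        {
            return;
        }

        var converter = TypeDescriptor.GetConverter(property.PropertyType);

        // Convert the stored string back into the property type.
        property.SetValue(
            entity,
            converter.ConvertFromInvariantString(storedValue.StringValue),
            null);
    }
}
```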
I’ve been running this check-in procedure for several years with my development teams. The intention is for developers to get their code into an acceptable state before submitting it to source control. It attempts to avoid some classic bad habits around source control, such as:
- End-of-day check-ins regardless of the state of the changes
- Missing changeset comments
- Using the build system as the point of compilation/quality validation
- Big bang changesets
- Cross-purpose changesets
I’ve finally gotten around to adding some reg files for using WinMerge with VS2013. You can download them from the bottom of my Using WinMerge with TFS post. These reg files will configure VS2013 to use WinMerge for TFS diff/merge operations (no Visual Studio restart is required).
When working with Azure Table Storage you will ultimately have to deal with ITableEntity. My solution to date has been to create a class that derives from my model class and then implements ITableEntity. This derived class can then provide the plumbing for table storage while allowing the data layer to return the correct model type.
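For illustration, this is the shape of that approach with a hypothetical Person model:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Domain model used by the rest of the application (hypothetical).
public class Person
{
    public string Email { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Storage-only type: inherits the model and bolts on the ITableEntity plumbing.
public class PersonTableEntity : Person, ITableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTimeOffset Timestamp { get; set; }
    public string ETag { get; set; }

    public void ReadEntity(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        TableEntity.ReadUserObject(this, properties, operationContext);
    }

    public IDictionary<string, EntityProperty> WriteEntity(
        OperationContext operationContext)
    {
        return TableEntity.WriteUserObject(this, operationContext);
    }
}
```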
The problem here is that ITableEntity still leaks outside of the Azure DAL, even though it is represented as the expected type. While I don’t like my classes leaking knowledge inappropriately to higher layers, I also don’t like writing plumbing logic that converts between two model classes that are logically the same (although tools like AutoMapper do take some of that pain away).
Writing records to Azure Table Storage in batches is handy when you are writing a lot of records because it reduces the transaction cost. There are restrictions, however. Each batch must:
- Contain no more than 100 records
- Have the same partition key
- Have unique row keys
Writing batches is easy, even while adhering to the above rules. The problem, however, is that it can start to result in a lot of boilerplate-style code. I created a batch writer class to abstract this logic away.
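The sketch below shows the general shape of the idea rather than the actual class (the class and method names are hypothetical): group the entities by partition key, chunk each group into blocks of at most 100 and execute each block as a single TableBatchOperation.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

public class TableBatchWriter
{
    private const int MaxBatchSize = 100;
    private readonly CloudTable _table;

    public TableBatchWriter(CloudTable table)
    {
        _table = table;
    }

    public async Task WriteAsync(IEnumerable<ITableEntity> entities)
    {
        // Entities in a single batch must share a partition key.
        foreach (var partition in entities.GroupBy(x => x.PartitionKey))
        {
            var pending = partition.ToList();

            // Chunk each partition into blocks of at most 100 entities.
            for (var index = 0; index < pending.Count; index += MaxBatchSize)
            {
                var batch = new TableBatchOperation();

                foreach (var entity in pending.Skip(index).Take(MaxBatchSize))
                {
                    batch.InsertOrReplace(entity);
                }

                await _table.ExecuteBatchAsync(batch).ConfigureAwait(false);
            }
        }
    }
}
```

The remaining rule, unique row keys within a batch, is left to the caller in this sketch.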
My upgrade pain with VS2013 and the Azure SDK 2.2 continues. Hosted build now fails with the following error:
The task factory "CodeTaskFactory" could not be loaded from the assembly "C:\Program Files (x86)\MSBuild\12.0\bin\amd64\Microsoft.Build.Tasks.v4.0.dll". Could not load file or assembly 'file:///C:\Program Files (x86)\MSBuild\12.0\bin\amd64\Microsoft.Build.Tasks.v4.0.dll' or one of its dependencies. The system cannot find the file specified.
While my Polish is non-existent, the answer can be found at http://www.benedykt.net/2013/10/10/the-task-factory-codetaskfactory-could-not-be-loaded-from-the-assembly/. The project templates for the web and worker role projects use ToolsVersion="12.0". This needs to be changed to ToolsVersion="4.0" for the hosted build to succeed.
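In practice that means editing the opening Project element of each affected project file so that it reads, for example:

```xml
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
```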