Generic Func IEqualityComparer

Developers generally like the LINQ syntax with all its lambda goodness. It is fluent and easy to write. Then you do something like dataSet.Intersect(otherData, OHNO!).

Signatures like the LINQ Intersect overloads seem to just get in the way of productive development. With everything else expressed in lambda syntax, we are suddenly forced back into the world of IEqualityComparer. The easy fix is to drop in a generic equality comparer that wraps a Func.

public class PredicateComparer<T> : IEqualityComparer<T>
{
    private readonly Func<T, T, bool> _comparer;

    public PredicateComparer(Func<T, T, bool> comparer)
    {
        _comparer = comparer;
    }

    public bool Equals(T x, T y)
    {
        return _comparer(x, y);
    }

    public int GetHashCode(T obj)
    {
        // We don't want to use hash code comparison
        // Return zero to force usage of Equals
        return 0;
    }
}

This little helper doesn’t totally fix the syntax problem, but it does limit how big your coding speed bumps are. For example:

var matchingEntities = allEntities.Intersect(
    subsetOfEntities,
    new PredicateComparer<MyEntityType>((x, y) => x.Id == y.Id));

Easy.
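One caveat: returning zero from GetHashCode forces every candidate pair through Equals, which can make set operations like Intersect quadratic on large inputs. When equality really boils down to comparing a key, a key-selector variant (a hypothetical KeyComparer sketched below) keeps hashing effective:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical variant of PredicateComparer: deriving both Equals and
// GetHashCode from a key selector lets LINQ use hash buckets instead of
// falling back to pairwise Equals calls.
public class KeyComparer<T, TKey> : IEqualityComparer<T>
{
    private readonly Func<T, TKey> _keySelector;

    public KeyComparer(Func<T, TKey> keySelector)
    {
        _keySelector = keySelector;
    }

    public bool Equals(T x, T y)
    {
        return EqualityComparer<TKey>.Default.Equals(_keySelector(x), _keySelector(y));
    }

    public int GetHashCode(T obj)
    {
        return EqualityComparer<TKey>.Default.GetHashCode(_keySelector(obj));
    }
}
```

The earlier example then becomes allEntities.Intersect(subsetOfEntities, new KeyComparer&lt;MyEntityType, int&gt;(x => x.Id)), assuming Id is an int.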

Azure Table Storage Adapter - Fixes and Features

I have posted over the last couple of months about an adapter class that I have been using to work with Azure table storage. The adapter has been really useful as a bridge between Azure table entities and application domain models. Some interesting scenarios have been discovered while using this technique. Here is the history so far:

The class has evolved well, although two issues have been identified.

  1. The last post indicates a workaround where your domain model exposes a property from ITableEntity (namely the Timestamp property). While there is a workaround, it would be nice if the adapter just took care of this for you.
  2. Commenter Andy highlighted a race condition where unsupported property types were not read correctly when the first operation on the type was a read instead of a write (see here).

The race condition that Andy found explains one of the occasional production issues that I had never been able to pin down (big shout out and thanks to Andy).

This latest version of the EntityAdapter class fixes the two above issues.

namespace MySystem.Server.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Diagnostics.CodeAnalysis;
    using System.Globalization;
    using System.Linq;
    using System.Reflection;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;

    /// <summary>
    ///     The <see cref="EntityAdapter{T}" />
    ///     class provides the base adapter implementation for reading and writing a POCO class with Azure Table Storage.
    /// </summary>
    /// <typeparam name="T">
    ///     The type of value.
    /// </typeparam>
    [CLSCompliant(false)]
    public abstract class EntityAdapter<T> : ITableEntity where T : class, new()
    {
        /// <summary>
        ///     The synchronization lock.
        /// </summary>
        /// <remarks>A dictionary is not required here because the static will have a different value for each generic type.</remarks>
        private static readonly object _syncLock = new object();

        /// <summary>
        ///     The additional properties to map for types.
        /// </summary>
        /// <remarks>A dictionary is not required here because the static will have a different value for each generic type.</remarks>
        private static List<AdditionalPropertyMetadata> _additionalProperties;

        /// <summary>
        ///     The partition key
        /// </summary>
        private string _partitionKey;

        /// <summary>
        ///     The row key
        /// </summary>
        private string _rowKey;

        /// <summary>
        ///     The entity value.
        /// </summary>
        private T _value;

        /// <summary>
        ///     Initializes a new instance of the <see cref="EntityAdapter{T}" /> class.
        /// </summary>
        protected EntityAdapter()
        {
        }

        /// <summary>
        ///     Initializes a new instance of the <see cref="EntityAdapter{T}" /> class.
        /// </summary>
        /// <param name="value">
        ///     The value.
        /// </param>
        protected EntityAdapter(T value)
        {
            Guard.That(value, "value").IsNotNull();

            _value = value;
        }

        /// <inheritdoc />
        [SuppressMessage("Microsoft.Design", "CA1062:Validate arguments of public methods", MessageId = "0",
            Justification = "Parameter is validated using CodeGuard.")]
        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            Guard.That(properties, "properties").IsNotNull();

            _value = new T();

            TableEntity.ReadUserObject(Value, properties, operationContext);

            var additionalMappings = GetAdditionPropertyMappings(Value, operationContext);

            if (additionalMappings.Count > 0)
            {
                ReadAdditionalProperties(properties, additionalMappings);
            }

            ReadValues(properties, operationContext);
        }

        /// <inheritdoc />
        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            var properties = TableEntity.WriteUserObject(Value, operationContext);

            var additionalMappings = GetAdditionPropertyMappings(Value, operationContext);

            if (additionalMappings.Count > 0)
            {
                WriteAdditionalProperties(additionalMappings, properties);
            }

            WriteValues(properties, operationContext);

            return properties;
        }

        /// <summary>
        ///     Builds the entity partition key.
        /// </summary>
        /// <returns>
        ///     The partition key of the entity.
        /// </returns>
        protected abstract string BuildPartitionKey();

        /// <summary>
        ///     Builds the entity row key.
        /// </summary>
        /// <returns>
        ///     The <see cref="string" />.
        /// </returns>
        protected abstract string BuildRowKey();

        /// <summary>
        ///     Clears the cache.
        /// </summary>
        protected void ClearCache()
        {
            lock (_syncLock)
            {
                _additionalProperties = null;
            }
        }

        /// <summary>
        ///     Reads the values from the specified properties.
        /// </summary>
        /// <param name="properties">
        ///     The properties of the entity.
        /// </param>
        /// <param name="operationContext">
        ///     The operation context.
        /// </param>
        protected virtual void ReadValues(
            IDictionary<string, EntityProperty> properties,
            OperationContext operationContext)
        {
        }

        /// <summary>
        ///     Writes the entity values to the specified properties.
        /// </summary>
        /// <param name="properties">
        ///     The properties.
        /// </param>
        /// <param name="operationContext">
        ///     The operation context.
        /// </param>
        protected virtual void WriteValues(
            IDictionary<string, EntityProperty> properties,
            OperationContext operationContext)
        {
        }

        /// <summary>
        ///     Gets the additional property mappings.
        /// </summary>
        /// <param name="value">The value.</param>
        /// <param name="operationContext">The operation context.</param>
        /// <returns>
        ///     The additional property mappings.
        /// </returns>
        private static List<AdditionalPropertyMetadata> GetAdditionPropertyMappings(
            T value,
            OperationContext operationContext)
        {
            if (_additionalProperties != null)
            {
                return _additionalProperties;
            }

            List<AdditionalPropertyMetadata> additionalProperties;

            lock (_syncLock)
            {
                // Check the mappings again to protect against race conditions on the lock
                if (_additionalProperties != null)
                {
                    return _additionalProperties;
                }

                additionalProperties = ResolvePropertyMappings(value, operationContext);

                _additionalProperties = additionalProperties;
            }

            return additionalProperties;
        }

        /// <summary>
        ///     Resolves the additional property mappings.
        /// </summary>
        /// <param name="value">The value.</param>
        /// <param name="operationContext">The operation context.</param>
        /// <returns>
        ///     The additional properties.
        /// </returns>
        private static List<AdditionalPropertyMetadata> ResolvePropertyMappings(
            T value,
            OperationContext operationContext)
        {
            var storageSupportedProperties = TableEntity.WriteUserObject(value, operationContext);
            var objectProperties = value.GetType().GetProperties();
            var infrastructureProperties = typeof(ITableEntity).GetProperties();
            var missingProperties =
                objectProperties.Where(
                    objectProperty => storageSupportedProperties.ContainsKey(objectProperty.Name) == false);

            var additionalProperties = missingProperties.Select(
                x => new AdditionalPropertyMetadata
                {
                    IsInfrastructureProperty = infrastructureProperties.Any(y => x.Name == y.Name),
                    PropertyMetadata = x
                });

            return additionalProperties.ToList();
        }

        /// <summary>
        ///     Reads the additional properties.
        /// </summary>
        /// <param name="properties">The properties.</param>
        /// <param name="additionalMappings">The additional mappings.</param>
        /// <exception cref="System.InvalidOperationException">
        ///     The ITableEntity interface now defines a property that is not
        ///     supported by this adapter.
        /// </exception>
        private void ReadAdditionalProperties(
            IDictionary<string, EntityProperty> properties,
            IEnumerable<AdditionalPropertyMetadata> additionalMappings)
        {
            // Populate the properties missing from ReadUserObject
            foreach (var additionalMapping in additionalMappings)
            {
                if (additionalMapping.IsInfrastructureProperty)
                {
                    // We don't want to use a string conversion here
                    // Explicitly map the types across
                    if (additionalMapping.PropertyMetadata.Name == "Timestamp" &&
                        additionalMapping.PropertyMetadata.PropertyType == typeof(DateTimeOffset))
                    {
                        // This is the timestamp property
                        additionalMapping.PropertyMetadata.SetValue(Value, Timestamp);
                    }
                    else if (additionalMapping.PropertyMetadata.Name == "ETag" &&
                             additionalMapping.PropertyMetadata.PropertyType == typeof(string))
                    {
                        // This is the ETag property
                        additionalMapping.PropertyMetadata.SetValue(Value, ETag);
                    }
                    else if (additionalMapping.PropertyMetadata.Name == "PartitionKey" &&
                             additionalMapping.PropertyMetadata.PropertyType == typeof(string))
                    {
                        // This is the PartitionKey property
                        additionalMapping.PropertyMetadata.SetValue(Value, PartitionKey);
                    }
                    else if (additionalMapping.PropertyMetadata.Name == "RowKey" &&
                             additionalMapping.PropertyMetadata.PropertyType == typeof(string))
                    {
                        // This is the RowKey property
                        additionalMapping.PropertyMetadata.SetValue(Value, RowKey);
                    }
                    else
                    {
                        const string UnsupportedPropertyMessage =
                            "The {0} interface now defines a property {1} which is not supported by this adapter.";

                        var message = string.Format(
                            CultureInfo.CurrentCulture,
                            UnsupportedPropertyMessage,
                            typeof(ITableEntity).FullName,
                            additionalMapping.PropertyMetadata.Name);

                        throw new InvalidOperationException(message);
                    }
                }
                else if (properties.ContainsKey(additionalMapping.PropertyMetadata.Name))
                {
                    // This is a property that has an unsupported type
                    // Use a converter to resolve and apply the correct value
                    var propertyValue = properties[additionalMapping.PropertyMetadata.Name];
                    var converter = TypeDescriptor.GetConverter(additionalMapping.PropertyMetadata.PropertyType);
                    var convertedValue = converter.ConvertFromInvariantString(propertyValue.StringValue);

                    additionalMapping.PropertyMetadata.SetValue(Value, convertedValue);
                }

                // The else case here is that the model now contains a property that was not originally stored when the entity was last written
                // This property will assume the default value for its type
            }
        }

        /// <summary>
        ///     Writes the additional properties.
        /// </summary>
        /// <param name="additionalMappings">The additional mappings.</param>
        /// <param name="properties">The properties.</param>
        private void WriteAdditionalProperties(
            IEnumerable<AdditionalPropertyMetadata> additionalMappings,
            IDictionary<string, EntityProperty> properties)
        {
            // Populate the properties missing from WriteUserObject
            foreach (var additionalMapping in additionalMappings)
            {
                if (additionalMapping.IsInfrastructureProperty)
                {
                    // We need to let the storage mechanism handle the write of the infrastructure properties
                    continue;
                }

                var propertyValue = additionalMapping.PropertyMetadata.GetValue(Value);
                var converter = TypeDescriptor.GetConverter(additionalMapping.PropertyMetadata.PropertyType);
                var convertedValue = converter.ConvertToInvariantString(propertyValue);

                properties[additionalMapping.PropertyMetadata.Name] =
                    EntityProperty.GeneratePropertyForString(convertedValue);
            }
        }

        /// <inheritdoc />
        public string ETag
        {
            get;
            set;
        }

        /// <inheritdoc />
        public string PartitionKey
        {
            get
            {
                if (_partitionKey == null)
                {
                    _partitionKey = BuildPartitionKey();
                }

                return _partitionKey;
            }

            set
            {
                _partitionKey = value;
            }
        }

        /// <inheritdoc />
        public string RowKey
        {
            get
            {
                if (_rowKey == null)
                {
                    _rowKey = BuildRowKey();
                }

                return _rowKey;
            }

            set
            {
                _rowKey = value;
            }
        }

        /// <inheritdoc />
        public DateTimeOffset Timestamp
        {
            get;
            set;
        }

        /// <summary>
        ///     Gets the value managed by the adapter.
        /// </summary>
        /// <value>
        ///     The value.
        /// </value>
        public T Value
        {
            get
            {
                return _value;
            }
        }

        /// <summary>
        ///     The <see cref="AdditionalPropertyMetadata" />
        ///     provides information about additional storage properties for an entity type.
        /// </summary>
        private struct AdditionalPropertyMetadata
        {
            /// <summary>
            ///     Gets or sets a value indicating whether this instance is infrastructure property.
            /// </summary>
            /// <value>
            ///     <c>true</c> if this instance is infrastructure property; otherwise, <c>false</c>.
            /// </value>
            public bool IsInfrastructureProperty
            {
                get;
                set;
            }

            /// <summary>
            ///     Gets or sets the property metadata.
            /// </summary>
            /// <value>
            ///     The property metadata.
            /// </value>
            public PropertyInfo PropertyMetadata
            {
                get;
                set;
            }
        }
    }
}

Azure Table Storage Adapter Using Reserved Properties

I posted earlier this year about an adapter class to be the bridge between ITableEntity and a domain model class when using Azure table storage. I hit a problem with this today when I was dealing with a model class that had a Timestamp property.

While the adapter class is intended to encapsulate ITableEntity to prevent it leaking from the data layer, this particular model actually wanted to expose the Timestamp value from ITableEntity. This didn’t go down too well.

Microsoft.WindowsAzure.Storage.StorageException: An incompatible primitive type 'Edm.String[Nullable=True]' was found for an item that was expected to be of type 'Edm.DateTime[Nullable=False]'. ---> Microsoft.Data.OData.ODataException: An incompatible primitive type 'Edm.String[Nullable=True]' was found for an item that was expected to be of type 'Edm.DateTime[Nullable=False]'

The simple fix to the adapter class is to filter out ITableEntity properties from the custom property mapping.

private static List<PropertyInfo> ResolvePropertyMappings(
    T value,
    IDictionary<string, EntityProperty> properties)
{
    var objectProperties = value.GetType().GetProperties();
    var infrastructureProperties = typeof(ITableEntity).GetProperties();
    var missingProperties =
        objectProperties.Where(objectProperty => properties.ContainsKey(objectProperty.Name) == false);
    var additionalProperties =
        missingProperties.Where(x => infrastructureProperties.Any(y => x.Name == y.Name) == false);

    return additionalProperties.ToList();
}

This makes sure that the Timestamp property is not included in the properties written to table storage, which is what caused the StorageException. The model returned from table storage is still left without the Timestamp property being assigned. The adapter class implementation can fix this with the following code.

/// <inheritdoc />
protected override void ReadValues(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
{
    base.ReadValues(properties, operationContext);

    // Write the timestamp property into the same property on the entity
    Value.Timestamp = Timestamp;
}

Now we are up and running again.

Azure Table Services Unexpected response code for operation

I’ve just hit a StorageException with Azure Table Services that does not occur in the local emulator.

Unexpected response code for operation : 5

The only hit on the net for this error is here. That post indicates that invalid characters are in either the PartitionKey or RowKey values. I know that this is not the case for my data set. It turns out this failure also occurs for invalid data in the fields. In my scenario a null value had been pushed into a DateTime property, leaving it at a value that ATS will not accept.
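The failure is easy to reproduce: a null mapped into a DateTime leaves it at default(DateTime) (0001-01-01), which is below the minimum value table storage accepts (1601-01-01 UTC per the storage documentation). A guard along these lines (a hypothetical TableDateTimeGuard helper) fails fast with a clear message before the write:

```csharp
using System;

public static class TableDateTimeGuard
{
    // The documented minimum DateTime that Azure Table Services accepts
    private static readonly DateTime MinimumValue =
        new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    // Hypothetical helper: throws a descriptive exception instead of the
    // opaque "Unexpected response code for operation" from the service
    public static void Validate(DateTime value, string propertyName)
    {
        if (value < MinimumValue)
        {
            throw new ArgumentOutOfRangeException(
                propertyName,
                value,
                "DateTime values before 1601-01-01 cannot be stored in Azure Table Services.");
        }
    }
}
```

Calling TableDateTimeGuard.Validate on each DateTime property before executing the table operation turns the remote failure into an immediate local one.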

Using the EntityAdapter for Azure Table Storage

I got a request for an example of how to use the EntityAdapter class I previously posted about. Here is an example of a PersonAdapter.

public enum Gender
{
    Unspecified = 0,

    Female,

    Male
}

public class Person
{
    public string Email
    {
        get;
        set;
    }

    public string FirstName
    {
        get;
        set;
    }

    public Gender Gender
    {
        get;
        set;
    }

    public string LastName
    {
        get;
        set;
    }
}

public class PersonAdapter : EntityAdapter<Person>
{
    public PersonAdapter()
    {
    }

    public PersonAdapter(Person person) : base(person)
    {
    }

    public static string BuildPartitionKey(string email)
    {
        var index = email.IndexOf("@");

        // A value without an @ is assumed to already be the domain part
        if (index < 0)
        {
            return email;
        }

        return email.Substring(index + 1);
    }

    public static string BuildRowKey(string email)
    {
        var index = email.IndexOf("@");

        return email.Substring(0, index);
    }

    protected override string BuildPartitionKey()
    {
        return BuildPartitionKey(Value.Email);
    }

    protected override string BuildRowKey()
    {
        return BuildRowKey(Value.Email);
    }
}

This adapter can be used to read and write entities to ATS like the following.

public async Task<IEnumerable<Person>> ReadDomainUsersAsync(string domain)
{
    var storageAccount = CloudStorageAccount.Parse("YourConnectionString");

    // Create the table client
    var client = storageAccount.CreateCloudTableClient();

    var table = client.GetTableReference("People");

    var tableExists = await table.ExistsAsync().ConfigureAwait(false);

    if (tableExists == false)
    {
        // No items could possibly be returned
        return new List<Person>();
    }

    var partitionKey = PersonAdapter.BuildPartitionKey(domain);
    var partitionKeyFilter = TableQuery.GenerateFilterCondition(
        "PartitionKey",
        QueryComparisons.Equal,
        partitionKey);

    var query = new TableQuery<PersonAdapter>().Where(partitionKeyFilter);

    var results = table.ExecuteQuery(query);

    if (results == null)
    {
        return new List<Person>();
    }

    return results.Select(x => x.Value).ToList();
}

public async Task WritePersonAsync(Person person)
{
    var storageAccount = CloudStorageAccount.Parse("YourConnectionString");

    // Create the table client
    var client = storageAccount.CreateCloudTableClient();

    var table = client.GetTableReference("People");

    await table.CreateIfNotExistsAsync().ConfigureAwait(false);

    var adapter = new PersonAdapter(person);
    var operation = TableOperation.InsertOrReplace(adapter);

    await table.ExecuteAsync(operation).ConfigureAwait(false);
}

Hope this helps.

Azure EntityAdapter with unsupported table types

I recently posted about an EntityAdapter class that can be the bridge between an ITableEntity that Azure table services requires and a domain model class that you actually want to use. I found an issue with this implementation: TableEntity.ReadUserObject and TableEntity.WriteUserObject, which the EntityAdapter relies on, only support mapping properties of types that are intrinsically supported by ATS. This means your domain model will end up with default values for any property that is not a String, Binary, Boolean, DateTime, DateTimeOffset, Double, Guid, Int32 or Int64.

I hit this issue because I started working with a model class that exposes an enum property. The integration tests failed because reading the entity back through the adapter returned the default enum value rather than the one I had attempted to write to the table. I have updated the EntityAdapter class to cater for this by using reflection and type converters to fill in the gaps.
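The gap-filling relies on System.ComponentModel type converters, which can round-trip otherwise unsupported types (such as enums) through their invariant string form. A minimal sketch of that mechanism in isolation, reusing the Gender enum shape from the PersonAdapter example:

```csharp
using System;
using System.ComponentModel;

public enum Gender
{
    Unspecified = 0,
    Female,
    Male
}

public static class ConverterDemo
{
    // Round-trips an enum through the invariant string form that the
    // adapter stores as an EntityProperty string value
    public static Gender RoundTrip(Gender value)
    {
        var converter = TypeDescriptor.GetConverter(typeof(Gender));
        var stored = converter.ConvertToInvariantString(value);
        return (Gender)converter.ConvertFromInvariantString(stored);
    }
}
```

This is exactly the shape of the read and write paths in the adapter below: ConvertToInvariantString on write, ConvertFromInvariantString on read.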

The class now looks like the following:

namespace MySystem.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    using System.Reflection;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;

    /// <summary>
    ///     The <see cref="EntityAdapter{T}" />
    ///     class provides the base adapter implementation for reading and writing a POCO class with Azure Table Storage.
    /// </summary>
    /// <typeparam name="T">
    ///     The type of value.
    /// </typeparam>
    [CLSCompliant(false)]
    public abstract class EntityAdapter<T> : ITableEntity where T : class, new()
    {
        /// <summary>
        ///     The synchronization lock.
        /// </summary>
        /// <remarks>A dictionary is not required here because the static will have a different value for each generic type.</remarks>
        private static readonly Object _syncLock = new Object();

        /// <summary>
        ///     The additional properties to map for types.
        /// </summary>
        /// <remarks>A dictionary is not required here because the static will have a different value for each generic type.</remarks>
        private static List<PropertyInfo> _additionalProperties;

        /// <summary>
        ///     The partition key
        /// </summary>
        private string _partitionKey;

        /// <summary>
        ///     The row key
        /// </summary>
        private string _rowKey;

        /// <summary>
        ///     The entity value.
        /// </summary>
        private T _value;

        /// <summary>
        ///     Initializes a new instance of the <see cref="EntityAdapter{T}" /> class.
        /// </summary>
        protected EntityAdapter()
        {
        }

        /// <summary>
        ///     Initializes a new instance of the <see cref="EntityAdapter{T}" /> class.
        /// </summary>
        /// <param name="value">
        ///     The value.
        /// </param>
        protected EntityAdapter(T value)
        {
            Guard.That(value, "value").IsNotNull();

            _value = value;
        }

        /// <inheritdoc />
        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            _value = new T();

            TableEntity.ReadUserObject(Value, properties, operationContext);

            var additionalMappings = GetAdditionPropertyMappings(Value, properties);

            if (additionalMappings.Count > 0)
            {
                // Populate the properties missing from ReadUserObject
                foreach (var additionalMapping in additionalMappings)
                {
                    if (properties.ContainsKey(additionalMapping.Name) == false)
                    {
                        // We will let the object assign its default value for that property
                        continue;
                    }

                    var propertyValue = properties[additionalMapping.Name];
                    var converter = TypeDescriptor.GetConverter(additionalMapping.PropertyType);
                    var convertedValue = converter.ConvertFromInvariantString(propertyValue.StringValue);

                    additionalMapping.SetValue(Value, convertedValue);
                }
            }

            ReadValues(properties, operationContext);
        }

        /// <inheritdoc />
        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            var properties = TableEntity.WriteUserObject(Value, operationContext);

            var additionalMappings = GetAdditionPropertyMappings(Value, properties);

            if (additionalMappings.Count > 0)
            {
                // Populate the properties missing from WriteUserObject
                foreach (var additionalMapping in additionalMappings)
                {
                    var propertyValue = additionalMapping.GetValue(Value);
                    var converter = TypeDescriptor.GetConverter(additionalMapping.PropertyType);
                    var convertedValue = converter.ConvertToInvariantString(propertyValue);

                    properties[additionalMapping.Name] = EntityProperty.GeneratePropertyForString(convertedValue);
                }
            }

            WriteValues(properties, operationContext);

            return properties;
        }

        /// <summary>
        ///     Builds the entity partition key.
        /// </summary>
        /// <returns>
        ///     The partition key of the entity.
        /// </returns>
        protected abstract string BuildPartitionKey();

        /// <summary>
        ///     Builds the entity row key.
        /// </summary>
        /// <returns>
        ///     The <see cref="string" />.
        /// </returns>
        protected abstract string BuildRowKey();

        /// <summary>
        ///     Reads the values from the specified properties.
        /// </summary>
        /// <param name="properties">
        ///     The properties of the entity.
        /// </param>
        /// <param name="operationContext">
        ///     The operation context.
        /// </param>
        protected virtual void ReadValues(
            IDictionary<string, EntityProperty> properties,
            OperationContext operationContext)
        {
        }

        /// <summary>
        ///     Writes the entity values to the specified properties.
        /// </summary>
        /// <param name="properties">
        ///     The properties.
        /// </param>
        /// <param name="operationContext">
        ///     The operation context.
        /// </param>
        protected virtual void WriteValues(
            IDictionary<string, EntityProperty> properties,
            OperationContext operationContext)
        {
        }

        /// <summary>
        ///     Gets the additional property mappings.
        /// </summary>
        /// <param name="value">The value.</param>
        /// <param name="properties">The mapped properties.</param>
        /// <returns>
        ///     The additional property mappings.
        /// </returns>
        private static List<PropertyInfo> GetAdditionPropertyMappings(
            T value,
            IDictionary<string, EntityProperty> properties)
        {
            if (_additionalProperties != null)
            {
                return _additionalProperties;
            }

            List<PropertyInfo> additionalProperties;

            lock (_syncLock)
            {
                // Check the mappings again to protect against race conditions on the lock
                if (_additionalProperties != null)
                {
                    return _additionalProperties;
                }

                additionalProperties = ResolvePropertyMappings(value, properties);

                _additionalProperties = additionalProperties;
            }

            return additionalProperties;
        }

        /// <summary>
        ///     Resolves the additional property mappings.
        /// </summary>
        /// <param name="value">The value.</param>
        /// <param name="properties">The properties.</param>
        /// <returns>The additional properties.</returns>
        private static List<PropertyInfo> ResolvePropertyMappings(
            T value,
            IDictionary<string, EntityProperty> properties)
        {
            var objectProperties = value.GetType().GetProperties();

            return
                objectProperties.Where(objectProperty => properties.ContainsKey(objectProperty.Name) == false).ToList();
        }

        /// <inheritdoc />
        public string ETag
        {
            get;
            set;
        }

        /// <inheritdoc />
        public string PartitionKey
        {
            get
            {
                if (_partitionKey == null)
                {
                    _partitionKey = BuildPartitionKey();
                }

                return _partitionKey;
            }

            set
            {
                _partitionKey = value;
            }
        }

        /// <inheritdoc />
        public string RowKey
        {
            get
            {
                if (_rowKey == null)
                {
                    _rowKey = BuildRowKey();
                }

                return _rowKey;
            }

            set
            {
                _rowKey = value;
            }
        }

        /// <inheritdoc />
        public DateTimeOffset Timestamp
        {
            get;
            set;
        }

        /// <summary>
        ///     Gets the value managed by the adapter.
        /// </summary>
        /// <value>
        ///     The value.
        /// </value>
        public T Value
        {
            get
            {
                return _value;
            }
        }
    }
}

Code check in procedure

I’ve been running this check in procedure for several years with my development teams. The intention is for developers to get their code into an acceptable state before submitting it to source control, avoiding some classic source control bad habits such as:

  • Checking in changes at the end of each day rather than when a unit of work is complete
  • Missing changeset comments
  • Using the build system as the point of compile/quality validation
  • Big bang changesets
  • Cross-purpose changesets

Changeset Contents

A changeset should contain a single set of related changes. It should not mix in changes or functionality from unrelated pieces of work, as that makes reviewing changesets and tracking work very difficult. If you do need to work on something unrelated, shelve the prior work (undoing the local changes) and start on the new piece of work. Once that work is checked in according to the procedure below, the shelveset can be brought back down to your local workspace and you can continue where you left off.
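Using the TFVC command line, that shelve-and-resume flow looks roughly like this (the shelveset name and comment are illustrative):

```shell
rem Park the current work: shelve the pending changes and undo them locally
tf shelve /move "FeatureX-wip" /comment:"WIP on feature X"

rem ... make and check in the unrelated change ...

rem Resume: restore the shelved changes into the workspace
tf unshelve "FeatureX-wip"
```

The /move switch is what makes this a suspend rather than a backup: the pending changes are stored on the server and removed from the workspace in one step.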

Check In Procedure

The following set of actions must be taken in order to check in changes to source control.

  1. Pre-check-in
    • Code is functioning correctly and to spec.
    • All code comments are correct and well formatted
    • Code has been cleaned up and is consistent to team standards
  2. Run get latest on the solution
    • Fix any merge issues
  3. Undo any files that haven't changed - see Quick tip for undoing unchanged TFS checkouts
  4. Switch to Release build
  5. Rebuild solution (not just build)
    • Fix any compilation errors
    • Fix any compilation warnings that can be addressed
  6. Deploy database projects to local machine as required
  7. Run all tests
    • They must all pass
  8. Write a comment that describes the changeset
  9. Assign a work item to the changeset
  10. Raise a code review request if the changeset contains code changes
    • Minor changesets that do not change code or have any functional change do not require a review
  11. Verify that no other check-ins have occurred since starting at step 1
  12. Check in
  13. Wait for build to complete (you can do other work during this process)
    • Verify build successful or investigate any failures

Using WinMerge with VS2013

I’ve finally gotten around to adding some reg files for using WinMerge with VS2013. You can download them from the bottom of my Using WinMerge with TFS post. These reg files will configure VS2013 to use WinMerge for TFS diff/merge operations (no Visual Studio restart is required).

Entity Adapter for Azure Table Storage

When working with Azure Table Storage you will ultimately have to deal with ITableEntity. My solution to date has been to create a class that derives from my model class and then implements ITableEntity. This derived class can then provide the plumbing for table storage while allowing the data access layer to return the correct model type.

The problem here is that ITableEntity still leaks outside of the Azure DAL, even though it is represented as the expected type. While I don’t like my classes leaking knowledge inappropriately to higher layers, I also don’t like plumbing logic that converts between two model classes that are logically the same (although tools like AutoMapper do take some of this pain away).

Using an entity adapter is a really clean way to have your cake and eat it too. The original code for this concept was posted by the Windows Azure Storage Team (you can read it here). I’ve taken that code and tweaked it slightly to make it a little more reusable.

namespace MyProject.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;

    internal abstract class EntityAdapter<T> : ITableEntity where T : class, new()
    {
        private string _partitionKey;

        private string _rowKey;

        private T _value;

        protected EntityAdapter()
        {
        }

        protected EntityAdapter(T value)
        {
            Guard.That(value, "value").IsNotNull();

            _value = value;
        }

        /// <inheritdoc />
        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            _value = new T();

            TableEntity.ReadUserObject(_value, properties, operationContext);

            ReadValues(properties, operationContext);
        }

        /// <inheritdoc />
        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            var properties = TableEntity.WriteUserObject(Value, operationContext);

            WriteValues(properties, operationContext);

            return properties;
        }

        protected abstract string BuildPartitionKey();

        protected abstract string BuildRowKey();

        protected virtual void ReadValues(
            IDictionary<string, EntityProperty> properties, 
            OperationContext operationContext)
        {
        }

        protected virtual void WriteValues(
            IDictionary<string, EntityProperty> properties, 
            OperationContext operationContext)
        {
        }

        /// <inheritdoc />
        public string ETag
        {
            get;
            set;
        }

        /// <inheritdoc />
        public string PartitionKey
        {
            get
            {
                if (_partitionKey == null)
                {
                    _partitionKey = BuildPartitionKey();
                }

                return _partitionKey;
            }

            set
            {
                _partitionKey = value;
            }
        }

        /// <inheritdoc />
        public string RowKey
        {
            get
            {
                if (_rowKey == null)
                {
                    _rowKey = BuildRowKey();
                }

                return _rowKey;
            }

            set
            {
                _rowKey = value;
            }
        }

        /// <inheritdoc />
        public DateTimeOffset Timestamp
        {
            get;
            set;
        }

        public T Value
        {
            get
            {
                return _value;
            }
        }
    }
}

This class has the flexibility to build a partition and row key for simple adapter usage and then be extended to override ReadValues and WriteValues to store additional metadata with your value for more complex scenarios. To write your value to table storage you simply wrap it in a new instance of your adapter which will pass the value down to the appropriate base constructor. Reading the entity from table storage will then select the Value property on the way back out.

This method allows the adapter to be an internal bridge between your model class and table storage. The type returned from the DAL is now a POCO, while table storage gets the ITableEntity it needs.
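As a concrete (hypothetical) example, an adapter for a simple Person model only needs to supply the two key-building methods. The Person type and the key choices here are illustrative, not prescribed:

```csharp
public class Person
{
    public string Email { get; set; }

    public string FirstName { get; set; }

    public string LastName { get; set; }
}

internal class PersonAdapter : EntityAdapter<Person>
{
    public PersonAdapter()
    {
    }

    public PersonAdapter(Person person) : base(person)
    {
    }

    protected override string BuildPartitionKey()
    {
        // Partitioning by surname is purely for illustration
        return Value.LastName;
    }

    protected override string BuildRowKey()
    {
        return Value.Email;
    }
}
```

Writing then becomes a matter of wrapping the model, for example table.Execute(TableOperation.InsertOrReplace(new PersonAdapter(person))), and a read via TableOperation.Retrieve&lt;PersonAdapter&gt;(partitionKey, rowKey) hands back an adapter whose Value property is the rehydrated Person.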

Writing batches to Azure Table Storage

Writing records to Azure Table Storage in batches is handy when you are writing a lot of records, because it reduces the transaction cost. There are restrictions, however. A batch must:

  • Be no more than 100 records
  • Have the same partition key
  • Have unique row keys

Writing batches is easy, even adhering to the above rules. The problem, however, is that it can result in a lot of boilerplate code. I created a batch writer class to abstract this logic away.

namespace MyProject.Server.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Globalization;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;
    using MyProject.Server.DataAccess.Azure.Properties;

    /// <summary>
    ///     The <see cref="TableBatchWriter" />
    ///     class manages the process of writing a batch of entities to a <see cref="TableBatchOperation" /> instance.
    /// </summary>
    [CLSCompliant(false)]
    public class TableBatchWriter
    {
        /// <summary>
        ///     The maximum ats table batch size.
        /// </summary>
        private const int MaxAtsTableBatchSize = 100;

        /// <summary>
        ///     The batch tasks.
        /// </summary>
        private readonly List<Task> _batchTasks;

        /// <summary>
        ///     The table to write the batch to.
        /// </summary>
        private readonly CloudTable _table;

        /// <summary>
        ///     The current operation.
        /// </summary>
        private TableBatchOperation _currentOperation;

        /// <summary>
        ///     The partition key for the current batch.
        /// </summary>
        private string _currentPartitionKey;

        /// <summary>
        ///     The row keys for the current partition key.
        /// </summary>
        private List<string> _partitionRowKeys;

        /// <summary>
        ///     The total items written to the table.
        /// </summary>
        private int _totalItems;

        /// <summary>
        ///     Initializes a new instance of the <see cref="TableBatchWriter" /> class.
        /// </summary>
        public TableBatchWriter(CloudTable table)
        {
            Guard.That(() => table).IsNotNull();

            _table = table;

            _batchTasks = new List<Task>();
            _partitionRowKeys = new List<string>();
            _currentOperation = new TableBatchOperation();
        }

        /// <summary>
        ///     Adds the specified entity.
        /// </summary>
        /// <param name="entity">The entity.</param>
        /// <exception cref="System.InvalidOperationException">The entity has a row key conflict in the current batch.</exception>
        public void Add(ITableEntity entity)
        {
            Guard.That(() => entity).IsNotNull();

            if (Count == 0)
            {
                // This is the first entry
                _currentPartitionKey = entity.PartitionKey;
            }
            else if (entity.PartitionKey != _currentPartitionKey)
            {
                Debug.WriteLine(
                    "PartitionKey changed from '{0}' to '{1}' at index {2}. Writing batch of {3} items to table storage.",
                    _currentPartitionKey,
                    entity.PartitionKey,
                    _totalItems,
                    Count);

                WriteBatch();

                _partitionRowKeys = new List<string>();
                _currentPartitionKey = entity.PartitionKey;
            }
            else if (_partitionRowKeys.Contains(entity.RowKey))
            {
                // There are existing items in the batch and we haven't changed partition key
                var message = string.Format(
                    CultureInfo.CurrentCulture,
                    Resources.TableBatchWriter_RowKeyConflict,
                    _currentPartitionKey,
                    entity.RowKey);

                throw new InvalidOperationException(message);
            }

            _partitionRowKeys.Add(entity.RowKey);
            _currentOperation.InsertOrReplace(entity);
            _totalItems++;

            if (Count == MaxAtsTableBatchSize)
            {
                Debug.WriteLine(
                    "Batch count of {0} has been reached at index {1}. Writing batch to table storage.",
                    MaxAtsTableBatchSize,
                    _totalItems - 1);

                WriteBatch();
            }
        }

        /// <summary>
        ///     Adds the items.
        /// </summary>
        /// <param name="items">The items.</param>
        public void AddItems(IEnumerable<ITableEntity> items)
        {
            Guard.That(() => items).IsNotNull();

            foreach (var item in items)
            {
                Add(item);
            }
        }

        /// <summary>
        ///     Executes the batch writing asynchronously.
        /// </summary>
        /// <returns>A <see cref="Task" /> value.</returns>
        public async Task ExecuteAsync()
        {
            // Check if there is a final batch that has not been actioned yet
            if (Count > 0)
            {
                Debug.WriteLine("Writing final batch of {0} entries to table storage.", Count);

                WriteBatch();
            }

            if (_batchTasks.Count == 0)
            {
                return;
            }

            await Task.WhenAll(_batchTasks).ConfigureAwait(false);

            // Clean up resources
            _batchTasks.Clear();
            _partitionRowKeys = new List<string>();
            _currentOperation = new TableBatchOperation();
        }

        private void WriteBatch()
        {
            var task = _table.ExecuteBatchAsync(_currentOperation);

            _batchTasks.Add(task);

            _currentOperation = new TableBatchOperation();
        }

        /// <summary>
        ///     Gets the count.
        /// </summary>
        /// <value>
        ///     The count.
        /// </value>
        public int Count
        {
            get
            {
                return _currentOperation.Count;
            }
        }
    }
}

With this class you can add as many entities as you like and then await ExecuteAsync to finish off the work. The only issue this class doesn’t cover is a RowKey conflict that happens to fall across batches. Not much you can do about that, though.
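Usage then collapses to a few lines. This sketch assumes a CloudTable and a set of entities (or entity adapters) resolved elsewhere:

```csharp
// Hypothetical usage; 'table' and 'entities' come from elsewhere
var writer = new TableBatchWriter(table);

// Sorting by partition key matters: the writer flushes a batch every time
// it sees the partition key change, so unsorted input produces many
// undersized batches and loses most of the transaction cost saving
writer.AddItems(entities.OrderBy(x => x.PartitionKey));

await writer.ExecuteAsync().ConfigureAwait(false);
```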

Rory Primrose