Azure Table Storage Adapter Using Reserved Properties

I posted earlier this year about an adapter class to be the bridge between ITableEntity and a domain model class when using Azure table storage. I hit a problem with this today when I was dealing with a model class that had a Timestamp property.

While the adapter class is intended to encapsulate ITableEntity to prevent it leaking from the data layer, this particular model actually wanted to expose the Timestamp value from ITableEntity. This didn’t go down too well.

Microsoft.WindowsAzure.Storage.StorageException: An incompatible primitive type 'Edm.String[Nullable=True]' was found for an item that was expected to be of type 'Edm.DateTime[Nullable=False]'. ---> Microsoft.Data.OData.ODataException: An incompatible primitive type 'Edm.String[Nullable=True]' was found for an item that was expected to be of type 'Edm.DateTime[Nullable=False]'

The simple fix to the adapter class is to filter out ITableEntity properties from the custom property mapping.

private static List<PropertyInfo> ResolvePropertyMappings(
    T value,
    IDictionary<string, EntityProperty> properties)
{
    var objectProperties = value.GetType().GetProperties();
    var infrastructureProperties = typeof(ITableEntity).GetProperties();

    // Find the model properties that were not already mapped by table storage
    var missingProperties =
        objectProperties.Where(objectProperty => properties.ContainsKey(objectProperty.Name) == false);

    // Skip any properties that belong to ITableEntity (such as Timestamp) so they are not mapped twice
    var additionalProperties =
        missingProperties.Where(x => infrastructureProperties.Any(y => x.Name == y.Name) == false);

    return additionalProperties.ToList();
}

This makes sure that the Timestamp property is not included in the properties written to table storage, which is what was causing the StorageException. This still leaves the model returned from table storage without the Timestamp property assigned. The adapter class implementation can fix this with the following code.

/// <inheritdoc />
protected override void ReadValues(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
{
    base.ReadValues(properties, operationContext);

    // Write the timestamp property into the same property on the entity
    Value.Timestamp = Timestamp;
}

Now we are up and running again.

Azure Table Services Unexpected response code for operation

I’ve just hit a StorageException with Azure Table Services that does not occur in the local emulator.

Unexpected response code for operation : 5

The only hit on the net for this error is here. That post indicates that invalid characters are in either the PartitionKey or RowKey values. I know that this is not the case for my data set. It turns out this failure also occurs for invalid data in the fields. In my scenario a null value was pushed into a DateTime property, leaving it at the default of DateTime.MinValue (0001-01-01). That is below the minimum date supported by ATS (1601-01-01 UTC), so the write is rejected with this error.
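
A minimal sketch of a guard for this, assuming you want to fail fast before writing the entity (the 1601-01-01 floor is the documented table storage minimum; the method and parameter names are examples only):

private static readonly DateTime MinimumAtsDateTime = new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc);

private static void GuardDateTimeForAts(DateTime value, string propertyName)
{
    // DateTime.MinValue (the default for an unassigned DateTime) is well below this floor
    if (value < MinimumAtsDateTime)
    {
        throw new ArgumentOutOfRangeException(
            propertyName,
            "The value is earlier than the minimum DateTime supported by Azure Table Storage.");
    }
}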

Using the EntityAdapter for Azure Table Storage

I got a request for an example of how to use the EntityAdapter class I previously posted about. Here is an example of a PersonAdapter.

public enum Gender
{
    Unspecified = 0,

    Female,

    Male
}

public class Person
{
    public string Email
    {
        get;
        set;
    }

    public string FirstName
    {
        get;
        set;
    }

    public Gender Gender
    {
        get;
        set;
    }

    public string LastName
    {
        get;
        set;
    }
}

public class PersonAdapter : EntityAdapter<Person>
{
    public PersonAdapter()
    {
    }

    public PersonAdapter(Person person) : base(person)
    {
    }

    public static string BuildPartitionKey(string email)
    {
        var index = email.IndexOf("@");

        return email.Substring(index);
    }

    public static string BuildRowKey(string email)
    {
        var index = email.IndexOf("@");

        return email.Substring(0, index);
    }

    protected override string BuildPartitionKey()
    {
        return BuildPartitionKey(Value.Email);
    }

    protected override string BuildRowKey()
    {
        return BuildRowKey(Value.Email);
    }
}

This adapter can be used to read and write entities to ATS like the following.

public async Task<IEnumerable<Person>> ReadDomainUsersAsync(string domain)
{
    var storageAccount = CloudStorageAccount.Parse("YourConnectionString");

    // Create the table client
    var client = storageAccount.CreateCloudTableClient();

    var table = client.GetTableReference("People");

    var tableExists = await table.ExistsAsync().ConfigureAwait(false);

    if (tableExists == false)
    {
        // No items could possibly be returned
        return new List<Person>();
    }

    var partitionKey = PersonAdapter.BuildPartitionKey(domain);
    var partitionKeyFilter = TableQuery.GenerateFilterCondition(
        "PartitionKey",
        QueryComparisons.Equal,
        partitionKey);

    var query = new TableQuery<PersonAdapter>().Where(partitionKeyFilter);

    var results = table.ExecuteQuery(query);

    if (results == null)
    {
        return new List<Person>();
    }

    return results.Select(x => x.Value);
}

public async Task WritePersonAsync(Person person)
{
    var storageAccount = CloudStorageAccount.Parse("YourConnectionString");

    // Create the table client
    var client = storageAccount.CreateCloudTableClient();

    var table = client.GetTableReference("People");

    await table.CreateIfNotExistsAsync().ConfigureAwait(false);

    var adapter = new PersonAdapter(person);
    var operation = TableOperation.InsertOrReplace(adapter);

    await table.ExecuteAsync(operation).ConfigureAwait(false);
}

Hope this helps.

Azure EntityAdapter with unsupported table types

I recently posted about an EntityAdapter class that can be the bridge between an ITableEntity that Azure table services requires and a domain model class that you actually want to use. I found an issue with this implementation: the TableEntity.ReadUserObject and TableEntity.WriteUserObject methods that the EntityAdapter relies on will only map properties of types that are intrinsically supported by ATS. This means your domain model will end up with default values for properties that are not String, Binary, Boolean, DateTime, DateTimeOffset, Double, Guid, Int32 or Int64.

I hit this issue because I started working with a model class that exposes an enum property. The integration tests failed because the read of the entity using the adapter returned the default enum value for the property rather than the one I attempted to write to the table. I have updated the EntityAdapter class to cater for this by using reflection and type converters to fill in the gaps.

The class now looks like the following:

namespace MySystem.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    using System.Reflection;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;

    /// <summary>
    ///     The <see cref="EntityAdapter{T}" />
    ///     class provides the base adapter implementation for reading and writing a POCO class with Azure Table Storage.
    /// </summary>
    /// <typeparam name="T">
    ///     The type of value.
    /// </typeparam>
    [CLSCompliant(false)]
    public abstract class EntityAdapter<T> : ITableEntity where T : class, new()
    {
        /// <summary>
        ///     The synchronization lock.
        /// </summary>
        /// <remarks>A dictionary is not required here because the static will have a different value for each generic type.</remarks>
        private static readonly Object _syncLock = new Object();

        /// <summary>
        ///     The additional properties to map for types.
        /// </summary>
        /// <remarks>A dictionary is not required here because the static will have a different value for each generic type.</remarks>
        private static List<PropertyInfo> _additionalProperties;

        /// <summary>
        ///     The partition key
        /// </summary>
        private string _partitionKey;

        /// <summary>
        ///     The row key
        /// </summary>
        private string _rowKey;

        /// <summary>
        ///     The entity value.
        /// </summary>
        private T _value;

        /// <summary>
        ///     Initializes a new instance of the <see cref="EntityAdapter{T}" /> class.
        /// </summary>
        protected EntityAdapter()
        {
        }

        /// <summary>
        ///     Initializes a new instance of the <see cref="EntityAdapter{T}" /> class.
        /// </summary>
        /// <param name="value">
        ///     The value.
        /// </param>
        protected EntityAdapter(T value)
        {
            Guard.That(value, "value").IsNotNull();

            _value = value;
        }

        /// <inheritdoc />
        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            _value = new T();

            TableEntity.ReadUserObject(Value, properties, operationContext);

            var additionalMappings = GetAdditionalPropertyMappings(Value, properties);

            if (additionalMappings.Count > 0)
            {
                // Populate the properties missing from ReadUserObject
                foreach (var additionalMapping in additionalMappings)
                {
                    if (properties.ContainsKey(additionalMapping.Name) == false)
                    {
                        // We will let the object assign its default value for that property
                        continue;
                    }

                    var propertyValue = properties[additionalMapping.Name];
                    var converter = TypeDescriptor.GetConverter(additionalMapping.PropertyType);
                    var convertedValue = converter.ConvertFromInvariantString(propertyValue.StringValue);

                    additionalMapping.SetValue(Value, convertedValue);
                }
            }

            ReadValues(properties, operationContext);
        }

        /// <inheritdoc />
        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            var properties = TableEntity.WriteUserObject(Value, operationContext);

            var additionalMappings = GetAdditionalPropertyMappings(Value, properties);

            if (additionalMappings.Count > 0)
            {
                // Populate the properties missing from WriteUserObject
                foreach (var additionalMapping in additionalMappings)
                {
                    var propertyValue = additionalMapping.GetValue(Value);
                    var converter = TypeDescriptor.GetConverter(additionalMapping.PropertyType);
                    var convertedValue = converter.ConvertToInvariantString(propertyValue);

                    properties[additionalMapping.Name] = EntityProperty.GeneratePropertyForString(convertedValue);
                }
            }

            WriteValues(properties, operationContext);

            return properties;
        }

        /// <summary>
        ///     Builds the entity partition key.
        /// </summary>
        /// <returns>
        ///     The partition key of the entity.
        /// </returns>
        protected abstract string BuildPartitionKey();

        /// <summary>
        ///     Builds the entity row key.
        /// </summary>
        /// <returns>
        ///     The <see cref="string" />.
        /// </returns>
        protected abstract string BuildRowKey();

        /// <summary>
        ///     Reads the values from the specified properties.
        /// </summary>
        /// <param name="properties">
        ///     The properties of the entity.
        /// </param>
        /// <param name="operationContext">
        ///     The operation context.
        /// </param>
        protected virtual void ReadValues(
            IDictionary<string, EntityProperty> properties,
            OperationContext operationContext)
        {
        }

        /// <summary>
        ///     Writes the entity values to the specified properties.
        /// </summary>
        /// <param name="properties">
        ///     The properties.
        /// </param>
        /// <param name="operationContext">
        ///     The operation context.
        /// </param>
        protected virtual void WriteValues(
            IDictionary<string, EntityProperty> properties,
            OperationContext operationContext)
        {
        }

        /// <summary>
        ///     Gets the additional property mappings.
        /// </summary>
        /// <param name="value">The value.</param>
        /// <param name="properties">The mapped properties.</param>
        /// <returns>
        ///     The additional property mappings.
        /// </returns>
        private static List<PropertyInfo> GetAdditionalPropertyMappings(
            T value,
            IDictionary<string, EntityProperty> properties)
        {
            if (_additionalProperties != null)
            {
                return _additionalProperties;
            }

            List<PropertyInfo> additionalProperties;

            lock (_syncLock)
            {
                // Check the mappings again to protect against race conditions on the lock
                if (_additionalProperties != null)
                {
                    return _additionalProperties;
                }

                additionalProperties = ResolvePropertyMappings(value, properties);

                _additionalProperties = additionalProperties;
            }

            return additionalProperties;
        }

        /// <summary>
        ///     Resolves the additional property mappings.
        /// </summary>
        /// <param name="value">The value.</param>
        /// <param name="properties">The properties.</param>
        /// <returns>The additional properties.</returns>
        private static List<PropertyInfo> ResolvePropertyMappings(
            T value,
            IDictionary<string, EntityProperty> properties)
        {
            var objectProperties = value.GetType().GetProperties();

            return
                objectProperties.Where(objectProperty => properties.ContainsKey(objectProperty.Name) == false).ToList();
        }

        /// <inheritdoc />
        public string ETag
        {
            get;
            set;
        }

        /// <inheritdoc />
        public string PartitionKey
        {
            get
            {
                if (_partitionKey == null)
                {
                    _partitionKey = BuildPartitionKey();
                }

                return _partitionKey;
            }

            set
            {
                _partitionKey = value;
            }
        }

        /// <inheritdoc />
        public string RowKey
        {
            get
            {
                if (_rowKey == null)
                {
                    _rowKey = BuildRowKey();
                }

                return _rowKey;
            }

            set
            {
                _rowKey = value;
            }
        }

        /// <inheritdoc />
        public DateTimeOffset Timestamp
        {
            get;
            set;
        }

        /// <summary>
        ///     Gets the value managed by the adapter.
        /// </summary>
        /// <value>
        ///     The value.
        /// </value>
        public T Value
        {
            get
            {
                return _value;
            }
        }
    }
}

Code check in procedure

I’ve been running this check in procedure for several years with my development teams. The intention here is for developers to get their code into an acceptable state before submitting it to source control. It attempts to avoid some classic bad habits around source control, such as:

  • Checking in changes just because it is the end of the day
  • Missing changeset comments
  • Using the build system as the point of compile/quality validation
  • Big bang changesets
  • Cross-purpose changesets

Changeset Contents

A changeset needs to contain a single, related set of changes. It should not include changes or functionality from unrelated pieces of work, as that makes reviewing changesets and tracking work very difficult. If you do need to work on something unrelated, shelve the prior work (undoing local changes) and start working on the new piece of work. Once a piece of work is checked in according to the procedure below, the previous shelveset can be brought back down to your local workspace and you can continue to work on it.
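
As a rough sketch of that shelve/unshelve workflow, assuming the tf.exe command line is available (the shelveset name is just an example):

rem Shelve the in-progress work and revert the local changes so the workspace is clean
tf shelve "InProgress-Feature1234" /move

rem ... complete and check in the unrelated piece of work ...

rem Bring the shelved work back down and continue where you left off
tf unshelve "InProgress-Feature1234"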

Check In Procedure

The following set of actions must be taken in order to check in changes to source control.

  1. Pre-check-in
    • Code is functioning correctly and to spec.
    • All code comments are correct and well formatted
    • Code has been cleaned up and is consistent to team standards
  2. Run get latest on the solution
    • Fix any merge issues
  3. Undo any files that haven't changed - see Quick tip for undoing unchanged TFS checkouts
  4. Switch to Release build
  5. Rebuild solution (not just build)
    • Fix any compilation errors
    • Fix any compilation warnings that can be addressed
  6. Deploy database projects to local machine as required
  7. Run all tests
    • They must all pass
  8. Write a comment that describes the changeset
  9. Assign a work item to the changeset
  10. Raise a code review request if the changeset contains code changes
    • Minor changesets that do not change code or have any functional change do not require a review
  11. Verify that no other check-ins have occurred since doing step 1
  12. Check in
  13. Wait for build to complete (you can do other work during this process)
    • Verify build successful or investigate any failures

Using WinMerge with VS2013

I’ve finally gotten around to adding some reg files for using WinMerge with VS2013. You can download them from the bottom of my Using WinMerge with TFS post. These reg files will configure VS2013 to use WinMerge for TFS diff/merge operations (no Visual Studio restart is required).

Entity Adapter for Azure Table Storage

When working with Azure Table Storage you will ultimately have to deal with ITableEntity. My solution to date has been to create a class that derives from my model class and then implement ITableEntity. This derived class can then provide the plumbing for table storage while allowing the layer to return the correct model type.

The problem here is that ITableEntity is still leaking outside of the Azure DAL even though it is represented as the expected type. While I don’t like my classes leaking knowledge inappropriately to higher layers, I also don’t like plumbing logic that converts between two model classes that are logically the same (although tools like AutoMapper do take some of this pain away).

Using an entity adapter is a really clean way to have your cake and eat it too. The original code for this concept was posted by the Windows Azure Storage Team (you can read it here). I’ve taken that code and tweaked it slightly to make it a little more reusable.

namespace MyProject.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;

    internal abstract class EntityAdapter<T> : ITableEntity where T : class, new()
    {
        private string _partitionKey;

        private string _rowKey;

        private T _value;

        protected EntityAdapter()
        {
        }

        protected EntityAdapter(T value)
        {
            Guard.That(value, "value").IsNotNull();

            _value = value;
        }

        /// <inheritdoc />
        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            _value = new T();

            TableEntity.ReadUserObject(_value, properties, operationContext);

            ReadValues(properties, operationContext);
        }

        /// <inheritdoc />
        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            var properties = TableEntity.WriteUserObject(Value, operationContext);

            WriteValues(properties, operationContext);

            return properties;
        }

        protected abstract string BuildPartitionKey();

        protected abstract string BuildRowKey();

        protected virtual void ReadValues(
            IDictionary<string, EntityProperty> properties, 
            OperationContext operationContext)
        {
        }

        protected virtual void WriteValues(
            IDictionary<string, EntityProperty> properties, 
            OperationContext operationContext)
        {
        }

        /// <inheritdoc />
        public string ETag
        {
            get;
            set;
        }

        /// <inheritdoc />
        public string PartitionKey
        {
            get
            {
                if (_partitionKey == null)
                {
                    _partitionKey = BuildPartitionKey();
                }

                return _partitionKey;
            }

            set
            {
                _partitionKey = value;
            }
        }

        /// <inheritdoc />
        public string RowKey
        {
            get
            {
                if (_rowKey == null)
                {
                    _rowKey = BuildRowKey();
                }

                return _rowKey;
            }

            set
            {
                _rowKey = value;
            }
        }

        /// <inheritdoc />
        public DateTimeOffset Timestamp
        {
            get;
            set;
        }

        public T Value
        {
            get
            {
                return _value;
            }
        }
    }
}

This class has the flexibility to build a partition and row key for simple adapter usage and then be extended to override ReadValues and WriteValues to store additional metadata with your value for more complex scenarios. To write your value to table storage you simply wrap it in a new instance of your adapter which will pass the value down to the appropriate base constructor. Reading the entity from table storage will then select the Value property on the way back out.

This method allows for the adapter to be an internal bridge between your model class and table storage. The type being returned from the DAL is now POCO while table storage has an ITableEntity that it can use.
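
As an illustration of those extension points, here is a hypothetical adapter that stores an extra piece of metadata alongside the value. The Order model, its CustomerId and OrderId properties and the ImportBatchId metadata are assumptions for the example only, not part of the original adapter.

internal class OrderAdapter : EntityAdapter<Order>
{
    public OrderAdapter()
    {
    }

    public OrderAdapter(Order order) : base(order)
    {
    }

    // Hypothetical metadata stored with the table entity but not exposed on the Order model
    public string ImportBatchId
    {
        get;
        set;
    }

    protected override string BuildPartitionKey()
    {
        return Value.CustomerId;
    }

    protected override string BuildRowKey()
    {
        return Value.OrderId;
    }

    protected override void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        // Pull the additional metadata back out of the table properties if it was stored
        if (properties.ContainsKey("ImportBatchId"))
        {
            ImportBatchId = properties["ImportBatchId"].StringValue;
        }
    }

    protected override void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        // Store the additional metadata alongside the mapped model properties
        properties["ImportBatchId"] = EntityProperty.GeneratePropertyForString(ImportBatchId);
    }
}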

Writing batches to Azure Table Storage

Writing records to Azure Table Storage in batches is handy when you are writing a lot of records because it reduces the transaction cost. There are restrictions however. The batch must:

  • Be no more than 100 records
  • Have the same partition key
  • Have unique row keys

Writing batches is easy, even adhering to the above rules. The problem, however, is that it can result in a lot of boilerplate code. I created a batch writer class to abstract this logic away.

namespace MyProject.Server.DataAccess.Azure
{
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Globalization;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage.Table;
    using Seterlund.CodeGuard;
    using MyProject.Server.DataAccess.Azure.Properties;

    /// <summary>
    ///     The <see cref="TableBatchWriter" />
    ///     class manages the process of writing a batch of entities to a <see cref="TableBatchOperation" /> instance.
    /// </summary>
    [CLSCompliant(false)]
    public class TableBatchWriter
    {
        /// <summary>
        ///     The maximum ATS table batch size.
        /// </summary>
        private const int MaxAtsTableBatchSize = 100;

        /// <summary>
        ///     The batch tasks.
        /// </summary>
        private readonly List<Task> _batchTasks;

        /// <summary>
        ///     The table to write the batch to.
        /// </summary>
        private readonly CloudTable _table;

        /// <summary>
        ///     The current operation.
        /// </summary>
        private TableBatchOperation _currentOperation;

        /// <summary>
        ///     The partition key for the current batch.
        /// </summary>
        private string _currentPartitionKey;

        /// <summary>
        ///     The row keys for the current partition key.
        /// </summary>
        private List<string> _partitionRowKeys;

        /// <summary>
        ///     The total items written to the table.
        /// </summary>
        private int _totalItems;

        /// <summary>
        ///     Initializes a new instance of the <see cref="TableBatchWriter" /> class.
        /// </summary>
        /// <param name="table">The table to write the batches to.</param>
        public TableBatchWriter(CloudTable table)
        {
            Guard.That(() => table).IsNotNull();

            _table = table;

            _batchTasks = new List<Task>();
            _partitionRowKeys = new List<string>();
            _currentOperation = new TableBatchOperation();
        }

        /// <summary>
        ///     Adds the specified entity.
        /// </summary>
        /// <param name="entity">The entity.</param>
        /// <exception cref="System.InvalidOperationException">The entity has a row key conflict in the current batch.</exception>
        public void Add(ITableEntity entity)
        {
            Guard.That(() => entity).IsNotNull();

            if (Count == 0)
            {
                // This is the first entry
                _currentPartitionKey = entity.PartitionKey;
            }
            else if (entity.PartitionKey != _currentPartitionKey)
            {
                Debug.WriteLine(
                    "PartitionKey changed from '{0}' to '{1}' at index {2}. Writing batch of {3} items to table storage.",
                    _currentPartitionKey,
                    entity.PartitionKey,
                    _totalItems - 1,
                    Count);

                WriteBatch();

                _partitionRowKeys = new List<string>();
                _currentPartitionKey = entity.PartitionKey;
            }
            else if (_partitionRowKeys.Contains(entity.RowKey))
            {
                // There are existing items in the batch and we haven't changed partition key
                var message = string.Format(
                    CultureInfo.CurrentCulture,
                    Resources.TableBatchWriter_RowKeyConflict,
                    _currentPartitionKey,
                    entity.RowKey);

                throw new InvalidOperationException(message);
            }

            _partitionRowKeys.Add(entity.RowKey);
            _currentOperation.InsertOrReplace(entity);
            _totalItems++;

            if (Count == MaxAtsTableBatchSize)
            {
                Debug.WriteLine(
                    "Batch count of {0} has been reached at index {1}. Writing batch to table storage.",
                    MaxAtsTableBatchSize,
                    _totalItems - 1);

                WriteBatch();
            }
        }

        /// <summary>
        ///     Adds the items.
        /// </summary>
        /// <param name="items">The items.</param>
        public void AddItems(IEnumerable<ITableEntity> items)
        {
            Guard.That(() => items).IsNotNull();

            foreach (var item in items)
            {
                Add(item);
            }
        }

        /// <summary>
        ///     Executes the batch writing asynchronously.
        /// </summary>
        /// <returns>A <see cref="Task" /> value.</returns>
        public async Task ExecuteAsync()
        {
            // Check if there is a final batch that has not been actioned yet
            if (Count > 0)
            {
                Debug.WriteLine("Writing final batch of {0} entries to table storage.", Count);

                WriteBatch();
            }

            if (_batchTasks.Count == 0)
            {
                return;
            }

            await Task.WhenAll(_batchTasks).ConfigureAwait(false);

            // Clean up resources
            _batchTasks.Clear();
            _partitionRowKeys = new List<string>();
            _currentOperation = new TableBatchOperation();
        }

        /// <summary>
        ///     Starts the asynchronous execution of the current batch and begins a new batch.
        /// </summary>
        private void WriteBatch()
        {
            var task = _table.ExecuteBatchAsync(_currentOperation);

            _batchTasks.Add(task);

            _currentOperation = new TableBatchOperation();
        }

        /// <summary>
        ///     Gets the count.
        /// </summary>
        /// <value>
        ///     The count.
        /// </value>
        public int Count
        {
            get
            {
                return _currentOperation.Count;
            }
        }
    }
}

With this class you can add as many entities as you like and then wait on ExecuteAsync to finish off the work. The only issue that this class doesn’t cover is where you have a RowKey conflict that happens to fall across batches. Not much you can do about that though.
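
A rough usage sketch, reusing the PersonAdapter and table name from the earlier post (the connection string and the ordering by partition key are assumptions for the example):

public async Task WritePeopleAsync(IEnumerable<Person> people)
{
    var storageAccount = CloudStorageAccount.Parse("YourConnectionString");
    var client = storageAccount.CreateCloudTableClient();
    var table = client.GetTableReference("People");

    await table.CreateIfNotExistsAsync().ConfigureAwait(false);

    var writer = new TableBatchWriter(table);

    // Ordering by partition key avoids flushing lots of small batches when the key changes
    var adapters = people.Select(x => new PersonAdapter(x)).OrderBy(x => x.PartitionKey);

    writer.AddItems(adapters);

    await writer.ExecuteAsync().ConfigureAwait(false);
}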

VS2013 project templates fail to build on TF Service

My upgrade pain with VS2013 and the Azure SDK 2.2 continues. Hosted build now fails with the following error:

The task factory "CodeTaskFactory" could not be loaded from the assembly "C:\Program Files (x86)\MSBuild\12.0\bin\amd64\Microsoft.Build.Tasks.v4.0.dll". Could not load file or assembly 'file:///C:\Program Files (x86)\MSBuild\12.0\bin\amd64\Microsoft.Build.Tasks.v4.0.dll' or one of its dependencies. The system cannot find the file specified.

While my Polish is non-existent, the answer can be found at http://www.benedykt.net/2013/10/10/the-task-factory-codetaskfactory-could-not-be-loaded-from-the-assembly/. The project templates for the web and worker role projects use ToolsVersion="12.0". This needs to be changed to ToolsVersion="4.0" for the hosted build to succeed.
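
For reference, the attribute sits on the root Project element of the project files generated by the template, so the change is along these lines (the other attributes shown are typical but may differ in your project):

<!-- Generated by the VS2013 template -->
<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<!-- Change required for the hosted build to succeed -->
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">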

VS2013 won’t start debugging

I hit this one a couple of days ago and it had me scratching my head for a while.

The debugger cannot continue running the process. Unable to start debugging.

I thought it was an issue with the tooling, perhaps something I had uninstalled or installed. I had installed VS2013 with Azure SDK 2.1, then updated to 2.2 when it came out, but I had also uninstalled some packages related to VS2010, which I had used for years.

Turns out that this error presents itself when the solution doesn’t have something to debug. The message is a little misleading though.

My solution starts multiple projects on F5. One project is an Azure cloud project with web and worker roles (debugger attached) while the other is a local STS website (no debugger attached), all of which run in IIS Express. This error popped up when there were either no projects set to run, or when the STS project was set to launch without the debugger and the cloud project was set to None for the multiple project start. Either of these cases causes VS not to debug because there is nothing configured for it to attach to.

Modifying application configuration depending on build type

I want to change some values in my application config file at build time depending on the build type (Debug or Release for example). Web deployment projects added this functionality, but it is tightly coupled to that project type. I want to use this functionality in any .Net project type.

Is there a way to do this?
