Executing build tasks without a build server – Design

At work we run TFS2010 with build controllers, build agents and lab management. I have customised the build workflow so that we can get a lot more functionality out of the automated build process. Some of these customisations are things like:

  • Finding the product version number
  • Incrementing the product version number
  • Updating the build name with the new product version
  • Updating files prior to compilation
  • Updating files after compilation
  • Building Sandcastle documentation
  • Deploying MSIs to remote servers
  • Filtering drop folder output (no source, just build output like configuration files, documentation and MSIs)

I use CodePlex at home for my personal projects. This means that I don’t get all the goodness that comes with TFSBuild in 2010. However, I still want the automatic version management functionality listed above. I have the following functionality requirements to make this happen:

  • Determine the current product version number
  • Increment the product version number
    • Must be done prior to any project compilation
  • Sync WiX project versioning with product version number
    • Needs to happen before WiX project compilation
    • Needs to cater for WiX variables being used for version information
  • Allow TFS checkout for files under source control
    • This is important so that incremented version numbers continue to increment from previous version under source control
    • Must also cater for when solution is loaded without TFS availability (broken source control bindings etc)
  • Push product/WiX version into WiX output name
  • Failure to execute these actions successfully will fail the solution/project build
  • No installation required
  • No configuration required
    • All customisation of tasks is done using command line arguments
  • Extensible using MEF
    • Again, no configuration required to add new tasks

The next post will outline how I was able to make this happen using an extensible MEF task based application.
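As a rough sketch of where this is heading, the MEF extensibility could be based on a simple task contract that the host composes at startup. The interface and class names here are hypothetical rather than the actual implementation:

```csharp
namespace Neovolve.BuildTaskExecutor
{
    using System;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    // Hypothetical task contract; each task gets its customisation
    // from command line arguments rather than configuration
    public interface ITask
    {
        String Name
        {
            get;
        }

        void Execute(IEnumerable<String> arguments);
    }

    public class TaskRunner
    {
        [ImportMany]
        public IEnumerable<ITask> Tasks
        {
            get;
            set;
        }

        public void Compose()
        {
            // Discover tasks in the executing assembly; swapping in a
            // DirectoryCatalog would pick up new task assemblies with
            // no configuration changes
            AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());

            using (CompositionContainer container = new CompositionContainer(catalog))
            {
                container.ComposeParts(this);
            }
        }
    }
}
```

New tasks would satisfy the "no configuration" requirement simply by being decorated with an Export attribute and dropped next to the executable.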

WPF and the link that just won’t click

I’ve been playing with WPF over the last month. It has been great to finally work on a program that is well suited to this technology. One of the implementation details that has surprised me is that the Hyperlink control doesn’t do anything when you click the link.

I can only guess at the design reason behind this. The only thing the control seems to do is fire off a RequestNavigate event. Every usage of this control then needs to manually handle this event to launch the navigation URI. This is obviously duplicated effort, as each usage of the hyperlink control will execute the same logic to achieve the same outcome.

I have put together the following custom control for my project to suit my purposes.

namespace Neovolve.Switch.Controls
{
    using System;
    using System.Diagnostics;
    using System.Windows.Documents;

    public class ClickableLink : Hyperlink
    {
        protected override void OnClick()
        {
            base.OnClick();

            Uri navigateUri = ResolveAddressValue(NavigateUri);

            if (navigateUri == null)
            {
                return;
            }

            String address = navigateUri.ToString();

            ProcessStartInfo startInfo = new ProcessStartInfo(address);
            
            Process.Start(startInfo);
        }
        
        private static Uri ResolveAddressValue(Uri navigateUri)
        {
            if (navigateUri == null)
            {
                return null;
            }

            // Disallow file urls
            if (navigateUri.IsFile)
            {
                return null;
            }

            if (navigateUri.IsUnc)
            {
                return null;
            }

            String address = navigateUri.ToString();

            if (String.IsNullOrWhiteSpace(address))
            {
                return null;
            }

            if (address.Contains("@") && address.StartsWith("mailto:", StringComparison.OrdinalIgnoreCase) == false)
            {
                address = "mailto:" + address;
            }
            else if (address.StartsWith("http://", StringComparison.OrdinalIgnoreCase) == false &&
                     address.StartsWith("https://", StringComparison.OrdinalIgnoreCase) == false)
            {
                address = "http://" + address;
            }

            try
            {
                return new Uri(address);
            }
            catch (UriFormatException)
            {
                return null;
            }
        }
    }
}

Neovolve ReSharper Plugins 2.0 released

It has been a few years since I last updated my ReSharper plugin project, which was originally released three years ago. The reason for the project was that ReSharper natively converts CLR types (System.Int32) to their alias (int). StyleCop also adds this support in its ReSharper plugin. Unfortunately, nothing in the marketplace provides a conversion from an alias type back to its CLR type.

I always prefer the CLR types over the alias type names. This allows me to easily identify what is a C# keyword and what is a type reference.

The original 1.0 (and subsequent 1.1) version provided the ability to switch between C# alias types and their CLR type equivalents using ReSharper’s code cleanup profile support. This version adds code inspection and QuickFix support. It also adds better support for type conversions in xmldoc comments.

Code Cleanup

The code cleanup profile settings for this plugin are now found under the Neovolve category. This is the only change to the settings used for code cleanup.

image

Code Inspection

ReSharper code inspection allows plugins to notify you when your code does not match defined inspection rules. This release of the plugin adds code inspection to detect when a value type is written in a way that is not desired according to your ReSharper settings.

The code inspection settings for this plugin can be found under the Neovolve category.

image

Code highlights will then notify you when your code does not match the configured rules. For example, the settings in the above screenshot identify that type alias definitions show up as a suggestion to be converted to their CLR types. This can be seen below.

image

QuickFix

ReSharper has had quick fix support for several years. This version now supports QuickFix bulbs based on the information identified by the code inspection rules. Giving focus to the suggestion will then allow for a quick fix action using Alt + Enter.

image

This version adds better support for type identification in xmldoc comments. Code inspection and QuickFix support also come into play here as well.

image

You can grab this release from the CodePlex project here.

Tip for building a private domain controller for Lab Management with Network Isolation

There are obvious benefits to using Lab Management for testing your software. It is a fantastic environment for test teams to test the software written by a development team.

The requirement I had for my test labs was to use domain controlled security within the lab, as this is what is used in production. I also did not want any impact on the development or production domains. The solution is to use a domain controller (DC) within the lab environment rather than referencing the domain hosting the lab environment.

Having a test DC means that it needs to be isolated from the hosting network. This avoids AD, DNS and DHCP conflicts between the development and test networks. Lab management can be configured for network isolation to get around this problem. This means that the private DC will have a network connection that is private to the lab, while all the other machines in the lab will have one NIC for the private lab network and a second NIC for access out to the hosting environment. This setup can be seen in the SCVMM network diagram below with the machine at the top of the diagram being the private DC.

image

The problem I had for several weeks was that the private DC lost its Windows activation when it was stored into the VMM library for deployment out to a lab environment. You are restricted to phone activation in this case because once the stored VM is put into a lab with network isolation there is no internet support for automatic activation on the DC. This then needs to be done every time you deploy a lab environment.  

I followed the MSDN article steps that describe how to create a private DC for labs but there was nothing specific about how to handle this scenario. The step in question is at the bottom of the article where it says:

6. Shut down the virtual machine, and store it in the SCVMM library.

    a. Do not turn off the Active Directory VM. You have to shut it down correctly.

    b. Do not generalize the Active Directory VM, either by running Sysprep or by storing the virtual machine as a template in SCVMM.

I followed this step to the letter and stored my DC in the VMM library for use in labs. This was the step that caused the VM to lose its Windows activation. I happened to stumble across the solution to this problem as I had to rebuild the test DC yesterday. The answer is to clone the DC rather than store it.

image

The Clone wizard provides the option of where to place the clone VM. You want to select Store the virtual machine in the library.

image

The private DC can be deployed out to a lab environment now that it is stored in the library. The Windows activation is retained using this technique so the private DC should be ready for immediate use in the lab.

Beware of lifetime manager policy locks in Unity

I have created a caching dependency injection slice in order to squeeze more performance from the DAL in a workflow service. What I found was that the service always hit a timeout when the caching slice was put into the Unity configuration. I spent half a day working with AppFabric monitoring, event logs and all the information I could get out of diagnostic tracing for WCF, WF, WIF and custom sources. After failing to get any answers through those avenues, and after futile debugging efforts, I realised that I could profile the service to see where the hold-up was.

The profiler results told me exactly where to look as soon as I hit the service and got a timeout exception.

image

All the time is getting consumed in a single call to Microsoft.Practices.Unity.SynchronizedLifetimeManager.GetValue(). The first idea that comes to mind is that there is a lock on an object that is not being released. Reflector proves that this is exactly the case.

image

The GetValue method obtains a lock for the current thread and only releases it if a non-null value is held by the lifetime manager. This logic becomes a big issue if the lifetime manager holds a null value and two different threads call GetValue. I would like to know why this behaviour exists, as it is clearly intentional according to the documentation of the function.
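A simplified model of this pattern shows why the second thread hangs. This is my own sketch to demonstrate the behaviour, not the actual Unity source:

```csharp
using System;
using System.Threading;

// Simplified model of the SynchronizedLifetimeManager.GetValue pattern:
// the lock is taken on every call but only released when a non-null
// value is found, otherwise it stays held by the calling thread
public class ModelLifetimeManager
{
    private readonly Object _lockObj = new Object();

    private Object _value;

    public Object GetValue()
    {
        Monitor.Enter(_lockObj);

        Object result = _value;

        if (result != null)
        {
            // The lock is only released on this path
            Monitor.Exit(_lockObj);
        }

        return result;
    }

    public void SetValue(Object newValue)
    {
        _value = newValue;

        // The real implementation relies on a later call releasing the lock
        if (Monitor.IsEntered(_lockObj))
        {
            Monitor.Exit(_lockObj);
        }
    }
}
```

With this model, a thread that calls GetValue while the value is null walks away still owning the lock, and any other thread calling GetValue then blocks indefinitely until a value is set.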

This is what is happening in my service. In the profiling above you can see that the lifetime manager is getting called from my Unity extension for disposing build trees. While the extension is not doing anything wrong, it can handle this scenario by using a timeout on obtaining a value from the lifetime manager.

private static Object GetLifetimePolicyValue(ILifetimePolicy lifetimeManager)
{
    if (lifetimeManager is IRequiresRecovery)
    {
        // There may be a lock around this policy where a null value will result in an indefinite lock held by another thread
        // We need to use another thread to access this item so that we can get around the lock using a timeout
        Task<Object> readPolicyTask = new Task<Object>(lifetimeManager.GetValue);

        readPolicyTask.Start();

        Boolean taskCompleted = readPolicyTask.Wait(10);

        if (taskCompleted == false)
        {
            return null;
        }

        return readPolicyTask.Result;
    }

    return lifetimeManager.GetValue();
}

This implementation is not ideal, but it is unfortunately the only way to handle this case as there is no way to determine whether another thread has a lock on the lifetime manager.

Testing the service again with the profiler then identified a problem with this workaround.

image

This workaround will consume threads that will be held on a lock and potentially never get released. This is going to be unacceptable as more and more threads attempt to look at values held in the lifetime manager policies, ultimately resulting in thread starvation.

The next solution is to use a lifetime manager that gets around this issue by never allowing the lifetime manager to be assigned a null value.

namespace Neovolve.Jabiru.Server.Services
{
    using System;
    using System.Diagnostics.Contracts;
    using Microsoft.Practices.Unity;

    public class SafeSingletonLifetimeManager : ContainerControlledLifetimeManager
    {
        public override void SetValue(Object newValue)
        {
            if (newValue == null)
            {
                throw new ArgumentNullException("newValue");
            }

            base.SetValue(newValue);
        }
    }
}

This idea fails to get around the locking issue when the lifetime manager is created but never has a value assigned. The next version of this SafeSingletonLifetimeManager solves this by managing its own locking logic around whether a non-null value has been assigned to the policy.

namespace Neovolve.Jabiru.Server.Services
{
    using System;
    using System.Threading;
    using Microsoft.Practices.Unity;
    using Neovolve.Toolkit.Threading;

    public class SafeSingletonLifetimeManager : ContainerControlledLifetimeManager
    {
        private readonly ReaderWriterLockSlim _syncLock = new ReaderWriterLockSlim();

        private Boolean _valueAssigned;

        public override Object GetValue()
        {
            using (new LockReader(_syncLock))
            {
                if (_valueAssigned == false)
                {
                    return null;
                }

                return base.GetValue();
            }
        }

        public override void SetValue(Object newValue)
        {
            using (new LockWriter(_syncLock))
            {
                _valueAssigned = newValue != null;

                base.SetValue(newValue);
            }
        }
    }
}

Using this policy now avoids the locking problem described in this post. I would still like to know the reason for the locking logic as this SafeSingletonLifetimeManager is completely circumventing that logic.

TFS and WF4: The diff noise problem

For a long time the most popular post on this site has been about how to configure Visual Studio to use WinMerge as the merge/diff tool for TFS rather than using the feature-poor out of the box software. Sometimes the nature of the files under development results in version differences that have a lot of noise regardless of the diff/merge tool that you use.

Unfortunately WF is one of the common offenders. I absolutely love WF, but am disappointed that designer state information is persisted with the workflow definition rather than in a user file that is merged in the IDE. The result is that the activity xaml file changes if you collapse a composite activity, such as the Sequence activity. The actual workflow definition has not changed, but it is a new version of the file as far as a diff tool and TFS are concerned.

For example, I have collapsed lots of activities on one of my workflows. The resulting diff using WinMerge looks like the following:

image

There is a lot of noise here. It is all designer state information rather than actual changes to the workflow definition. There are several culprits in WF4 that cause this noise.

  • sap:VirtualizedContainerService.HintSize
  • <x:Boolean x:Key="IsExpanded">
  • <x:Boolean x:Key="IsPinned">

Thankfully WinMerge has a great feature for applying a line filter expression (Tools –> Filters –> Linefilters). These can help to reduce a lot of this noise.

image

I have put together three expressions to cover WF4.

  • ^.*sap:VirtualizedContainerService\.HintSize="\d+,\d+".*$     filters sap:VirtualizedContainerService.HintSize when it is defined in an attribute
  • ^.*<sap:VirtualizedContainerService.HintSize>\d+,\d+</sap:VirtualizedContainerService.HintSize>.*$     filters sap:VirtualizedContainerService.HintSize when it is defined in an element
  • ^.*<x:Boolean x:Key="(IsExpanded|IsPinned)">.+</x:Boolean>.*$     filters <x:Boolean x:Key="IsExpanded"> and <x:Boolean x:Key="IsPinned"> elements
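These expressions can be sanity-checked with the .NET regular expression engine, since WinMerge line filters use standard regex syntax. The sample lines below are illustrative of typical WF4 designer noise rather than taken from a specific file:

```csharp
using System;
using System.Text.RegularExpressions;

public static class WorkflowNoiseFilter
{
    // The three line filter expressions described above
    private static readonly String[] Patterns =
    {
        @"^.*sap:VirtualizedContainerService\.HintSize=""\d+,\d+"".*$",
        @"^.*<sap:VirtualizedContainerService.HintSize>\d+,\d+</sap:VirtualizedContainerService.HintSize>.*$",
        @"^.*<x:Boolean x:Key=""(IsExpanded|IsPinned)"">.+</x:Boolean>.*$"
    };

    // Returns true when a xaml line is designer noise that a filter would suppress
    public static Boolean IsNoise(String line)
    {
        foreach (String pattern in Patterns)
        {
            if (Regex.IsMatch(line, pattern))
            {
                return true;
            }
        }

        return false;
    }
}
```

A line that carries actual workflow definition changes, such as a DisplayName edit, matches none of the expressions and so still shows up as a real difference.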

The above diff of workflow xaml with these filters applied now looks like the following.

image

There are several things to notice about the result of line filters. The overview of differences is now much less noisy. The differences matched by line filters are indicated with a light-yellow background, but do not show up in the diff overview or participate in keyboard navigation. The line filters are not able to filter all of the above WF4 issues, as can be seen in the red change above. This is because the xml line is missing on one side rather than being just a change to the line. Such changes to the xml still get picked up by the diff tool.

WinMerge stores the line filters in the registry. The easiest way to install them is to import the registry entry below.

WinMerge Line Filters.reg (978.00 bytes)

BusinessFailureScope activity with deep nested support

I wrote a series of posts late last year about a custom WF activity that collates a set of business failures that child evaluator activities identify into a single exception. At the time, the only way I could get the child evaluator activities to communicate failures to the parent scope activity was by using a custom WF extension to store the failures and manage the exception throwing logic.

The relationship between the parent scope activity and child evaluator activity works like this. The parent scope activity registers all the child evaluator activities with the extension in order to create a link between them. The extension then holds on to the failures from the child activities so that it can throw an exception for them as a set when the parent scope completes. If there was no link between the parent scope and a child evaluator then the extension would throw the exception directly for the singular failure.

One of the limitations of this design was that I could not create a link between a parent scope and a child evaluator activity when the child activity was not a direct descendant. This was quite limiting because you could not branch off into validation checks and run evaluators within those sub-branches. You can see from the post that describes the extension that the implementation is also very messy.

If only I had known about Workflow Execution Properties back then. You can see a good description of execution properties in Tim’s post.

A great feature of execution properties in a workflow is that they are scoped to a specific sub-tree of the workflow structure. This is perfect for managing sets of failures between the parent scope and child evaluator activities. This method also allows for child evaluators to communicate failures to the parent scope from any depth of the workflow sub-tree.

image

namespace Neovolve.Toolkit.Workflow.Activities
{
    using System;
    using System.Collections.ObjectModel;
    using System.Runtime.Serialization;

    [DataContract]
    internal class BusinessFailureInjector<T> where T : struct
    {
        public BusinessFailureInjector()
        {
            Failures = new Collection<BusinessFailure<T>>();
        }

        public static String Name
        {
            get
            {
                return "BusinessFailureInjector" + typeof(T).FullName;
            }
        }

        [DataMember]
        public Collection<BusinessFailure<T>> Failures
        {
            get;
            set;
        }
    }
}

An instance of the BusinessFailureInjector<T> class is the value that is added to the parent scope’s execution context as an execution property. This class simply holds the failures that child evaluator activities find. One thing to notice about BusinessFailureInjector is the usage of DataContract and DataMember attributes. WF does all the heavy lifting for us with regard to persistence. The data held in the execution property automatically gets persisted and then restored for us. In the old extension version this persistence was done manually, as was the tracking of the links between scopes and evaluators.

There are some minor changes to the code in the BusinessFailureScope and BusinessFailureEvaluator activities to work with the execution property rather than the extension.

The parent scope adds the execution property to its context when it is executed.

BusinessFailureInjector<T> injector = new BusinessFailureInjector<T>();

context.Properties.Add(BusinessFailureInjector<T>.Name, injector);

If the child activity can’t find the execution property then it throws the failure exception straight away for the single failure. If the execution property is found then it adds its failure to the collection of failures.

BusinessFailureInjector<T> injector = context.Properties.Find(BusinessFailureInjector<T>.Name) as BusinessFailureInjector<T>;

if (injector == null)
{
    throw new BusinessFailureException<T>(failure);
}

injector.Failures.Add(failure);

The parent scope then checks with the execution property to determine if there are any failures to throw in an exception.

private static void CompleteScope(NativeActivityContext context)
{
    BusinessFailureInjector<T> injector = context.Properties.Find(BusinessFailureInjector<T>.Name) as BusinessFailureInjector<T>;

    if (injector == null)
    {
        return;
    }

    if (injector.Failures.Count == 0)
    {
        return;
    }

    throw new BusinessFailureException<T>(injector.Failures);
}

Overall, the complexity of the code has been significantly reduced by this method of inter-activity communication. In addition, child evaluators can now exist anywhere under a parent scope activity and still have their failures managed by the parent scope.

You can download the updated version of BusinessFailureScope in the latest beta of my Neovolve.Toolkit project out on CodePlex.

WF content correlation and security

I have posted previously about using content correlation in WF services to implement a service session. One issue that must be highlighted regarding content correlation is about the security of the session in relation to hijack attacks.

I am writing a workflow service that is a combination of IIS, WCF, WF, WIF and AppFabric. WIF is used to secure the WCF service to ensure that only authenticated users can hit the endpoint. WIF then handles claim demands raised depending on the actions taken within the service by the authenticated user. A session hijack can occur with content correlation where authenticated UserA starts the service and then authenticated UserB takes the content used for correlation and makes their own call against the service. In this case UserB is authenticated and passes through the initial WIF authentication. UserB could then potentially take actions or obtain data from the service related to UserA.

The way to protect the service against this session hijack attack is to hold on to the identity of the user that started the session. Each service call within the session should then validate the identity of the caller against the original identity. The service execution can continue if the identities match, otherwise a SecurityException should be thrown.
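In code terms, the per-operation check amounts to something like the following sketch. The class and method names here are hypothetical, not the actual workflow implementation:

```csharp
using System;
using System.Security;
using System.Security.Principal;

public static class SessionIdentityGuard
{
    // Validates the current caller against the identity that started the session
    public static void ValidateCaller(IIdentity sessionIdentity, IIdentity callerIdentity)
    {
        if (callerIdentity == null
            || callerIdentity.IsAuthenticated == false
            || callerIdentity.Name != sessionIdentity.Name)
        {
            throw new SecurityException("The caller does not match the identity that started the session.");
        }
    }
}
```

Any service operation in the session would run this check before doing its work, so a hijacked correlation value alone is not enough to continue another user's session.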

In my application, the StartSession service operation does this first part.

image

The StartSession service operation is the first for the session and (among other things) configures the service for content correlation. It uses my ReceiveIdentityInspector activity to obtain the identity of the user that is invoking the service. It then stores this identity in a workflow variable that is scoped in such a way that it is available to the entire lifecycle of the workflow.

Each other service operation then uses the same ReceiveIdentityInspector to get the identity of the user invoking those operations.

image

All these other service operations can then compare the two identities to protect the service against a hijack attack. The following condition is set against the Invalid Identity activity above:

ReadSegmentIdentity Is Nothing OrElse ReadSegmentIdentity.IsAuthenticated = False OrElse ReadSegmentIdentity.Name <> SessionIdentity.Name

A SecurityException is thrown if this condition is evaluated as True. An authenticated user is now unable to hijack the service of another user even if they can obtain the value used for content correlation.

Another security measure to protect the content correlation value (and all the data of the service) is to ensure that SSL is used to encrypt the traffic for the service. This should not, however, remove the requirement for the above security check. Additionally, you should also write integration tests that verify that this potential security hole is successfully managed by your service application.

Calling a workflow service operation multiple times

Ron Jacobs has just answered an interesting question over on his blog. The question is about whether a workflow service operation can be invoked multiple times. Ron does not provide the details of the question, but his example implies that the implementations of the two invocations of the same service operation may be different, as the same operation name is implemented twice in the workflow. This seems like a design issue as far as the service goes, but the question itself is still interesting.

If we assume that the implementation of the service operation is the same (as it should be), how do we keep the service alive so that we can invoke the same method multiple times?

The answer is by using some kind of service session using WF correlation. Content correlation is my preferred option because it is independent of infrastructure concerns and does not restrict the WCF bindings available to you. I have previously posted about how to get a workflow service to create a session using content correlation.

With respect to the question put to Ron, you would not be able to achieve this result with just one service operation on the service. Correlation requires the client to provide the correlation value to the service operation. The correlation value must then map to an existing WF instance. This means that the first service operation cannot be the service operation invoked multiple times. You will need a service operation that creates the service session by returning a session identifier that can then be used for content correlation on subsequent service operations. This first operation has CanCreateInstance set to true and will be the entry point into the service. A DoWhile activity can then allow a service operation to be invoked multiple times within that session. The WF instance will remain alive (or persisted) until the workflow exits. The DoWhile activity prevents this from happening until some kind of exit condition is met.

I have implemented this design in a DataExchange service in my Jabiru project on CodePlex. The StartSession operation generates a Guid and returns it with some other service context information. The DoWhile then has a check for whether the session is completed. The session will be completed by one of the following conditions:

  • a timeout
  • CancelSession is called
  • FinishSession is called

This service design can be seen in the following screenshot (full image is linked).

image

The timeout case is handled in the first pick branch by using a Delay activity and then a check of the current time against when the service was last hit.

image

Each other pick branch has a service operation in it. The first action taken by each of these service operations is to set LastActivity = DateTime.Now() in order to prevent the timeout case. Each of these service operations (such as the ReadSegment operation) within the DoWhile can be invoked multiple times using the same session identifier while the SessionCompleted flag is False.

image
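The session state driving these branches can be modelled as a simple class. The names and the timeout handling below are illustrative rather than the actual workflow variables:

```csharp
using System;

// Illustrative model of the workflow state behind the DoWhile and Pick branches
public class SessionState
{
    public DateTime LastActivity
    {
        get;
        set;
    }

    public Boolean SessionCompleted
    {
        get;
        set;
    }

    // Called at the start of each service operation branch to prevent the timeout
    public void Touch(DateTime now)
    {
        LastActivity = now;
    }

    // Called by the timeout branch after its Delay completes
    public void CheckTimeout(TimeSpan timeout, DateTime now)
    {
        if (now - LastActivity > timeout)
        {
            SessionCompleted = true;
        }
    }
}
```

The DoWhile keeps looping while SessionCompleted is False, so any of the three conditions (timeout, CancelSession or FinishSession) ends the session by flipping that single flag.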

The CancelSession operation simply assigns SessionCompleted flag as True. This will then allow the DoWhile activity to exit and the service session will be finished as far as the client is concerned.

image

Similarly, the FinishSession operation sets the SessionCompleted flag as True and then does some other work relating to actioning the session outcome.

image

Note the Delay activity directly after the Reply activity. This allows the workflow to push the response back to the client and then process more work asynchronously. The same technique is used after the DoWhile so that session related resources on the server can be cleaned up asynchronously after the client has finished with the service.

We have seen here that a service operation can be invoked multiple times by using WF correlation and a DoWhile activity.

Custom DisposalScope activity

The previous post outlined the issues with working with unmanaged and IDisposable resources in WF. To recap, the issues with these resources in WF are:

  • Persistence
  • Restoration after persistence
  • Error handling
    • Messy WF experience to handle this correctly

A custom activity can handle these issues in a much more elegant way than the partially successful implementation in the previous post.

The design goals of this activity are:

  • take in a resource of a generic type (limited to IDisposable)
  • enforce a No Persist Zone to avoid the persistence issue
  • dispose the resource on successful execution
  • dispose the resource on a faulted child activity
  • as always, provide adequate designer support

The code implementation of the custom DisposalScope<T> activity handles these goals.

namespace Neovolve.Toolkit.Workflow.Activities
{
    using System;
    using System.Activities;
    using System.Activities.Presentation;
    using System.Activities.Statements;
    using System.Activities.Validation;
    using System.Collections.ObjectModel;
    using System.ComponentModel;
    using System.Drawing;
    using System.Windows;
    using System.Windows.Markup;
    using Neovolve.Toolkit.Workflow.Properties;

    [ToolboxBitmap(typeof(ExecuteBookmark), "bin_closed.png")]
    [DefaultTypeArgument(typeof(IDisposable))]
    [ContentProperty("Body")]
    public sealed class DisposalScope<T> : NativeActivity<T>, IActivityTemplateFactory where T : class, IDisposable
    {
        public DisposalScope()
        {
            NoPersistHandle = new Variable<NoPersistHandle>();
        }

        public Activity Create(DependencyObject target)
        {
            ActivityAction<T> body = new ActivityAction<T>
                                     {
                                         Handler = new Sequence(), 
                                         Argument = new DelegateInArgument<T>("instance")
                                     };

            return new DisposalScope<T>
                   {
                       Body = body
                   };
        }

        protected override void CacheMetadata(NativeActivityMetadata metadata)
        {
            metadata.AddDelegate(Body);
            metadata.AddImplementationVariable(NoPersistHandle);

            RuntimeArgument instanceArgument = new RuntimeArgument("Instance", typeof(T), ArgumentDirection.In, true);

            metadata.Bind(Instance, instanceArgument);

            Collection<RuntimeArgument> arguments = new Collection<RuntimeArgument>
                                                    {
                                                        instanceArgument
                                                    };

            metadata.SetArgumentsCollection(arguments);

            if (Body == null || Body.Handler == null)
            {
                ValidationError validationError = new ValidationError(Resources.Activity_NoChildActivitiesDefined, true, "Body");

                metadata.AddValidationError(validationError);
            }
        }

        protected override void Execute(NativeActivityContext context)
        {
            NoPersistHandle noPersistHandle = NoPersistHandle.Get(context);

            noPersistHandle.Enter(context);

            T instance = Instance.Get(context);

            context.ScheduleAction(Body, instance, OnCompletion, OnFaulted);
        }

        private void DestroyInstance(NativeActivityContext context)
        {
            T instance = Instance.Get(context);

            if (instance == null)
            {
                return;
            }

            try
            {
                instance.Dispose();

                Instance.Set(context, null);
            }
            catch (ObjectDisposedException)
            {
                // Ignore this exception
            }
        }

        private void OnCompletion(NativeActivityContext context, ActivityInstance completedInstance)
        {
            DestroyInstance(context);

            NoPersistHandle noPersistHandle = NoPersistHandle.Get(context);

            noPersistHandle.Exit(context);
        }

        private void OnFaulted(NativeActivityFaultContext faultContext, Exception propagatedException, ActivityInstance propagatedFrom)
        {
            DestroyInstance(faultContext);

            NoPersistHandle noPersistHandle = NoPersistHandle.Get(faultContext);

            noPersistHandle.Exit(faultContext);
        }

        [Browsable(false)]
        public ActivityAction<T> Body
        {
            get;
            set;
        }

        [DefaultValue((String)null)]
        [RequiredArgument]
        public InArgument<T> Instance
        {
            get;
            set;
        }

        private Variable<NoPersistHandle> NoPersistHandle
        {
            get;
            set;
        }
    }
}

The DisposalScope<T> activity enforces a no-persist zone; any attempt by a child activity to persist the workflow will result in an exception. The resource is disposed on both successful and faulted outcomes. Validation in the activity ensures that a Body activity has been defined. The activity also implements IActivityTemplateFactory so that it is created with a Sequence activity as its Body property when it is dropped onto the WF designer.
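The disposal guarantee the activity provides is essentially the workflow equivalent of a using block that also swallows ObjectDisposedException. The sketch below shows that semantic in plain C#; the TracingResource type and RunScope helper are illustrative only and are not part of the toolkit.

```csharp
using System;

// Hypothetical resource used for illustration only.
internal class TracingResource : IDisposable
{
    public Boolean Disposed
    {
        get;
        private set;
    }

    public void Dispose()
    {
        if (Disposed)
        {
            throw new ObjectDisposedException("TracingResource");
        }

        Disposed = true;
    }
}

internal static class Demo
{
    // Plain C# equivalent of what DisposalScope<T> guarantees: the body runs,
    // Dispose is called whether the body completes or faults, and any
    // ObjectDisposedException from Dispose is ignored (mirroring DestroyInstance).
    public static void RunScope<T>(T instance, Action<T> body) where T : class, IDisposable
    {
        try
        {
            body(instance);
        }
        finally
        {
            try
            {
                instance.Dispose();
            }
            catch (ObjectDisposedException)
            {
                // Ignore this exception
            }
        }
    }

    private static void Main()
    {
        TracingResource resource = new TracingResource();

        try
        {
            RunScope(resource, r => { throw new InvalidOperationException("body fault"); });
        }
        catch (InvalidOperationException)
        {
            // The fault still propagates, but the resource has been disposed.
        }

        Console.WriteLine(resource.Disposed); // True
    }
}
```

The key point is that the fault continues to propagate to the caller; the scope only guarantees disposal on the way out, which is exactly what the OnFaulted handler above does before the exception leaves the activity.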

The designer of the activity handles most of the design time experience.

<sap:ActivityDesigner x:Class="Neovolve.Toolkit.Workflow.Design.Presentation.DisposalScopeDesigner"
                      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                      xmlns:s="clr-namespace:System;assembly=mscorlib"
                      xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
                      xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation"
                      xmlns:conv="clr-namespace:System.Activities.Presentation.Converters;assembly=System.Activities.Presentation"
                      xmlns:sadm="clr-namespace:System.Activities.Presentation.Model;assembly=System.Activities.Presentation"
                      xmlns:ComponentModel="clr-namespace:System.ComponentModel;assembly=WindowsBase"
                      xmlns:ntw="clr-namespace:Neovolve.Toolkit.Workflow;assembly=Neovolve.Toolkit.Workflow"
                      xmlns:ntwd="clr-namespace:Neovolve.Toolkit.Workflow.Design">
    <sap:ActivityDesigner.Icon>
        <DrawingBrush>
            <DrawingBrush.Drawing>
                <ImageDrawing>
                    <ImageDrawing.Rect>
                        <Rect Location="0,0"
                              Size="16,16">
                        </Rect>
                    </ImageDrawing.Rect>
                    <ImageDrawing.ImageSource>
                        <BitmapImage UriSource="bin_closed.png"></BitmapImage>
                    </ImageDrawing.ImageSource>
                </ImageDrawing>
            </DrawingBrush.Drawing>
        </DrawingBrush>
    </sap:ActivityDesigner.Icon>
    <sap:ActivityDesigner.Resources>
        <conv:ModelToObjectValueConverter x:Key="modelItemConverter"
                                          x:Uid="sadm:ModelToObjectValueConverter_1" />
        <conv:ArgumentToExpressionConverter x:Key="expressionConverter" />
        <DataTemplate x:Key="Collapsed">
            <TextBlock HorizontalAlignment="Center"
                       FontStyle="Italic"
                       Foreground="Gray">
                Double-click to view
            </TextBlock>
        </DataTemplate>
        <DataTemplate x:Key="Expanded">
            <StackPanel Orientation="Vertical">
                <StackPanel Orientation="Horizontal">
                    <sapv:TypePresenter Width="120"
                                        Margin="5"
                                        AllowNull="false"
                                        BrowseTypeDirectly="false"
                                        Filter="DisposalTypeFilter"
                                        Label="Target type"
                                        Type="{Binding Path=ModelItem.ArgumentType, Mode=TwoWay, Converter={StaticResource modelItemConverter}}"
                                        Context="{Binding Context}" />
                    <TextBox Text="{Binding ModelItem.Body.Argument.Name}"
                             MinWidth="80" />
                    <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.Instance, Mode=TwoWay, Converter={StaticResource expressionConverter}, ConverterParameter=In}"
                                            OwnerActivity="{Binding ModelItem, Mode=OneTime}"
                                            Margin="2" />
                </StackPanel>

                <sap:WorkflowItemPresenter Item="{Binding ModelItem.Body.Handler}"
                                           HintText="Drop activity"
                                           Margin="6" />
            </StackPanel>
        </DataTemplate>
        <Style x:Key="ExpandOrCollapsedStyle"
               TargetType="{x:Type ContentPresenter}">
            <Setter Property="ContentTemplate"
                    Value="{DynamicResource Collapsed}" />
            <Style.Triggers>
                <DataTrigger Binding="{Binding Path=ShowExpanded}"
                             Value="true">
                    <Setter Property="ContentTemplate"
                            Value="{DynamicResource Expanded}" />
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </sap:ActivityDesigner.Resources>
    <Grid>
        <ContentPresenter Style="{DynamicResource ExpandOrCollapsedStyle}"
                          Content="{Binding}" />
    </Grid>
</sap:ActivityDesigner>

The designer uses a TypePresenter to allow modification of the generic type of the activity. The TypePresenter is configured with a Filter property that restricts the available types to those that implement IDisposable. The designer uses an ExpressionTextBox to provide the disposable resource to the activity; the expression can either instantiate the resource directly or reference a variable in the parent workflow. Finally, the designer provides a WorkflowItemPresenter that allows designer interaction with the Body activity executed by the scope.

namespace Neovolve.Toolkit.Workflow.Design.Presentation
{
    using System;
    using System.Diagnostics;
    using Neovolve.Toolkit.Workflow.Activities;

    public partial class DisposalScopeDesigner
    {
        [DebuggerNonUserCode]
        public DisposalScopeDesigner()
        {
            InitializeComponent();
        }

        public Boolean DisposalTypeFilter(Type typeToValidate)
        {
            if (typeToValidate == null)
            {
                return false;
            }

            if (typeof(IDisposable).IsAssignableFrom(typeToValidate))
            {
                return true;
            }

            return false;
        }

        protected override void OnModelItemChanged(Object newItem)
        {
            base.OnModelItemChanged(newItem);

            GenericArgumentTypeUpdater.Attach(ModelItem);
        }
    }
}

The code behind the designer provides the TypePresenter filter (see this post for details) and the designer support for modifying the generic type of the activity (see this post for details).

The example in the previous post can now be refactored to the following.

[Image: the refactored workflow using the DisposalScope activity in the designer]

A workflow variable held against a parent activity is no longer required to define the resource, as the DisposalScope activity now exposes it directly. The handle the activity provides is type safe via its generic definition, as can be seen above with the ReadByte method available for the FileStream type.
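For anyone composing workflows in code rather than in the designer, the refactored example might look like the following sketch. It assumes a .NET Framework project that references System.Activities and the Neovolve.Toolkit.Workflow assembly; the file path is purely hypothetical.

```csharp
using System;
using System.Activities;
using System.Activities.Statements;
using System.IO;
using Neovolve.Toolkit.Workflow.Activities;

internal static class Program
{
    private static void Main()
    {
        DelegateInArgument<FileStream> stream = new DelegateInArgument<FileStream>("stream");

        // The DisposalScope activity owns the FileStream and disposes it
        // when the Body completes or faults; no parent workflow variable
        // is needed to hold the resource.
        Activity workflow = new DisposalScope<FileStream>
        {
            // Hypothetical path, for illustration only.
            Instance = new InArgument<FileStream>(ctx => File.OpenRead(@"C:\Temp\input.txt")),
            Body = new ActivityAction<FileStream>
            {
                Argument = stream,
                Handler = new WriteLine
                {
                    Text = new InArgument<String>(ctx => stream.Get(ctx).ReadByte().ToString())
                }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}
```

The DelegateInArgument is the type-safe handle that the designer surfaces as the named argument next to the TypePresenter.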

This is a much cleaner solution as far as the design time experience goes, and it satisfies all of the design goals above.

You can download this activity in my latest Neovolve.Toolkit 1.1 Beta on the CodePlex site.