This page is a great resource for writing XML documentation for code in Visual Studio and Sandcastle. It outlines the differences between the XML documentation support of Visual Studio and Sandcastle, which is especially valuable because Visual Studio doesn’t understand all the XML elements that Sandcastle supports. The document is now a year and a half old, but it is still very accurate, as the Visual Studio XML documentation schema and Sandcastle support haven’t changed much (if at all) since.
I have hit a performance bottleneck in a load test that I have been running. The load test runs against some tracing components which ultimately invoke TraceSource and TraceListener methods. I have been wondering why performance drops through the floor as more and more users come online and the call count increases. I have used Reflector several times to review the implementation of TraceSource and TraceListener to get a feel for what they do, and I remembered that global locking might be a problem.
The methods on TraceSource check TraceInternal.UseGlobalLock (also exposed via Trace.UseGlobalLock), which is determined by the system.diagnostics/trace/useGlobalLock configuration value:
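As a sketch of the configuration involved — useGlobalLock defaults to true, which serializes every trace write from every source on a single process-wide lock:

```xml
<configuration>
  <system.diagnostics>
    <!-- useGlobalLock defaults to "true". Setting it to "false" removes the
         process-wide lock around trace writes; instead, each listener is
         written to directly if it reports IsThreadSafe = true, or locked
         individually if it does not. -->
    <trace useGlobalLock="false" />
  </system.diagnostics>
</configuration>
```

Turning the global lock off only helps if the listeners in play are thread safe (or you can tolerate per-listener locking), so it is worth checking each listener's IsThreadSafe behaviour before flipping this switch.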
I’ve been performance testing a WCF service recently and working away at the bottlenecks in the system. After fixing a few performance issues in the components behind the service endpoint (service implementation and beyond), I was still getting really bad throughput calling the distributed service. The service in this case is hosted on a Windows Server 2003 VM. While it is not on physical hardware, I should be able to achieve better performance than the results I was getting.
About 90 seconds into a load test, the resources on the server became saturated and performance dropped through the floor. After this had gone on for a minute or so, timeouts and security negotiation failures occurred and test executions essentially halted for the remainder of the load test. I noticed that once service requests were no longer being processed, the server was no longer stressed (CPU dropped back down to normal).
There are a few occasions when I have used System.Diagnostics.Trace rather than a System.Diagnostics.TraceSource implementation. Those occasions are limited to scenarios where the consumers of the components didn’t write them, have little interest in their inner workings and don’t need to troubleshoot them. Framework and toolkit components are the most common examples of this.
For example, I have recently done some work on custom tracing implementations that make it really easy for developers to add activity tracing and activity correlation to their code. I wanted to output some messages for diagnostic purposes if unexpected issues were encountered in the tracing component. Tracing is the right tool for the job; however, given that the component is all about tracing, what implementation do I use to trace the information required?
I have been writing lots of unit tests for my Toolkit project on CodePlex. The most recent work is adding activity tracing support. As I was writing these unit tests, I came across a bug in Visual Studio’s unit testing framework. If a logical operation is started on the CorrelationManager with a non-serializable object, but not stopped before the unit test exits, then the unit test adapter throws an exception.
This is easily reproduced with the following code:
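A sketch of the kind of test that triggers the failure (class and member names here are illustrative, not from the original post):

```csharp
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CorrelationManagerTests
{
    // No [Serializable] attribute, so instances of this type cannot be
    // marshalled across the test adapter's AppDomain boundary.
    private class NonSerializableOperation
    {
    }

    [TestMethod]
    public void LogicalOperationNotStoppedBeforeTestExits()
    {
        // Start a logical operation with a non-serializable identity and
        // deliberately never call StopLogicalOperation. The test itself
        // passes, but the unit test adapter throws when the test exits.
        Trace.CorrelationManager.StartLogicalOperation(new NonSerializableOperation());
    }
}
```

Calling Trace.CorrelationManager.StopLogicalOperation() before the test method returns (for example in a finally block or test cleanup) avoids the exception.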
A couple of years ago I bought a new Dell laptop. It was a middle-range spec that I expected to throw more hardware at in the subsequent years. Sure enough, the hard drive became too small and too slow, and the machine could certainly do with more than 2GB of RAM.
RAM was the primary concern. It was running out often enough that Vista was constantly paging to virtual memory on a slow drive without much free space to work with. I ran the analysis tool over at Crucial, which to my complete surprise told me that the laptop has only two RAM slots, each of which can hold at most a 1GB stick. Surely this was not right. I searched the Dell site and found the specs for the hardware, which told the same sad story. Dell in their wisdom sold a Vista laptop that is hardware-limited to 2GB of RAM. So I’m not just surprised now, I’m utterly shocked.
I just encountered a curly situation with performance counters. I have added performance counters to a WCF service which has been deployed out to a host platform. When I fire up perfmon.exe on my local machine, the counter category isn’t in the list of categories when I point it at the remote machine.
All the research on the net seems to point towards a permissions problem. However, I am an administrator on the server, so that isn’t the issue. I can also see other performance categories and counters for that machine, just not the ones I have just installed.
The answer to this one is unexpected: a restart of the Remote Registry service on the server is required. It seems that the Remote Registry service uses some kind of internal cache of the registry. After restarting that service, the performance counters I’m after are available to my local machine.
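For reference, restarting the service from an elevated command prompt on the server looks something like this (RemoteRegistry is the service's short name on Windows):

```shell
# Restart the Remote Registry service so its cached view of the
# registry is refreshed and newly installed counters become visible.
net stop RemoteRegistry
net start RemoteRegistry
```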
I am working with a bit of code (Manager) that involves caching values based on the type injected as a dependency (Resolver). The same Manager type can be used with different Resolvers, and the keys used to store the items returned from the different Resolvers in the Manager’s cache should be different.
To achieve this, I generate a cache key that identifies the manager (a constant string), the assembly qualified name of the resolver and then the name of the item — TraceSource instances in this case. This means that when two resolvers injected into two different managers are each asked to return a TraceSource instance of the same name, the results are stored in the managers’ internal caches as two separate entries.
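A minimal sketch of that key scheme — the type and member names here are illustrative, not the actual Toolkit code:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public class TraceSourceManager
{
    // Constant that identifies this manager type in the cache key.
    private const string CachePrefix = "TraceSourceManager";

    private readonly Dictionary<string, TraceSource> cache =
        new Dictionary<string, TraceSource>();

    // The injected resolver dependency (typed as Object here for brevity).
    private readonly object resolver;

    public TraceSourceManager(object resolver)
    {
        this.resolver = resolver;
    }

    private string BuildCacheKey(string sourceName)
    {
        // Combining the constant prefix, the resolver's assembly qualified
        // name and the item name keeps entries produced by different
        // resolver types distinct in the cache.
        return CachePrefix + "|"
            + resolver.GetType().AssemblyQualifiedName + "|"
            + sourceName;
    }
}
```

With this scheme, two managers built with different resolver types asking for a TraceSource named "MySource" produce two different keys, so neither can return the other's cached instance.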
I posted previously about creating EventLog sources without administrative rights. Part of this solution requires that the account running the application has rights to create subkeys and write values under the EventLog key in the registry. WiX is being used as the installation product, so the answer is something like this for the registry key:
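A sketch of the relevant fragment, using WiX v3 element names — the key path and the account being granted rights (NetworkService here) are assumptions to adjust for your environment:

```xml
<RegistryKey Root="HKLM"
             Key="SYSTEM\CurrentControlSet\Services\Eventlog\Application"
             Action="create">
  <!-- Grant the application's account rights to create subkeys (the event
       source itself) and write values under the EventLog key. -->
  <Permission User="NetworkService"
              Read="yes"
              Write="yes"
              CreateSubkeys="yes" />
</RegistryKey>
```

This fragment sits inside a Component as usual; Action="create" ensures the key exists without removing it on uninstall, which matters because the EventLog key is shared system state.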
There is a lot of information around that discusses the differences between classes and structs. Unfortunately there isn’t a lot of information available about when to use one over the other.
MSDN has a good resource which provides guidance on how to choose between classes and structs. It starts by describing the differences between the two and then provides the following advice.
Consider defining a structure instead of a class if instances of the type are small and commonly short-lived or are commonly embedded in other objects. Do not define a structure unless the type has all of the following characteristics:
- It logically represents a single value, similar to primitive types (integer, double, and so on).
- It has an instance size smaller than 16 bytes.
- It is immutable.
- It will not have to be boxed frequently.
If one or more of these conditions are not met, create a reference type instead of a structure. Failure to adhere to this guideline can negatively impact performance.
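As a hypothetical illustration, a type like the following meets all four criteria: it logically represents a single value, its instance size is 8 bytes, it is immutable, and it is unlikely to be boxed in typical use:

```csharp
public struct Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    // Read-only properties keep the type immutable after construction.
    public int X { get { return x; } }
    public int Y { get { return y; } }
}
```

By contrast, a type that holds several logically distinct pieces of state, is mutated after creation, or is routinely passed through object-typed APIs (and therefore boxed) is better modelled as a class.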