No, this isn’t related to the Victoria fires. This is the sunset seen from the deck at home.
I have just encountered a problem where the SQLEXPRESS instance installed on my machine would not start. It looks like a recent Windows update failed and also knocked out SQL Server. The event log contains the following entry:
Error 3(error not found) occurred while opening file ‘C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf’ to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
After searching around, there seem to be many forum posts going back several years about this issue. The problem is that the only known solution seems to be to change the credentials of the SQLEXPRESS service account to Local System, which then allows the service to start. Doing this through the services console presents a problem, however, because you can’t set the service credentials back to Network Service without knowing the password.
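If the services console won’t cooperate, the account change can also be scripted with sc.exe. This is a sketch that assumes the default instance service name of MSSQL$SQLEXPRESS; verify the actual service name first with `sc query`.

```shell
REM Point the SQLEXPRESS service at the Local System account.
REM The service name MSSQL$SQLEXPRESS is an assumption - check yours first.
sc config "MSSQL$SQLEXPRESS" obj= LocalSystem

REM Restart the service so the new credentials take effect.
net stop "MSSQL$SQLEXPRESS"
net start "MSSQL$SQLEXPRESS"
```

Note the space after `obj=` is required by sc.exe’s argument parsing.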
I’m working through the best way of getting DataDude to build and deploy a database using TeamBuild. When I kicked off a build, the build script failed with the following error:
Task "SqlBuildTask"
Building deployment script for [DatabaseName] : AlwaysCreateNewDatabase, EnableFullTextSearch, BlockIncrementalDeploymentIfDataLoss
MSBUILD : Build error TSD158: Cannot open user default database. Login failed.
MSBUILD : Build error TSD158: Login failed for user '[TeamBuildUserName]'.
Done executing task "SqlBuildTask" -- FAILED.
I have previously posted about using the include element for my XML documentation in cases where there is duplicated content. After a recent code review, the reviewer commented that the include element made it difficult to read the documentation because large parts of the XML documentation were abstracted out to an XML file. This made me look for ways around the problem, and at the effect the include element has on tooling support.
As a quick overview, the include element in XML documentation tells the compiler to go to the specified XML file and run an XPath query (recursively). It pulls in the result of that XPath query and injects the content into the XML documentation being generated for a code element.
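As a sketch, the usage looks like this (the file name, member name and summary content are illustrative):

```csharp
// CommonDocs.xml (illustrative content):
// <docs>
//   <members name="widget">
//     <Widget>
//       <summary>Provides widget related functionality.</summary>
//     </Widget>
//   </members>
// </docs>

/// <include file='CommonDocs.xml' path='docs/members[@name="widget"]/Widget/*' />
public class Widget
{
}
```

The compiler resolves the path relative to the source file, runs the XPath query against CommonDocs.xml and emits the matched elements as the documentation for Widget.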
I have recently been working with tracing performance and have posted several tidbits of information. Here is the overview.
- Use TraceSource instead of Trace
- Disable global locking
- Clear the default listener in configuration
- Don’t collect stack trace information if it is not required
- Create TraceSource instances once per name and cache for reuse. I have encountered memory leaks from creating large numbers of instances of the same TraceSource name.
- Create a unique TraceSource and TraceListener for each logical part/tier/layer of the application (this improves locking performance and keeps trace data segregated)
- Use thread safe listeners if possible
- Check TraceSource.Switch.ShouldTrace before calculating any expensive information to provide to the trace message
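The caching and ShouldTrace tips above can be sketched like this (the class, source name and diagnostic method are illustrative, not from the original posts):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class TraceSourceCache
{
    private static readonly Dictionary<string, TraceSource> _sources =
        new Dictionary<string, TraceSource>();
    private static readonly object _syncLock = new object();

    // Reuse one TraceSource per name rather than constructing a new
    // instance on every call; repeated construction of the same name
    // leaks memory.
    public static TraceSource GetSource(string name)
    {
        lock (_syncLock)
        {
            TraceSource source;

            if (_sources.TryGetValue(name, out source) == false)
            {
                source = new TraceSource(name);
                _sources.Add(name, source);
            }

            return source;
        }
    }
}

public class DataLayer
{
    private static readonly TraceSource Source =
        TraceSourceCache.GetSource("MyApplication.DataLayer");

    public void DoWork()
    {
        // Check ShouldTrace before building an expensive message so the
        // cost is only paid when the switch allows the event through.
        if (Source.Switch.ShouldTrace(TraceEventType.Verbose))
        {
            Source.TraceEvent(
                TraceEventType.Verbose, 0, BuildExpensiveDiagnostic());
        }
    }

    private string BuildExpensiveDiagnostic()
    {
        return "detailed state: " + DateTime.Now.Ticks;
    }
}
```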
On a side note, don’t forget to turn off code coverage for running load tests.
Here is another performance tip for tracing. In configuration, there is the opportunity to define some tracing options. These options determine the actions a TraceListener takes when it writes the footer of the trace record for a given message. One of the options is to output the Callstack.
It takes a bit of work to calculate the callstack. If you don’t need that information, then don’t configure your listeners to calculate it.
To demonstrate the difference, I created two load tests that each ran a unit test that used its own specific TraceSource. The reason for two separate load tests was to avoid a locking issue that would impact the results.
Here is the configuration I used:
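A configuration along these lines gives each load test its own TraceSource and listener, with only one of them paying for the callstack (the source names, listener names and file paths here are illustrative):

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <source name="WithCallstack" switchValue="All">
        <listeners>
          <add name="callstackListener"
               type="System.Diagnostics.TextWriterTraceListener"
               initializeData="withCallstack.log"
               traceOutputOptions="Callstack" />
        </listeners>
      </source>
      <source name="WithoutCallstack" switchValue="All">
        <listeners>
          <add name="plainListener"
               type="System.Diagnostics.TextWriterTraceListener"
               initializeData="withoutCallstack.log" />
        </listeners>
      </source>
    </sources>
  </system.diagnostics>
</configuration>
```

The only difference between the two sources is the traceOutputOptions attribute, which isolates the cost of calculating the callstack.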
Continuing my performance testing of tracing components, I realised there is another factor that may be impacting the performance I get out of my code.
When the TraceSource.Listeners property is referenced, the collection is initialised using the application configuration. Regardless of whether there is a TraceSource configured for the provided name or what listeners are defined, there is always a default listener that is added to the configured collection of listeners. This is the System.Diagnostics.DefaultTraceListener.
All the tracing methods of this listener implementation call down to an internalWrite method that has the following implementation:
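From memory of the Reflector disassembly, the method is roughly this (a sketch, not the exact framework source):

```csharp
// Approximation of DefaultTraceListener.internalWrite as seen in Reflector.
// Every message is handed to the debugger log or to OutputDebugString,
// both of which cost time even when nothing is listening.
void internalWrite(string message)
{
    if (Debugger.IsLogging())
    {
        Debugger.Log(0, null, message);
    }
    else
    {
        SafeNativeMethods.OutputDebugString(message ?? string.Empty);
    }
}
```

This is why clearing the default listener in configuration (a `<remove name="Default" />` or `<clear />` inside the source’s `<listeners>` element) is worth doing when you don’t need debugger output.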
This page has a great resource for writing XML documentation for code in Visual Studio and Sandcastle. It outlines the differences between the XML support of the two tools, which is especially useful because Visual Studio doesn’t understand all the XML elements supported by Sandcastle. The document is now a year and a half old, but it is still very accurate, as the Visual Studio XML documentation schema and Sandcastle support haven’t changed much (if at all) since.
I have had a performance bottleneck in a load test that I have been running. The load test runs against some tracing components, which ultimately invoke TraceSource and TraceListener methods. I have been wondering why performance drops through the floor as more and more users come online and the call count increases. I have used Reflector many times to review the implementation of TraceSource and TraceListener to get a feel for what they do, and I remembered that global locking may be a problem.
The methods on TraceSource check TraceInternal.UseGlobalLock (also referenced by Trace.UseGlobalLock) which is determined by the system.diagnostics/trace/useGlobalLock configuration value:
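Disabling the global lock is a one-line configuration change:

```xml
<configuration>
  <system.diagnostics>
    <!-- Let each listener manage its own thread safety instead of
         serialising all trace writes through one global lock. -->
    <trace useGlobalLock="false" />
  </system.diagnostics>
</configuration>
```

With the global lock off, thread safety falls to the individual listeners, which is why the earlier advice to use thread-safe listeners matters.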
I’ve been performance testing a WCF service recently and working away at the bottlenecks in the system. After fixing a few performance issues in the components behind the service endpoint (service implementation and beyond), I was still getting really bad throughput calling the distributed service. The service in this case is hosted on a Windows Server 2003 VM. While it is not on physical hardware, I should be able to achieve better performance than the results I was getting.
About 90 seconds into a load test, the resources on the server became saturated and performance dropped through the floor. After this had gone on for a minute or so, timeouts and security negotiation failures occurred, and test execution essentially halted for the remainder of the load test. I noticed that once service requests were no longer being processed, the server was no longer stressed (CPU dropped back down to normal).