As a conclusion to the series of articles that started with a discussion of the various ways one can perform simple validations on method parameters using PostSharp, I thought I’d share my final thoughts on the matter, as well as what I’ll ultimately be using in my own environment.

My primary concern with the original solution has had to do with both the signature of the actual base validation method and how we should pass the parameters we’re validating to it. Specifically, I wanted to make sure that the solution wasn’t introducing an excessive amount of boxing into the code, which was a valid concern given the weak nature of the validation method’s signature (i.e. the fact that the validation method accepts the parameter value as a System.Object).

What We Have So Far

First of all, the reason this is even important is the potential widespread usage of the validation method: if this is how we’re checking whether or not something is null, then we can expect such a method to be called in a very large number of places in a typical non-trivial application.

The original abstract validation method had the following signature:

public abstract void Validate(object target, object value, string parameterName);

The second parameter (value) represents the value of the parameter we’re validating, and it is the focal point of the topic at hand.

Let’s then go over some quick points that concern the implications of using a validation method with such a signature:

  • Because the type used for the parameter’s value is System.Object, and not a generic type parameter, parameters which are value types will need to be boxed before they can be provided to the method (illustrated in the sketch following this list).
  • Furthermore, if the parameter we are validating is a generic type parameter, and at run-time that generic type parameter is substituted with a value type, then it too will need to be boxed.
  • Emitting a box instruction while a value type (or a generic type parameter substituted with a value type) is on the stack incurs the traditional performance penalty associated with boxing; doing the same with a reference type (perhaps originally a generic type parameter substituted with a reference type) on the stack is essentially a no-op.
  • Because we’re weaving in raw MSIL here, we’re responsible for supplying the boxing instructions; this won’t be taken care of for us since the compiler is out of the equation…so we need to cover all these different scenarios.
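
To make the boxing concern concrete, here is a rough, hand-written C# equivalent of the calls the aspect ends up weaving in (the Validate stand-in and the surrounding class are purely illustrative; in reality these calls are emitted directly as MSIL):

using System;

internal static class BoxingIllustration
{
    // Stand-in for the abstract validation method shown earlier.
    private static void Validate(object target, object value, string parameterName)
    {
        if (value == null)
            throw new ArgumentNullException(parameterName);
    }

    private static void Process<T>(int count, string name, T item)
    {
        Validate(null, count, "count"); // value type: a box instruction must be emitted
        Validate(null, name, "name");   // reference type: passed as-is; a box here would be a no-op
        Validate(null, item, "item");   // generic parameter: boxed only if T is substituted with a value type
    }
}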

My Approach and Strategy

After some review, I’ve settled on an approach regarding what the validation method signature should look like, as well as a strategy for how we get the parameter value we’re validating to the method.

First of all, an ideal solution would not depend on a weakly typed validation method, but would instead make use of a validation method that accepts generic type parameters. While this would be cool, it’s not the simplest of endeavors. The generic type parameter can be defined at different levels (e.g. type vs. method), and how do we call the method if we have no generic type parameter at all (i.e. the parameter we’re validating is a standard, non-generic type in its original form)?

Those different scenarios require you to do different things, and although I was able to handle some of them, I lack the inside knowledge of the PostSharp SDK needed to cover them all.

In the grand scheme of things, however, the approach I ultimately ended up using has proved to be more than adequate for the job. Given the body of work I’ve been involved with and the types of entities requiring validation, I found the benefits that would be gained from the ideal solution to be negligible in substance; the simpler approach is therefore the most justifiable one when my own time is weighed as a factor in the equation.

The signature for the validation method ended up staying the same as it was originally: the type I am using for the parameter value remains System.Object. My strategy for invoking the method is the following:

  1. If the parameter being validated is explicitly a value type, then it (of course) gets boxed before being passed to the validation method.
  2. If the parameter happens to be a generic type parameter, then we only box it if it has no generic constraint guaranteeing that the type substituted at run-time will be a reference type (e.g. a constraint naming a specific reference type, or the class keyword).
  3. Furthermore (further down the inheritance chain), any validation attributes which perform validations that only make sense on reference types (e.g. NotNull) have built-in compile-time validation to ensure they’re not being applied to any explicit value types.

The vast majority of parameter validations occurring in my main body of work are done on generic type parameters. Many times (though not always) these type parameters are constrained to be reference types. In the end, boxing almost never occurs, so this is quite acceptable to me.

To implement this strategy, I simply added a property named RequiresBoxing to the EntityDescription type I talked about in a previous article, which I set during the initialization of the EntityDescription object like so:

_requiresBoxing
  = entityType.IsValueType
    || (entityType.IsGenericParameter
        // the 'class' constraint surfaces via GenericParameterAttributes rather than GetGenericParameterConstraints()
        && (entityType.GenericParameterAttributes & GenericParameterAttributes.ReferenceTypeConstraint) == 0
        && !entityType.GetGenericParameterConstraints().Any(c => c.IsClass));

If, in the future, I ever found myself having to perform (let’s say) numerical-type validations on (obviously) value types, I’d probably create a separate class of validation advices specific to the value type being validated.
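
As a purely hypothetical sketch (the base class and attribute names below are invented and are not part of the aspect library discussed in this series), such an advice might accept the value in a strongly typed form so that no boxing is involved:

using System;

public abstract class NumericValidationAdvice : Attribute
{
    // Accepts the value as a double rather than a System.Object, avoiding boxing.
    public abstract void Validate(object target, double value, string parameterName);
}

public sealed class PositiveAttribute : NumericValidationAdvice
{
    public override void Validate(object target, double value, string parameterName)
    {
        if (value <= 0)
            throw new ArgumentOutOfRangeException(parameterName, "Value must be positive.");
    }
}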

 

The Weak Event Pattern

Microsoft made a good move when they introduced the concept of a weak event pattern into WPF as a technique meant to address a glaring memory leak issue that had the tendency of arising under certain conditions. You can read all about it here, but that aside, it helps to actually understand the problem that merited the creation of these techniques.

Memory leaks arising from lingering event subscriptions are an often misunderstood issue. Remember, in a CLR environment, memory is managed by the CLR’s own automatic garbage collector. In this context, a memory leak refers to the failure to collect one or more objects which otherwise should have been collected according to standard expectations. Thus, assuming that the CLR garbage collector is “perfect” (it is quite good), the notion of a memory leak occurring should be a foreign one. But it can happen, and one of the preconditions for that scenario has to do with event subscriptions.

It’s not as simple as the presence of lingering event subscriptions, however. Let’s go over some incredibly rudimentary terminology:

  • One object exposes events that other objects can subscribe to. This object is the publisher of the event.
  • The objects which attach handlers to the publisher are subscribers of the event.

Why It’s Needed

From the subscriber’s point of view, I should always be sure to detach all handlers previously attached to the publisher as soon as it is practical. In fact, this requirement makes me a good candidate for a purely managed-resource-focused implementation of the IDisposable interface. However, if I, as the subscriber, shirk these responsibilities, we aren’t necessarily guaranteed a memory leak. It is only when the lifetime of the publisher exceeds that of the subscribers that we will have ourselves a memory leak. This is more a matter-of-fact observation than a deep deduction: because subscribers have handlers attached to the publisher, the publisher maintains references to those subscribers, and thus those subscribers cannot be collected.

This becomes a problem when the subscribers need to die. A common example is a collection view or view model that contains children views or view models. The collection publishes an event that its children subscribe to. Whenever a child gets removed from the collection, we would very much want that child to be collected, especially if it is something significant such as a graphical view. Unless that child’s handler is detached, however, collection of the child won’t occur.
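
A stripped-down sketch of that scenario (the class and event names here are made up for illustration):

using System;
using System.Collections.ObjectModel;

// The parent collection is the publisher, and it outlives its children.
public class ChildCollection : Collection<ChildView>
{
    public event EventHandler SomethingChanged;

    protected virtual void OnSomethingChanged()
    {
        EventHandler handler = SomethingChanged;

        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

// Each child subscribes to the publisher; unless the handler is detached when the
// child is removed, the publisher's delegate list keeps the child alive.
public class ChildView
{
    public ChildView(ChildCollection parent)
    {
        parent.SomethingChanged += OnParentChanged;
    }

    private void OnParentChanged(object sender, EventArgs e)
    {
        // React to the parent...
    }
}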

Many times the objects involved will lack the level of intimacy with each other required for one to know that it needs to tell the other to clean up its event handlers. One of the constructs Microsoft developed in order to account for situations where associated objects lack understanding of each other’s internal behavior and structure is the IDisposable interface. Unfortunately, WPF makes little to no use of this approach, which I believe is both odd and a major cause of all of these problems. Therefore, implementing this interface will do nothing for you and your WPF-specific objects.

How to Use the Pattern

Making use of the weak event pattern is a rather simple affair:

  1. Create a new manager class derived from WeakEventManager
  2. Create a new implementation of IWeakEventListener
  3. Wire up the IWeakEventListener appropriately

Although these are just a few simple steps, it is a bit burdensome to have to create a new class for each event (or group of events coming from a specific source) that we want to listen to weakly. There is also heavy use of static methods involved with this pattern, further restricting our options with regard to inheritable functionality.
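
To give an idea of what the first step entails, here is a sketch of a dedicated manager for the hypothetical SomethingChanged event from the earlier leak example (the event source type is assumed; the WeakEventManager members used are the standard ones):

using System;
using System.Windows;

public class SomethingChangedEventManager : WeakEventManager
{
    public static void AddListener(ChildCollection source, IWeakEventListener listener)
    {
        CurrentManager.ProtectedAddListener(source, listener);
    }

    public static void RemoveListener(ChildCollection source, IWeakEventListener listener)
    {
        CurrentManager.ProtectedRemoveListener(source, listener);
    }

    protected override void StartListening(object source)
    {
        ((ChildCollection) source).SomethingChanged += OnSomethingChanged;
    }

    protected override void StopListening(object source)
    {
        ((ChildCollection) source).SomethingChanged -= OnSomethingChanged;
    }

    private void OnSomethingChanged(object sender, EventArgs e)
    {
        // Forward the event to all registered (weakly held) listeners.
        DeliverEvent(sender, e);
    }

    private static SomethingChangedEventManager CurrentManager
    {
        get
        {
            Type managerType = typeof (SomethingChangedEventManager);
            var manager = (SomethingChangedEventManager) GetCurrentManager(managerType);

            if (manager == null)
            {
                manager = new SomethingChangedEventManager();
                SetCurrentManager(managerType, manager);
            }

            return manager;
        }
    }
}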

Why does the pattern require specific types defined for specific events? The answer has to do with performance. It is possible to create a generic WeakEventManager, but performance will suffer. Indeed, with .NET 4.5 we will see a generic WeakEventManager introduced by Microsoft; however, with it comes the warning I just provided: use of the generic variant of WeakEventManager will result in decreased performance. The cost most likely stems from the reflection involved, as well as from internal WPF components and processes that may have been optimized around an expectation of discrete manager types.

A Generic IWeakEventListener

With IWeakEventListener implementations, however, the story is different. Here we can devise a generic variant which we can use freely without worrying about performance implications.

Here is an example of a generic IWeakEventListener implementation:

WeakEventListener.cs 
/// <summary>
/// Provides a generic weak event listener which can be used to listen to events without a strong reference
/// being made to the subscriber.
/// </summary>
/// <typeparam name="TEventArgs">
/// The type of <see cref="EventArgs"/> accepted by the event handler.
/// </typeparam>
public class WeakEventListener<TEventArgs> : IWeakEventListener
    where TEventArgs : EventArgs
{
    private readonly EventHandler<TEventArgs> _handler;
 
    /// <summary>
    /// Initializes a new instance of the <see cref="WeakEventListener{TEventArgs}"/> class.
    /// </summary>
    /// <param name="handler">The handler for the event.</param>
    public WeakEventListener([NotNull]EventHandler<TEventArgs> handler)
    {
        _handler = handler;
    }
 
    bool IWeakEventListener.ReceiveWeakEvent(Type managerType, object sender, EventArgs e)
    {
        TEventArgs eventArgs = e as TEventArgs;
 
        if (null == eventArgs)
            return false;
 
        _handler(sender, eventArgs);
 
        return true;
    }
}

Using this generic variant is quite simple: add it as a member to a subscriber class, and make a call to an appropriate WeakEventManager.AddListener method in order to register the weak event listener from the subscriber during its initialization.
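
For example, a subscriber that wants to listen weakly to a publisher’s PropertyChanged event by way of the built-in PropertyChangedEventManager might look roughly like this (the Subscriber class is, of course, just an illustration):

using System.ComponentModel;

public class Subscriber
{
    // The subscriber holds the listener strongly; the publisher only holds it weakly.
    private readonly WeakEventListener<PropertyChangedEventArgs> _listener;

    public Subscriber(INotifyPropertyChanged publisher)
    {
        _listener = new WeakEventListener<PropertyChangedEventArgs>(OnPublisherPropertyChanged);

        // An empty property name registers the listener for all property changes.
        PropertyChangedEventManager.AddListener(publisher, _listener, string.Empty);
    }

    private void OnPublisherPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        // React to the change...
    }
}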

 

Recently I’ve been involved with creating a general purpose library (to be used at my company) that includes, among other things, a nice API for creating and manipulating Task Dialogs from .NET-land. As with most things involving unmanaged code, implementing it not only yields the benefit of being able to use it; you also tend to learn some interesting things in the process.

Unable to find an entry point named ‘TaskDialogIndirect’…

It won’t take long until you run into this lovely issue. It shouldn’t be much of a surprise to anyone familiar with such matters, but, in case you didn’t know, the TaskDialog API is only available in version 6.0 and up of the Common Controls library (post-XP, of course…I think there’s also a v6 on XP, but it is a different version nonetheless). If you’re running Windows Vista or 7, then you have this library installed; however, it is the older version 5.8 of the Common Controls library that gets loaded by default.

We have multiple versions of comctl32.dll because of that library’s participation in the side-by-side assembly system, which was Microsoft’s answer to DLL Hell and somewhat of a precursor to .NET’s Global Assembly Cache. If we intend on having our .NET application load the correct version of comctl32.dll, then we’re going to have to participate in that system, which requires us to provide a manifest dictating the version we need.

Providing a manifest is simple if your end product is a .NET executable: you simply add an application manifest, which will isolate the application and load the appropriate DLL versions at run-time. It is not as straightforward if you’re writing a .NET library, however, since simply embedding a manifest into the DLL exerts no influence at run-time on an executable referencing it. Specifically, the type of manifest we are talking about here is an application manifest (as opposed to an assembly manifest). Visual Studio offers support in the C# project properties designer for embedding application manifests into application projects, but not libraries.

Activation Contexts

Requiring that all applications referencing your library also include their own correctly made application manifest in order for a specific subset of your library’s features to work is an exceptionally unrealistic requirement. If we cannot automatically affect the run-time so that the proper unmanaged DLLs are targeted, then we are providing functionality that is not guaranteed to work; functionality like that cannot exist in any proper library, and would have to be removed. Luckily, we do have the power to influence the run-time from the confines of a .NET DLL, and the way we do it is by making use of activation contexts.

By activating an activation context, we are essentially telling Windows to redirect the running application to a particular DLL version according to a manifest of our choice. In fact, activation contexts are the very engines that power the side-by-side assembly system. When an application needs to make a call to a library or create a process or other resource, Windows checks that application for the presence of an application manifest, using the information found in that manifest in order to populate an activation context to use to guide that call to the correct destination.

Normally, activation contexts are managed entirely by the Windows system; however, in our case, we’re going to be rudely intruding into that system so that we can perform some actions not taken by the Windows system itself. Specifically, prior to our P/Invoke call to TaskDialogIndirect, we’re going to create and activate an activation context that will redirect that call to the proper version of comctl32.dll. There is official precedent for this activity: the Windows Forms library does exactly what I’ve just described when you make a call to Application.EnableVisualStyles.

Microsoft provides some documentation on how we can do that from a managed environment in the form of a KB article. I’m not going to provide a complete walkthrough on the process here, as the KB article covers most of it, but I do want to address one of the limitations of the approach offered by that article. In particular, I’m referring to how the approach offered by the KB article requires that an actual manifest file be present on disk. Relying on an external non-binary file for something that shouldn’t be configurable by an end-user anyway is clunky and not desirable.

Luckily, we can do better than that. Instead of using a physical manifest file as a source for the activation context, we can create an activation context using a resource embedded in our DLL instead. And we can do all of this simply by configuring the ACTCTX structure populated during the process differently.

Activation Context Creation Using a PE Image Source

The application manifest can typically be found in the .rsrc section of a PE image, where it exists as a resource like any other. With Visual Studio, you can add an application manifest to your project (let’s call ours Library.manifest), and then enable the embedding of that manifest into the project’s output through an option located in the project properties designer. No such option exists for DLL projects, however, but this doesn’t matter, since we can get Visual Studio to do what we want anyway. Open up your *.csproj file with the XML editor, and add the following to the first <PropertyGroup> in the file:

<ApplicationManifest>Library.manifest</ApplicationManifest>

This will result in your manifest file being embedded into the DLL compiled from your project. You will see that the manifest is embedded, not as a .NET resource, but as a true native resource of the RT_MANIFEST type. The resource ID of the manifest should be 2. This is the standard resource ID for all manifests found in DLLs. In fact, whenever a native DLL containing an embedded manifest resource with an ID of 2 is dynamically loaded at run-time, the operating system loader automatically creates an activation context using that manifest. It does this so it can then proceed to load all dependencies of that DLL without issue.

This is not going to happen for our DLL, however, since ours is a managed DLL, which gets loaded a bit differently. Regardless, we still need our manifest embedded in our DLL so that it can be sourced appropriately by the activation context we are going to be creating.

Following this, we need to change some of the code you may have picked up from that KB article. Specifically, we need to populate the ACTCTX structure that gets provided to CreateActCtx a bit differently.

  1. The KB article sets lpAssemblyDirectory to the directory containing the current assembly. Although the KB article throws terms like “security hole” at us in the nearby code comments, we’re actually going to remove this assignment and leave lpAssemblyDirectory unset. I don’t believe this is documented anywhere, but the Activation Context API appears to ignore lpAssemblyDirectory when the manifest is loaded the way we are going to be loading it.
  2. Next, the KB article has us setting the dwFlags field to ACTCTX_FLAG_ASSEMBLY_DIRECTORY_VALID. We actually want to set it to ACTCTX_FLAG_RESOURCE_NAME_VALID instead (which is 8, by the way).
  3. The provided example sets the lpSource field to the path of the physical manifest file. Since we don’t have one of those, and because our manifest file is embedded in our DLL, we actually want to set lpSource to the path to our DLL file.
  4. Finally, we need to tell Windows what the resource ID of our manifest is, and we provide that information by setting the lpResourceName member using MAKEINTRESOURCE.

The last of the steps listed above requires us to set lpResourceName using a value that we would derive from MAKEINTRESOURCE. While that’s not a very tall order when we’re developing in C++, how do we do this from a managed, C# environment? The simplest way is to change the type of this field in our ACTCTX structure definition.

Looking at the KB article, the sample ACTCTX structure they provide looks like the following:

private struct ACTCTX
{
   public int       cbSize;
   public uint      dwFlags;
   public string    lpSource;
   public ushort    wProcessorArchitecture;
   public ushort    wLangId;
   public string    lpAssemblyDirectory;
   public string    lpResourceName;
   public string    lpApplicationName;
}

Change the type of lpResourceName to an IntPtr (!!); yes, an IntPtr, so that we have the following:

public IntPtr    lpResourceName;

And then, remembering that the ID for our manifest resource is 2, we can then populate our structure like so (substituting “ClassName” for the name of the class where this is being done of course):

ACTCTX context = new ACTCTX
                     {
                         cbSize = Marshal.SizeOf(typeof (ACTCTX)),
                         dwFlags = ACTCTX_FLAG_RESOURCE_NAME_VALID,
                         lpSource = typeof(ClassName).Assembly.Location,
                         lpResourceName = (IntPtr) 0x2
                     };

Providing this structure to CreateActCtx will then create a new activation context based on our embedded manifest. Well, almost. Did you happen to create your application manifest using Visual Studio’s application manifest file template?

Fix Your Application Manifest

I found CreateActCtx to be extremely touchy when it comes to the actual form of the application manifest itself, and the manifest generated by Visual Studio turned out to be utterly incompatible with it. Attempting to create a new activation context using a manifest like that would result in CreateActCtx returning an error code.

The manifest generated by Visual Studio contains a ton of XML namespace attributes which may or may not make CreateActCtx puke. I say “may or may not” because I’m not sure if it was these namespaces or another part of the stock manifest content that caused it to fail. But, that doesn’t really matter. Here’s a cleaned up manifest file that is guaranteed to work for you:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <noInherit/>
  <description>
    Your manifest
  </description>
  <assemblyIdentity version="0.0.0.1"
                    name="Your.Library"
                    type="win32"
                    processorArchitecture="*"
                    />
  <dependency optional="yes">
    <dependentAssembly>
      <assemblyIdentity
       type="win32"
       name="Microsoft.Windows.Common-Controls"
       version="6.0.1.0"
       publicKeyToken="6595b64144ccf1df"
       language="*"
       processorArchitecture="*"/>
    </dependentAssembly>
  </dependency>
</assembly>

You can add other sections, such as a <compatibility> section if you’d like, and they should work fine. Also, the <description> element is not required for this to work (hell, I’m not even sure if that’s a standard manifest element…all I know is that I’ve seen it in a number of in-use manifests originating from Microsoft). After you do all of this, it should start to work with CreateActCtx.
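
For reference, here is a rough sketch of the kind of disposable wrapper mentioned further below, which creates and activates the context for the duration of a using block (the class name and error handling are my own; the kernel32.dll functions are the real ones, and the ACTCTX structure is assumed to be the modified one shown above, visible to this class):

using System;
using System.Runtime.InteropServices;

internal sealed class ActivationContextScope : IDisposable
{
    private const uint ACTCTX_FLAG_RESOURCE_NAME_VALID = 0x008;

    private readonly IntPtr _context;
    private IntPtr _cookie;

    public ActivationContextScope()
    {
        var actCtx = new ACTCTX
                         {
                             cbSize = Marshal.SizeOf(typeof (ACTCTX)),
                             dwFlags = ACTCTX_FLAG_RESOURCE_NAME_VALID,
                             lpSource = typeof (ActivationContextScope).Assembly.Location,
                             lpResourceName = (IntPtr) 0x2
                         };

        _context = CreateActCtx(ref actCtx);

        if (_context == (IntPtr) (-1))
            throw new InvalidOperationException("Failed to create the activation context.");

        if (!ActivateActCtx(_context, out _cookie))
            throw new InvalidOperationException("Failed to activate the activation context.");
    }

    public void Dispose()
    {
        DeactivateActCtx(0, _cookie);
        ReleaseActCtx(_context);
    }

    [DllImport("kernel32.dll")]
    private static extern IntPtr CreateActCtx(ref ACTCTX pActCtx);

    [DllImport("kernel32.dll")]
    private static extern bool ActivateActCtx(IntPtr hActCtx, out IntPtr lpCookie);

    [DllImport("kernel32.dll")]
    private static extern bool DeactivateActCtx(int dwFlags, IntPtr ulCookie);

    [DllImport("kernel32.dll")]
    private static extern void ReleaseActCtx(IntPtr hActCtx);
}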

But hey…this is where the fun begins. You shouldn’t get the false impression from all of this that you can now go willy-nilly and be able to act foolishly without suffering consequences. I heavily tested my code and came across a few “edge” cases that you need to be very careful about, as you can easily cause issues when acting within one of these self-made activation contexts.

Who Doesn’t Love SEHExceptions?!

You know that when you’re getting structured exception handling errors in managed code, you’re doing something very special. Now, before I go on, let me state that I tested my use of activation contexts very heavily and found them to be very stable. However, a factor allowing me to come to that conclusion is the fact that the code executing within the activation context is itself very stable. If your code is anything less than that, or if it is operating in an extreme environment, you probably want to exercise caution.

Through my testing, I did identify an issue that I do not altogether understand, due to the fact that the problem was occurring deep in unmanaged land and I couldn’t find much material relevant to it.

The problem I encountered was an interesting one. As we know, we’re dealing with Task Dialogs here. While the Task Dialog is open, the thread that opened it will be blocked until it is closed. Well, not entirely: while blocked, that same thread is going to handle any callbacks fired by the Task Dialog. Because we require an activation context to open the Task Dialog, the call to open it is done within a using block for the disposable class which handles the activation context creation and activation. When we hit the finally block of that using block, it’s going to make a call to DeactivateActCtx.

I found that if I threw an exception while handling a callback from the Task Dialog, an SEHException would get thrown by DeactivateActCtx during the disposal of the class that created the activation context. The activation context would essentially become impossible to deactivate, indicating perhaps that the activation context stack had somehow become corrupt. The error code for the SEHException was 0x80004005, or: External component has thrown an exception. Throwing an exception within the activation context, but not during the handling of a callback, would cause no problems when deactivating the context.

So…if anyone else has this issue, I would advise making sure the dialog is closed before throwing the exception. My Task Dialog class has a Closed event, so I would simply schedule the exception to be thrown in an event handler for that event, and then proceed to close the dialog from the callback. The context would then deactivate with no issue, and the exception could be thrown right in the developer’s face.

 

With the release of .NET 4.0, Microsoft made some large scale changes to the framework’s security model. Concluding that the legacy model was little understood by both developers and IT administrators, Microsoft decided to do what is normally a very prudent thing: they simplified it.

In previous versions of the .NET Framework, the security model was tangible in the form of Code Access Security policies. Each policy was a set of expressions which used information associated with assemblies in order to determine which code group said assemblies belonged to. Each code group would contain a permission set, which would then be referred to by code access demands made in response to attempts to perform privileged actions.

.NET 4.0 got rid of all that nonsense with the introduction of level 2 transparent code (although technically, CAS still exists; what’s been eliminated is CAS policy). Basically, under this system, machine-wide security policy is off by default, and all desktop applications run as full trust. Code is either transparent, safe-critical, or critical; assemblies lacking annotation will have all their types and members treated as being critical if the assembly is fully trusted, or transparent if the assembly is partially trusted only.

Instead of worrying about having to make a bunch of strange security-related assertions and demands, all one needs to do to run under the new model is annotate their code appropriately using the three main attributes: SecurityTransparent, SecuritySafeCritical, and SecurityCritical. Sounds good; simple is better. But don’t use this new system. It isn’t ready for the real world yet.
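
For context, here is a minimal (and entirely hypothetical) illustration of what those annotations look like in a level 2 assembly that exposes a native call to transparent callers:

using System.Runtime.InteropServices;
using System.Security;

[assembly: AllowPartiallyTrustedCallers] // unannotated code in this assembly defaults to transparent

public static class NativeHelpers
{
    // Safe-critical: callable from transparent code, yet permitted to reach the critical P/Invoke below.
    [SecuritySafeCritical]
    public static void BeepSafely()
    {
        Beep(440, 200);
    }

    // Critical: performs a native call, so transparent code may not call it directly.
    [SecurityCritical]
    [DllImport("kernel32.dll")]
    private static extern bool Beep(uint dwFreq, uint dwDuration);
}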

This article is not meant to criticize the substance of the new security model. I think the model is a huge improvement, but there are a few specific issues that cause me to view it as a mess as it now stands. Before we get into one of those issues, let’s look at reality first.

Most of the .NET Framework Doesn’t Use It

One of the ways I gauge the viability of new technologies from Microsoft is by trying to get a handle on whether or not Microsoft itself uses them. This approach has never failed me, and has saved me from countless time sinks that would have affected endeavors both professional and personal. So, in order to check out the immediate viability of the new level 2 transparency model, let’s look at the primary .NET Framework assemblies and see whether or not they use it.

We can tell whether or not a particular assembly is using level 2 transparency by taking a look at the assembly’s metadata. If the assembly is operating under the level 2 transparency model, it will contain a SecurityRulesAttribute with a SecurityRuleSet.Level2 value passed to it. However, it is important to remember that level 2 transparency is the default, so if the attribute is not declared, we should assume the assembly to be level 2 (even though it is against the “guidelines” not to declare this attribute).

If the assembly is operating under the level 1 transparency model, the SecurityRulesAttribute is declared with a SecurityRuleSet.Level1 value passed to it instead.

Let’s then see what we come up with from looking at some of the core .NET Framework assemblies. In order to do this, I wrote a program which enumerated every single assembly installed to the standard .NET Framework 4.0 installation directory, checking the SecurityRulesAttribute present on each one.
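
The program itself is not reproduced here, but a rough sketch of the approach (framework directory path assumed, error handling kept minimal) looks like this:

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Security;

internal static class TransparencySurvey
{
    private static void Main()
    {
        const string frameworkDirectory = @"C:\Windows\Microsoft.NET\Framework\v4.0.30319";

        foreach (string path in Directory.EnumerateFiles(frameworkDirectory, "*.dll"))
        {
            try
            {
                Assembly assembly = Assembly.LoadFrom(path);

                var rules = (SecurityRulesAttribute) assembly
                    .GetCustomAttributes(typeof (SecurityRulesAttribute), false)
                    .FirstOrDefault();

                Console.WriteLine("{0}: {1}",
                                  Path.GetFileName(path),
                                  rules != null ? rules.RuleSet.ToString() : "None declared (defaults to Level2)");
            }
            catch (BadImageFormatException)
            {
                // Not a managed assembly; skip it.
            }
            catch (FileLoadException)
            {
                // Could not be loaded; skip it.
            }
        }
    }
}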

The results are interesting:

  • Total Level 2
    77 assemblies
  • Total Level 2 with Obsolete Security Actions
    (this means the assembly included a SecurityPermissionAttribute with a SecurityAction value deemed obsolete under the new model)
    45 assemblies
  • Total Level 2 Lacking Assembly-wide Notation
    (this means that no  SecurityTransparentAttribute, SecurityCriticalAttribute, or AllowPartiallyTrustedCallersAttribute was found on the assembly metadata)
    53 assemblies
  • Total Level 1 
    55 assemblies

The majority of the level 2 assemblies were completely lacking notation (i.e. no SecurityRulesAttribute), which goes against Microsoft’s own guidelines. As we can see, there are a number of level 2 assemblies; however, most of them are insignificant (mscorlib.dll and System.Security.dll being notable exceptions), whereas the important .NET assemblies are what constitute the level 1 group.

Here’s the list of Level 1 assemblies:

  • System.AddIn
  • System.Configuration
  • System.Core
  • System.Data.DataSetExtensions
  • System.Data
  • System.Data.Entity.Design
  • System.Data.Entity
  • System.Data.Linq
  • System.Data.OracleClient
  • System.Data.Services.Client
  • System.Data.Services
  • System.Data.SqlXml
  • System.Deployment
  • System.DirectoryServices.AccountManagement
  • System.DirectoryServices
  • System.DirectoryServices.Protocols
  • System
  • System.Drawing
  • System.EnterpriseServices
  • System.IdentityModel
  • System.Net
  • System.Runtime.Serialization
  • System.ServiceModel.Activation
  • System.ServiceModel
  • System.ServiceModel.Web
  • System.Transactions
  • System.Web.ApplicationServices
  • System.Web
  • System.Web.Entity
  • System.Web.Mobile
  • System.Web.Services
  • System.Windows.Forms.DataVisualization
  • System.Windows.Forms
  • System.WorkflowServices
  • System.Xml
  • System.Xml.Linq
  • PresentationCore
  • PresentationFramework.Aero
  • PresentationFramework.Classic
  • PresentationFramework
  • PresentationFramework.Luna
  • PresentationFramework.Royale
  • PresentationUI
  • System.Printing
  • System.Windows.Presentation
  • UIAutomationProvider
  • UIAutomationTypes
  • WindowsBase

These are some of the most important assemblies in the BCL, and they’re all using the legacy security model.

But you know, all of this is just an observation; it doesn’t prove anything on its own. Indeed, I was just recounting reality. What I just talked about isn’t even the primary reason not to use the new model. What caused my jaw to drop was when I found out about the issues Visual Studio’s code coverage has with it.

Visual Studio Code Coverage Doesn’t Work With It

…if your assembly is annotated, at least. Deal breaker for me.

By “annotated”, I mean your assembly is set to be either SecurityCritical, SecurityTransparent, or AllowPartiallyTrustedCallers.

If you do any of those, and then annotate the rest of your code properly using FxCop and SecAnnotate, you will get the following error if you run any unit tests with Visual Studio’s built-in code coverage:

System.TypeInitializationException: The type initializer for 'xxx' threw an exception. --->
System.MethodAccessException: Attempt by security transparent method 'Microsoft.VisualStudio.Coverage.Init_[...].Register()' to call native code through method 'Microsoft.VisualStudio.Coverage.Init_[...].VSCoverRegisterAssembly(UInt32[], System.String)' failed.
Methods must be security critical or security safe-critical to call native code.

The reason this happens is that, during instrumentation, a bunch of unattributed methods get inserted that make P/Invoke calls. I find it a little ridiculous that Microsoft’s own tools don’t support the new model, especially a tool that I feel is rather critical to the development process. Microsoft clearly does not use its own tools for code coverage analysis, at least not with the assemblies that are level 2.

So, my advice if you are starting a new library or whatnot: if security is a priority for you (e.g. if you want to allow partially trusted callers), then use the legacy model. If it isn’t, then you can declare level 2 compatibility in your assembly metadata, but don’t add any other level 2-specific assembly-wide attributes until the model is better supported.
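
Declaring level 2 explicitly (without adding any of the other assembly-wide security annotations) is a one-line affair in your AssemblyInfo.cs:

using System.Security;

[assembly: SecurityRules(SecurityRuleSet.Level2)]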
