Matt Weber

I'm the founder of Bad Echo LLC, which offers consulting services to clients who need an expert in C#, WPF, Outlook, and other advanced .NET-related areas. I enjoy well-designed code, independent thought, and the application of rationality in general.


Resource mailboxes play host to a number of room-resource-specific settings and policies, such as the window (in days) in which you can “book” the resource, along with a many-layered system of policies and permissions that affect both who may use the resource and what their experience is in doing so. Your application may need to read and synchronize with these settings.

Retrieving these settings using PowerShell is straightforward; that, however, doesn’t stop the topic from being covered incessantly by others on the Internet. Although we can call PowerShell cmdlets from code, the only real solution here is to retrieve the settings using the one and only proper and direct interface to Exchange, i.e. MAPI. To do that, however, it helps to know how and where these settings actually get stored in Exchange.

This article examines how these room-resource-specific settings are actually structured and stored in Exchange, and how you can go about retrieving them. The substance of this article is the result of research (necessitated by the lack of official documentation on the subject) into the inner workings of Exchange with regard to how policies affecting resource mailboxes are organized.

1. Meet the Resource Settings

Before we get into the nitty-gritty, it would probably help to briefly go over exactly what I mean when I refer to “resource mailbox settings”. Using the Exchange Management Console, we can easily view them by navigating to Recipient Configuration -> Mailbox, right-clicking on a resource mailbox, and clicking on Properties. Many tabs will be presented to you; however, for the purposes of this article, we will only be concerning ourselves with five of them.

The first is the Resource General tab, which exposes capacity and custom property settings for the resource mailbox:

Resource General via the EMC


Next, we have the prominent Resource Policy tab, which features the most numerous and interesting settings:

Resource Policy via the EMC


Continuing on, we have the Resource Information tab, which is less about “information” and more about binary settings in relation to the resource mailbox:

Resource Information via the EMC


Ever persistent, we move next to the Resource In-Policy Requests tab, which itself does well to explain what it is that it does:

Resource In-Policy Requests via the EMC


Our journey coming to an end, we arrive at the last stop, the Resource Out-Of-Policy Requests tab:

Resource Out-Of-Policy Requests via the EMC


The settings shown in the preceding images are, collectively, what I will be referring to as the resource settings of a resource mailbox. It is our intention to learn how we might exploit these settings during interactions with an Exchange server that operate at a much more basic level than those that take place through the Exchange Management Console.

Some very large companies out there use resource settings to control access to and use of various company assets; if your product operates in a space that leverages these assets in a similar manner, it may prove desirable to be able to synchronize with these settings so that their methods of administration need not change much.

First, we must determine whether what we wish for is indeed possible. As we can see above, we can view the settings via the EMC. We also know that we can manipulate these same resource settings from a PowerShell interface.

An example that exists outside the Exchange family of software is Microsoft’s own Forefront Identity Manager, which mirrors the interface and settings exposed by the EMC in a similar-looking way.

Taking all of that into account, then, there must be a way for us to get at those values as well.

As an aside: the purpose of this article is not to cover what each and every resource setting does, as there are a billion articles online that do that. Therefore, I’m assuming that the reader is either intimately familiar with the workings of each setting, or that they at least find the settings to be obvious in their purpose.

2. Where the Resource Settings Call Home

Although the resource settings shown above all seem to be associated with a resource mailbox in some way, we still really have no idea where they actually get stored. Certainly none of them can be categorized as the typical types of data one expects to encounter when aimlessly trawling through a MAPI store, folder, etc.

If it is our intent to be able to retrieve a resource mailbox’s resource settings, then one of the central questions to be answered is where these resource settings are actually stored in Exchange, as well as what the nature and structure of that storage is.

So, where do the resource settings for a particular resource mailbox come from? Well, it may (or may not, depending on your level of jadedness) surprise you to know that these resource settings do not come from any single location, but rather from a couple of sources: namely, the address entry associated with the resource mailbox and the calendar folder contained by it.

Knowing this, let’s group all of the resource settings known to us into two separate enumerations based on each setting’s origination.

2.1 Address Entry Based Settings
  1. Resource capacity
  2. Resource custom properties
  3. Resource type
  4. Resource delegates (although information may be incomplete…see section on Address Entry Based Settings for more information)
2.2 Calendar Folder Based Settings
  1. Enable Resource Booking Attendant
  2. Delete attachments
  3. Delete comments
  4. Delete subject
  5. Delete non-calendar items
  6. Add organizer’s name to subject
  7. Remove private flag on accepted meetings
  8. Send organizer info upon declining of request due to conflicts
  9. Add additional text to response
  10. Additional text
  11. Mark pending requests as tentative
  12. Allow conflicts
  13. Allow recurring meetings
  14. Schedule only during working hours
  15. Enforce scheduling horizon (booking window)
  16. Booking window
  17. Maximum duration
  18. Maximum number of conflicts
  19. Allowed conflict percentage
  20. Forward meeting requests to delegates
  21. In-policy requests from all users automatically approved
  22. List of users whose in-policy requests are automatically approved
  23. In-policy requests from all users can be approved
  24. List of users whose in-policy requests can be approved
  25. Out-of-policy requests from all users can be approved
  26. List of users whose out-of-policy requests can be approved
  27. Resource delegates (most complete set of information, however more work is required in order to get at the data…see the section on Calendar Folder Based Settings for more information)

And there we have a complete listing of all the public resource settings as well as where we might find them. Cool, but simply knowing where they are isn’t going to cut it for us. We need to know exactly where and how each setting is stored in its respective container if we wish to be able to access their values.

The address entry based settings are simply stored as separate, run-of-the-mill, MAPI properties (albeit “non-standard” MAPI properties) directly on the address entry. The calendar folder based settings are a whole other story.

Because it is the simpler of the two, let’s get into the details of the various address entry based settings first.

3. Address Entry Based Settings

Although it would seem to make more sense if all the resource settings were stored in a single location, in fact it is their use which dictates where they can be found. The presence of a resource’s capacity and custom properties settings on its address entry seems to occur merely to accommodate the Room Finder feature one can find in Outlook.

The set of criteria one may use in order to refine resource searches is a direct reflection of what resource settings are exposed on each of the resource mailbox’s associated address entries. This makes sense, because when we’re using the Room Finder, we are searching across address entries in an address list, not message stores.

3.1 Resource Capacity

The capacity of a room resource can be found by accessing the PR_EMS_AB_ROOM_CAPACITY MAPI property located on its associated address entry.

This property is of the type PT_LONG, its canonical name is PidTagAddressBookRoomCapacity, and it has a property tag of 0x08070003.
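As an illustration, here is how the capacity might be read using Redemption. This is a sketch only: the GetRoomCapacity helper is my own invention, and it assumes you have already resolved an RDOAddressEntry for the resource mailbox.

```csharp
// Sketch only: assumes the Redemption interop assembly is referenced and that
// an RDOAddressEntry for the resource mailbox has already been obtained.
const int PR_EMS_AB_ROOM_CAPACITY = 0x08070003; // PT_LONG

static int? GetRoomCapacity(Redemption.RDOAddressEntry entry)
{
    // Fields is an indexed property keyed by property tag; a missing
    // property typically comes back as null.
    object value = entry.Fields[PR_EMS_AB_ROOM_CAPACITY];

    return value is int capacity ? (int?) capacity : null;
}
```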

3.2 Resource Custom Properties

This fun little creature is a bit more complicated than the resource’s capacity property, and it can actually be found in a couple of places on the address entry, with one of those places being more of a reference to the resource’s custom properties than the actual entity that defines them.

The MAPI property which defines the resource’s custom properties has neither a name nor a canonical name. It certainly has a property tag, however, and that property tag is 0x0806101F.

This property is of the type PT_MV_UNICODE, which basically means that we are dealing with an array of strings, with the property being “multivalued” and all.

This array is composed of an entry for each custom property, plus an additional entry at the end of the array reflecting the name of the schema under which the preceding custom properties fall. Because the name of the schema that a custom property falls under must match the type of resource it is meant to be assigned to, the schema name should always be Room when the resource mailbox is of the room type.

For example, if we use the resource mailbox pictured in the screenshots shown in the previous section, we would get an array of the following composition if we were to access its custom properties:
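The original screenshot is not reproduced here, so as a stand-in: a hypothetical room resource with two custom properties (the property names below are made up purely for illustration) would yield an array of the following composition, with the schema name appearing as the final entry:

```
{ "Whiteboard", "Projector", "Room" }
```

The first two entries are the custom properties themselves; the trailing "Room" entry is the schema name matching the resource's type.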

3.3 Resource Type

An important little fact in regards to a resource mailbox is its type. Is it a piece of equipment, or a room!?! Although you can typically rely on room resources being located underneath the All Rooms address list and equipment resources being located underneath the All Equipment address list (there is one of those…I think?), that’s taking way too big of a leap of faith to be considered a sane approach.

The MAPI property housing the resource’s type, much like the property housing its custom properties, has neither a name nor a canonical name. Its property tag, however, is known to us, and it is 0x0808101F.

It too is of the PT_MV_UNICODE type, although I can’t for the life of me figure out why. It should always contain only a single entry which uses a format of ResourceType:[ResourceTypeName] (drop the brackets, obviously!).

In the case of a room resource mailbox, the value of this property should always be ResourceType:Room.
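Putting those two facts together, a Redemption-based sketch for determining a resource's type might look like the following; the helper and constant names are mine, not an official API.

```csharp
// Sketch only: assumes the Redemption interop assembly is referenced.
const int ResourceTypePropertyTag = 0x0808101F; // PT_MV_UNICODE

static string GetResourceType(Redemption.RDOAddressEntry entry)
{
    // The property is multivalued, so an array comes back, even though only
    // a single "ResourceType:[ResourceTypeName]" entry is expected.
    object[] values = entry.Fields[ResourceTypePropertyTag] as object[];
    string raw = values != null && values.Length > 0 ? values[0] as string : null;

    const string prefix = "ResourceType:";

    return raw != null && raw.StartsWith(prefix)
        ? raw.Substring(prefix.Length) // e.g. "Room"
        : null;
}
```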

3.4 Resource Delegates

The users assigned as resource delegates to the resource mailbox can indeed be found in the address entry associated with the resource mailbox; however, this is not guaranteed to be a complete listing of those users.

The EMC, for example, most assuredly does not get the list of users it uses to populate the Resource Delegate list view from the address entry. However, I bring it up because it is a very simple way to get at those delegates, and it has built-in support when using MAPI client interfaces such as Redemption.

For a better way to get at the list of resource delegates, see the Calendar Folder Based Settings section coming up in a little bit.

The delegates for a resource can be found by accessing the PR_EMS_AB_PUBLIC_DELEGATES property located on the resource’s associated address entry. Note that it says PUBLIC in the name — there is a good reason for that, as some delegates are indeed “private”.

It is of the type PtypEmbeddedTable, its canonical name is PidTagAddressBookPublicDelegates, and its property tag is 0x8015000D.

If you are using Redemption, you will find this property in the Delegates property exposed by the relevant RDOAddressEntry object instance.

4. Calendar Folder Based Settings

Now that we’ve gone over the relatively few address entry based settings that exist, it is time to move on to the fun stuff.

The majority of the resource settings can actually be found inside the resource mailbox’s Calendar folder. They do not, however, exist as properties on the calendar MAPI folder itself; rather, they can be found in a special message located in the calendar’s folder-associated information table.

The particular message which houses the resource settings is one that can be found in the associated contents table of any calendar folder, regardless of the type of message store that contains it. This special message can be identified by its message class, which is IPM.Configuration.Calendar. Only one will exist in a folder’s associated contents table, making the use of the message class as a filtering agent a valid action.

That should be all the information you need in order to find this message. If you happen to be using Redemption as your MAPI client, however, you can get at the messages stored in the calendar’s associated contents table by accessing the HiddenItems property exposed by the relevant RDOFolder object instance.
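A sketch of locating this message with Redemption, using the message class as the filtering agent (FindCalendarConfiguration is my own name for the helper):

```csharp
// Sketch only: assumes the Redemption interop assembly is referenced and that
// the RDOFolder passed in is the resource mailbox's Calendar folder.
static Redemption.RDOMail FindCalendarConfiguration(Redemption.RDOFolder calendar)
{
    foreach (Redemption.RDOMail item in calendar.HiddenItems)
    {
        // Only one message bearing this class should exist in the folder's
        // associated contents table.
        if (item.MessageClass == "IPM.Configuration.Calendar")
            return item;
    }

    return null;
}
```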

Looking at the IPM.Configuration.Calendar message using MFCMAPI with the message of interest highlighted.


Once you find this message, you’ll see that it is like any other message, in that it has a plethora of MAPI properties. So what are the properties where we can find the values for our resource settings?

Well, if all you do is simply look over all the available properties by their name, you’re going to end up empty handed.

The resource settings, both fortunately and unfortunately, happen to all be stored in a single MAPI property. You may feel that this approach to the storage of the resource settings is a bit strange and out of place in comparison to how data is typically organized in a MAPI store, and I’d have to agree with you. But this is reality, and idle dreaming won’t get us very far in what we wish to do.

The resource settings are stored in a dictionary-like structure which is defined in XML markup. The schema that it uses is…very interesting. I normally like interesting things, but all that serves to do here is make the process of deserializing the data more problematic. We’ll talk a bit more about that in a second; let’s discuss where this data lives first.

You can find the resource settings in the calendar configuration message by accessing the MAPI property named PR_ROAMING_DICTIONARY.

This is a PT_BINARY typed property, its canonical name is PidTagRoamingDictionary, and its property tag is 0x7C070102.

If we consult the Exchange Server Protocol documentation, we learn that this particular property contains an entity known as a dictionarystream. This is basically a binary stream which contains an XML document (UTF-8 encoded). The schema that is used for the XML dictionary is as follows:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema targetNamespace="dictionary.xsd"
           elementFormDefault="qualified"
           xmlns="dictionary.xsd"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="UserConfiguration">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Info">
          <xs:complexType>
            <xs:sequence />
            <xs:attribute name="version" type="VersionString" />
          </xs:complexType>
        </xs:element>
        <xs:element name="Data">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="e" type="EntryType"
                          minOccurs="0" maxOccurs="unbounded" />
            </xs:sequence>
          </xs:complexType>
          <xs:unique name="uniqueKey">
            <xs:selector xpath="e" />
            <xs:field xpath="@k" />
          </xs:unique>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:simpleType name="VersionString">
    <xs:restriction base="xs:string">
      <xs:pattern value=".+\.\d+" />
    </xs:restriction>
  </xs:simpleType>
  <xs:complexType name="EntryType">
    <xs:sequence />
    <xs:attribute name="k" type="ValueString" />
    <xs:attribute name="v" type="ValueString" />
  </xs:complexType>
  <xs:simpleType name="ValueString">
    <xs:restriction base="xs:string">
      <xs:pattern value="\d+\-.*" />
    </xs:restriction>
  </xs:simpleType>
</xs:schema>
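Once the raw bytes of the PR_ROAMING_DICTIONARY property are in hand, decoding the dictionarystream is ordinary .NET fare. A minimal sketch (the class name is my own):

```csharp
using System.IO;
using System.Text;
using System.Xml.Linq;

public static class DictionaryStreamReader
{
    // Decodes a dictionarystream: a binary stream containing a UTF-8
    // encoded XML document.
    public static XDocument Parse(byte[] dictionaryStream)
    {
        // StreamReader transparently handles a UTF-8 byte order mark,
        // should one be present at the start of the stream.
        using (var reader = new StreamReader(new MemoryStream(dictionaryStream), Encoding.UTF8))
        {
            return XDocument.Load(reader);
        }
    }
}
```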

The place where our resource settings live is within the <Data> element, which is where all of the dictionary name-value pairs go. Now, these do look a bit different from what we’re used to from a .NET perspective, but, nevertheless, there is order to be found amongst the madness here.

Each name-value pair consists of an <e> element with two attributes: k and v. The k attribute basically acts as the key of the entry; thus, it is where you can find the name of the particular setting that the name-value pair represents. The v attribute is then obviously the value portion. That is simple enough; the complexities arrive, however, in how the value is actually expressed with the v attribute.

The value assigned to the v attribute is actually composed of several parts, each of which is delimited from the next by a hyphen. This is known as the ValueString, and it takes on the following shape:

<data type>-<string encoded value>

…at least according to the documentation. In reality, it is typically more complicated than this.

Also according to the documentation, the data type used must be one of three values: 3 (Boolean), 9 (32-bit signed integer), or 18 (string). And once again, the documentation is not exactly on par with reality, as additional data types are indeed used.
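A minimal sketch of converting the three documented data types into CLR values follows; the class and method names are my own, and the undocumented collection type the article gets to shortly is deliberately left out.

```csharp
using System;
using System.Globalization;

public static class ValueStringParser
{
    // Converts a documented ValueString (data types 3, 9, and 18) into a
    // CLR value. The data type and value are delimited by the first hyphen.
    public static object ParseSimple(string valueString)
    {
        int delimiter = valueString.IndexOf('-');

        string dataType = valueString.Substring(0, delimiter);
        string encoded = valueString.Substring(delimiter + 1);

        switch (dataType)
        {
            case "3": return bool.Parse(encoded);
            case "9": return int.Parse(encoded, CultureInfo.InvariantCulture);
            case "18": return encoded;
            default:
                throw new NotSupportedException($"Undocumented data type '{dataType}'.");
        }
    }
}
```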

In order to cement our understanding of these calendar based resource settings, let’s look at a real dictionarystream coming from a resource mailbox’s IPM.Configuration.Calendar message. Note that this example does not necessarily reflect the values of the settings as per the screenshots shown at the beginning of this article. There is an important reason for this: if a setting is set at its default value, then it will not appear in the dictionary. Here it is:

<?xml version="1.0" encoding="utf-8"?>
<UserConfiguration xmlns="dictionary.xsd">
  <Info version="Exchange.12" />
  <Data>
    <e k="18-ForwardRequestsToDelegates" v="3-False" />
    <e k="18-AddOrganizerToSubject" v="3-False" />
    <e k="18-ConflictPercentageAllowed" v="9-12" />
    <e k="18-DeleteComments" v="3-False" />
    <e k="18-DeleteSubject" v="3-False" />
    <e k="18-AutomateProcessing" v="9-2" />
    <e k="18-MaximumDurationInMinutes" v="9-1441" />
    <e k="18-AllowConflicts" v="3-False" />
    <e k="18-RequestInPolicy" v="1-18-1-36-6f33f1c8-ba68-4852-8ba3-a727920a5f39" />
    <e k="18-RemovePrivateProperty" v="3-False" />
    <e k="18-ScheduleOnlyDuringWorkHours" v="3-False" />
    <e k="18-EnforceSchedulingHorizon" v="3-False" />
    <e k="18-TentativePendingApproval" v="3-False" />
    <e k="18-OrganizerInfo" v="3-False" />
    <e k="18-AllRequestOutOfPolicy" v="3-False" />
    <e k="18-BookInPolicy" v="1-18-2-36-b867b454-75c4-480f-aba0-47bd5b11d5b7-36-7461ec96-5929-4905-8f73-a4e0abb1ff8e" />
    <e k="18-MaximumConflictInstances" v="9-5" />
    <e k="18-AllBookInPolicy" v="3-False" />
    <e k="18-AddAdditionalResponse" v="3-True" />
    <e k="18-AdditionalResponse" v="18-Additional Test" />
    <e k="18-DeleteAttachments" v="3-False" />
    <e k="18-DeleteNonCalendarItems" v="3-False" />
    <e k="18-BookingWindowInDays" v="9-181" />
    <e k="18-RequestOutOfPolicy" v="1-18-1-36-b867b454-75c4-480f-aba0-47bd5b11d5b7" />
    <e k="18-AllRequestInPolicy" v="3-False" />
    <e k="18-AllowRecurringMeetings" v="3-False" />
  </Data>
</UserConfiguration>

As we can see above, there exists a name-value pair for every one of the resource settings we previously declared as being calendar folder based. Most of these properties look fairly normal and easy to digest, however there are a few that live outside the scope of official documentation. Particularly, there appear to be a few settings that have a ValueString assigned to their v attribute that contains a data type not found in the official documentation’s table of possible data types.

An example of this is the value assigned to the RequestInPolicy setting.

If we follow the standard format of a ValueString, then the data type being used here is 1. Unfortunately, there is no data type in official documentation that uses this identifier. Fortunately for us, we have brains, and we can quickly deduce, based on the setting itself (which receives a list of users that can make in-policy requests), that the data must be some sort of collection of values.

OK, so we’re past the data type: this is a list of some kind. Let’s continue down the ValueString. We’ll quickly notice that, also unlike what’s documented, this particular ValueString consists of many more parts than just a data type and actual value. Let’s go through each one and then come up with our own kind of format used for this data type.

Right after the standard data type portion of the value type, we see an 18. This represents something I’ll call the sub-data type, or rather, the type of data for all the members in the collection. This particular sub-data type indicates that the collection is one that contains all string values.

After the sub-data type indicator, we can see a 1. While what this means is not obvious from the current setting we’re looking at, if you look at another setting with more than one user assigned, it quickly becomes apparent: this is an indicator of the number of items in our collection. For this particular setting, the 1 indicates that there is only one string to be found in the collection.

Next, we see a 36. This represents the length of the string encoded value. So, in our example, because the string encoded value is 36 characters long, we have a 36 for the size indicator. It is important to note that this indicates the length of the string encoded value, not necessarily of the actual value. So, if we theoretically had a collection comprised of 32-bit signed integer data, then the number 942 (were it a member of the collection) would yield a size indicator of 3.

Going onward, we’ll finally arrive at the first (and only) value in the collection. Perhaps confusingly at first, this value itself contains multiple hyphens; this is why the length indicator is crucial. There are hyphens in the value because the value is a GUID.

So, with all of that figured out, let’s define the format for ValueStrings using a collection data type:

1-<sub-data type>-<n>{[-<value length>-<string encoded value>](0) ... [-<value length>-<string encoded value>](n-1)}

…with n representing the number of items in the collection.
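That format can be parsed mechanically; here is a minimal sketch (class and method names are my own). Note that values are read by their length prefixes rather than by splitting on hyphens, since the values themselves may contain hyphens:

```csharp
using System;
using System.Collections.Generic;

public static class CollectionValueParser
{
    // Parses a collection ValueString of the shape:
    // 1-<sub-data type>-<n>-<length>-<value>... (length/value repeated n times)
    public static List<string> Parse(string valueString)
    {
        int position = 0;

        // Reads the characters up to the next hyphen delimiter.
        string ReadToken()
        {
            int next = valueString.IndexOf('-', position);
            string token = valueString.Substring(position, next - position);
            position = next + 1;

            return token;
        }

        if (ReadToken() != "1")
            throw new FormatException("Not a collection ValueString.");

        ReadToken(); // The sub-data type (18 for strings); ignored in this sketch.

        int count = int.Parse(ReadToken());
        var values = new List<string>(count);

        for (int i = 0; i < count; i++)
        {
            int length = int.Parse(ReadToken());

            values.Add(valueString.Substring(position, length));
            position += length + 1; // Skip past the value and its trailing delimiter.
        }

        return values;
    }
}
```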

And there we have it. But not quite: we still don’t know what the heck the GUID values found in these collections even are.

Well, I’ll tell you what they are: each GUID value is the value of the objectGUID property of a user object in Active Directory. Thus, in order to ascertain the identity of a user from these settings, you would need to first bind to their user object using the objectGUID, and then go from there.

This is a sensible approach to storing a reference to a user’s identity; a user’s objectGUID property will never change. In fact, I wish more of the data stored in Exchange took this approach — too often the only reference to the user is the user’s display name.

Still, it will add a slight burden onto you in order to get any user out of it.
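A sketch of that binding step, using the <GUID=...> ADsPath syntax supported by System.DirectoryServices; the helper name and the choice of the displayName attribute are my own.

```csharp
using System;
using System.DirectoryServices;

public static class DelegateResolver
{
    // Sketch only: requires a Windows machine joined to the relevant domain.
    // Binds to a user object by its objectGUID and returns its display name.
    public static string GetDisplayName(Guid objectGuid)
    {
        using (var user = new DirectoryEntry($"LDAP://<GUID={objectGuid}>"))
        {
            return (string) user.Properties["displayName"].Value;
        }
    }
}
```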

4.1 How Am I Supposed to Read This Crap?

Yes, yes…we understand now the structure of the dictionarystream that contains the resource settings, however we probably want to be able to somehow deserialize the raw XML data into some nice and friendly objects.

Well, given the nature of the schema, it would be foolish to expect .NET XML serialization to support deserializing this data automatically. It has hopefully become more and more obvious from reading this article that these resource settings weren’t designed with the expectation that people outside Microsoft would be reading them directly from their points of storage.

One might wonder why it is stored in an XML dictionary at all. If it somehow makes the data more easily “shippable” when responding to Exchange Web Services requests, then that would not surprise me, but I have not examined that particular aspect, so I cannot say.

If you want to read this type of dictionary, there is no other way than to create a class which implements the IXmlSerializable interface. This is, of course, a very non-trivial thing to do. If you need some guidance as to how to do this, then you can refer to Microsoft’s own implementation, which is the internal ConfigurationDictionary class found in the Microsoft.Exchange.Data.Storage assembly.

I’d provide one for you, but frankly, that’s when I would start charging money.

4.2 Resource Delegates

Back in the section on address entry based settings, I mentioned that the resource delegates are not actually read from the address entry by the EMC, and that the address entry is not the best place to retrieve that information. This is more of a general fact than one that is specific to resource mailboxes.

The reason the address entry may offer incomplete information is that it is possible to set up a user so that they are hidden from all address lists. You can find this option on the General tab in the properties window of the user’s mailbox, as pictured below:

User is Hidden and Will Not Be Listed Publicly as a Delegate

If this check box is checked, then the user will not be publicly listed as a delegate, even though they are one.

The best way to get all the delegates for a resource mailbox is by looking at the Access Control List for the resource mailbox’s calendar folder. I’m fairly convinced that this is what the EMC does when it displays the resource delegates for a particular mailbox, and it does this by referring to the calendar folder’s PR_NT_SECURITY_DESCRIPTOR MAPI property. If you are using Redemption, you can do this by accessing the ACL property exposed by the relevant RDOFolder object instance.

Of course, other permissions intended for anything but delegate access may very well be defined in the ACL, so you will have to determine who is a delegate and who isn’t on an entry-by-entry basis. I do not know whether it is possible for an Exchange administrator to change the default delegate roles, but if that turns out not to be possible, then you can simply check whether the entry has an access mask value of 0x1208AB, which I believe is what gets doled out when someone is assigned delegate permissions (you will want to verify that yourself, however).
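As a sketch of that entry-by-entry check with Redemption (assuming RDOACE exposes Name and Rights properties as shown, and with the usual caveat that the 0x1208AB mask is an observed value, not a documented constant):

```csharp
using System.Collections.Generic;

// Sketch only: assumes the Redemption interop assembly is referenced.
public static class DelegateFinder
{
    // Observed access mask for delegate permissions; verify against your
    // own environment before relying on it.
    private const int AssumedDelegateAccessMask = 0x1208AB;

    public static List<string> GetDelegateNames(Redemption.RDOFolder calendar)
    {
        var delegates = new List<string>();

        foreach (Redemption.RDOACE entry in calendar.ACL)
        {
            if (entry.Rights == AssumedDelegateAccessMask)
                delegates.Add(entry.Name);
        }

        return delegates;
    }
}
```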

4.3 Dictionary Keys for Resource Settings

One last bit of knowledge that may help is to know which name-value pairs in the dictionary represent which resource settings. For the majority of these settings, this is fairly obvious; however, I will provide a table for you because I’m a nice guy.

Resource Setting: Dictionary Key

Enable Resource Booking Attendant: AutomateProcessing
Delete attachments: DeleteAttachments
Delete comments: DeleteComments
Delete subject: DeleteSubject
Delete non-calendar items: DeleteNonCalendarItems
Add organizer’s name to subject: AddOrganizerToSubject
Remove private flag on accepted meetings: RemovePrivateProperty
Send organizer info upon declining of request due to conflicts: OrganizerInfo
Add additional text to response: AddAdditionalResponse
Additional text: AdditionalResponse
Mark pending requests as tentative: TentativePendingApproval
Allow conflicts: AllowConflicts
Allow recurring meetings: AllowRecurringMeetings
Schedule only during working hours: ScheduleOnlyDuringWorkHours
Enforce scheduling horizon (booking window): EnforceSchedulingHorizon
Booking window: BookingWindowInDays
Maximum duration: MaximumDurationInMinutes
Maximum number of conflicts: MaximumConflictInstances
Allowed conflict percentage: ConflictPercentageAllowed
Forward meeting requests to delegates: ForwardRequestsToDelegates
In-policy requests from all users automatically approved: AllBookInPolicy
List of users whose in-policy requests are automatically approved: BookInPolicy
In-policy requests from all users can be approved: AllRequestInPolicy
List of users whose in-policy requests can be approved: RequestInPolicy
Out-of-policy requests from all users can be approved: AllRequestOutOfPolicy
List of users whose out-of-policy requests can be approved: RequestOutOfPolicy

Post a comment if you have any questions or anything interesting to say.


A reader recently sent me a message requesting further clarification on how one might use the generic IWeakEventListener implementation I recently wrote about (note to self: I need to disable the auto-comment disabling feature on this site).

The Scenario

Let’s say we have a child/parent pair of view models which serve as abstractions of a child/parent pair of views. The child view model mediates between a single item model and the child view, whereas the parent view model mediates between a collection-based item model and the parent view. Naturally, the parent view is going to have some type of ItemsControl living on it, which will, in turn, be generating one or more child view instances.

Now, let’s further assume that a requirement exists that compels us to subscribe to an event made available by the parent view model from somewhere on the child view model’s level. Thus, in this case, the parent view model is the publisher or source, and the subscribing element exposed by the child view model is the subscriber or listener. For the purposes of this article, let’s say that it is an ICommand exposed by the child view model that needs to subscribe to the parent view model’s event, for whatever reason.

Most of the time, in a normal .NET application, this kind of requirement isn’t a big deal. That’s because, unless we’re doing something wrong, our design should be such that the proper mechanism is in place (e.g. IDisposable, etc.) which will ensure that the child object will always be cognizant of when it is no longer needed and capable of responding to that.

The reason why the notion of weak event management suddenly becomes more prevalent or even essential when working with WPF is because of the very fact that WPF eschews various paradigms and practices followed religiously by other parts of the .NET Framework. One implication from this is the greater likelihood of a scenario occurring where our subscriber cannot and will not be made aware of when it should unregister from the events it has subscribed to.

To get around this problem, we make use of the weak event pattern. We already made a generic weak event listener in the previous article, so let’s show how to fully use it.

Using Weak Events

Our subscriber class (that which attaches to the published events and thus holds the event handlers that respond to them) is what will contain an instance of our generic weak event listener. If we follow the scenario shown above, we have a command on the child view model which needs to attach to an event. Let’s assume the parent view model implements an interface named INotifyAboutSomething which indicates its capability of publishing an event named SomethingHappened which, when fired, itself indicates a change in…something.
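The INotifyAboutSomething interface itself is never spelled out, so here is a minimal sketch of what it might look like; the use of EventHandler as the delegate type is my own assumption, and any delegate type would do.

```csharp
using System;

/// <summary>
/// Indicates the capability of publishing an event signaling a change in...something.
/// </summary>
public interface INotifyAboutSomething
{
    /// <summary>
    /// Occurs when something happens.
    /// </summary>
    event EventHandler SomethingHappened;
}
```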

A Weak Event Manager

Before we can use an instance of our generic weak event listener in the definition of our command, however, we need to create a WeakEventManager for the particular event we want to handle. The event in question is the SomethingHappened event, so let’s define a class named SomethingHappenedEventManager (and just a note, none of the code, in the form that it is being made available, has been tested, although the code it originates from has been).

/// <summary>
/// Provides a <see cref="WeakEventManager"/> implementation so that you can use the weak event listener
/// pattern to attach listeners for the <see cref="INotifyAboutSomething.SomethingHappened"/> event.
/// </summary>
public sealed class SomethingHappenedEventManager : WeakEventManager
{
    /// <summary>
    /// Gets the currently initialized instance of this specific manager type; if one does not already exist,
    /// then this will create and register a new instance.
    /// </summary>
    private static SomethingHappenedEventManager CurrentManager
    {
        get
        {
            Type managerType = typeof(SomethingHappenedEventManager);

            SomethingHappenedEventManager currentManager
                = (SomethingHappenedEventManager) GetCurrentManager(managerType);

            if (null == currentManager)
            {
                currentManager = new SomethingHappenedEventManager();

                SetCurrentManager(managerType, currentManager);
            }

            return currentManager;
        }
    }

    /// <summary>
    /// Adds the specified listener to the <see cref="INotifyAboutSomething.SomethingHappened"/> event of the
    /// specified source.
    /// </summary>
    /// <param name="source">The <see cref="INotifyAboutSomething"/> object with the event.</param>
    /// <param name="listener">The <see cref="IWeakEventListener"/> object to add as a listener.</param>
    public static void AddListener(
        [NotNull]INotifyAboutSomething source, [NotNull]IWeakEventListener listener)
    {
        CurrentManager.ProtectedAddListener(source, listener);
    }

    /// <summary>
    /// Removes the specified listener from the <see cref="INotifyAboutSomething.SomethingHappened"/> event
    /// of the specified source.
    /// </summary>
    /// <param name="source">The <see cref="INotifyAboutSomething"/> object with the event.</param>
    /// <param name="listener">The <see cref="IWeakEventListener"/> object to remove.</param>
    public static void RemoveListener(INotifyAboutSomething source, [NotNull]IWeakEventListener listener)
    {
        CurrentManager.ProtectedRemoveListener(source, listener);
    }

    /// <inheritdoc/>
    protected override void StartListening(object source)
    {
        INotifyAboutSomething publisher = (INotifyAboutSomething) source;

        publisher.SomethingHappened += HandleSomethingHappened;
    }

    /// <inheritdoc/>
    protected override void StopListening(object source)
    {
        INotifyAboutSomething publisher = (INotifyAboutSomething) source;

        publisher.SomethingHappened -= HandleSomethingHappened;
    }

    /// <summary>
    /// Handles a change in something.
    /// </summary>
    /// <param name="sender">The source of the event.</param>
    /// <param name="eventArgs">
    /// The <see cref="SomethingHappenedEventArgs"/> instance containing the event data.
    /// </param>
    private void HandleSomethingHappened(object sender, SomethingHappenedEventArgs eventArgs)
    {
        DeliverEvent(sender, eventArgs);
    }
}

You can create a generic weak event manager as well if you wish; however, for reasons I pointed out in my previous article on the subject, it may not be in your best interest to do so.

Using It All Together

With the weak event manager defined, here’s a skeleton of our command then, showing the usage of our generic weak event listener and the weak event manager we just defined.

/// <summary>
/// Provides a command that does something in response to something.
/// </summary>
public class TheCommand : ICommand
{
    private readonly WeakEventListener<SomethingHappenedEventArgs> _somethingHappenedListener;

    /// <summary>
    /// Initializes a new instance of the <see cref="TheCommand"/> class.
    /// </summary>
    /// <param name="publisher">
    /// The publisher of the event this command is responding to in some fashion.
    /// </param>
    public TheCommand([NotNull]INotifyAboutSomething publisher)
    {
        _somethingHappenedListener = new WeakEventListener<SomethingHappenedEventArgs>(OnSomethingHappened);

        SomethingHappenedEventManager.AddListener(publisher, _somethingHappenedListener);
    }

    /// <summary>
    /// Responds to something happening by...doing something else.
    /// </summary>
    /// <param name="sender">The source of the event.</param>
    /// <param name="eventArgs">
    /// The <see cref="SomethingHappenedEventArgs"/> instance containing the event data.
    /// </param>
    private void OnSomethingHappened(object sender, SomethingHappenedEventArgs eventArgs)
    {
        // ...
    }
}

The magic above occurs when we make a call to the static SomethingHappenedEventManager.AddListener method, which completes the registration of our weak event listener. Note that there is also a RemoveListener method, which we can call in order to be as tidy as possible; this is not strictly necessary, since we’re dealing with weak references here. That said, the presence of weak references doesn’t excuse omitting a call to RemoveListener if a state does indeed exist in which we become aware that it is our time to go. Tidy, tidy!

The code above is obviously missing other parts required of a command, as that has nothing to do with this article (I’m just letting all you friendly copy-pasters out there know).


As a conclusion to the series of articles which started with a discussion on the various ways one can perform simple validations on method parameters using PostSharp, I thought I’d share my final thoughts on the matter, as well as what I’ll ultimately be using in my own environment.

My primary concern with the original solution has had to do with both the signature of the actual base validation method as well as how we should pass the parameters we’re validating to it. Specifically, I wanted to make sure that the solution wasn’t introducing an excessive amount of boxing into code, which was a valid concern given the weak nature of the validation method’s signature (i.e. the fact that the validation method accepts the parameter as a System.Object type).

What We Have So Far

First of all, the reason why this is even important is the potential widespread usage of the validation method: if this is how we’re checking whether something is null or not, then we can expect a very large presence of such a method in a typical non-trivial application.

The original abstract validation method had the following signature:

public abstract void Validate(object target, object value, string parameterName);

The second parameter (value) represents the value of the parameter we’re validating, and it is the focal point of the topic at hand.

Let’s then go over some quick points that concern the implications of using a validation method with such a signature:

  • Because the parameter type for the parameter’s value is System.Object, and not a generic type parameter, parameters which are value types will need to be boxed before being able to be provided to the method.
  • Furthermore, if the parameter we are validating is a generic type parameter, then if at run-time that generic type parameter is substituted for a value type, that too will need to be boxed.
  • While emitting a box instruction while a value type (or generic type parameter substituted with a value type) is on the stack incurs the traditional performance penalty associated with boxing operations, doing the same with a reference type (perhaps originally a generic type parameter substituted with a reference type) on the stack is essentially a nop.
  • Because we’re weaving in raw MSIL here, we’re responsible for supplying the boxing instructions; this won’t be taken care of for us since the compiler is out of the equation…so we need to cover all these different scenarios.
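The boxing concern in the bullets above can be seen in miniature without any PostSharp or MSIL weaving involved. Here is a small illustration (my own, not from the original article) contrasting an object-typed validation parameter, which forces the compiler to box value types at the call site, with a generic one, which does not:

```csharp
using System;

public static class BoxingDemo
{
    // Weakly typed: passing an int here compiles to a box instruction at the call site.
    public static void ValidateObject(object value, string parameterName)
    {
        if (value == null)
            throw new ArgumentNullException(parameterName);
    }

    // Generic: no boxing occurs when TValue is substituted with a value type;
    // the null comparison is a no-op for value type substitutions.
    public static void ValidateGeneric<TValue>(TValue value, string parameterName)
    {
        if (value == null)
            throw new ArgumentNullException(parameterName);
    }

    public static void Main()
    {
        int answer = 42;

        ValidateObject(answer, "answer");   // 'answer' is boxed here
        ValidateGeneric(answer, "answer");  // no box instruction emitted

        Console.WriteLine("done");
    }
}
```

When weaving the call in raw MSIL, as the article describes, the aspect author takes over the compiler's role in deciding when that box instruction gets emitted.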

My Approach and Strategy

After some review, I’ve settled on an approach in regards to what the validation method signature should look like as well as the strategy involving how we get the parameter value we’re validating to the method.

First of all, an ideal solution would be to not depend on a weakly typed validation method, but rather make use of a validation method that accepts generic type parameters. While this would be cool, it’s not the simplest of endeavors. The generic type parameter can be defined on different levels (e.g. type vs method), and how do we call the method if we have no generic type parameter at all (i.e. the parameter we’re validating is a standard, non-generic type in its original form)?

Those different scenarios require you to do different things, and although I was able to pass some, I lack the inside knowledge of the PostSharp SDK to be able to cover all of them.

In the grand scheme of things, however, the approach I ultimately ended up using has proved to be more than adequate for the job. Given the body of work I’ve been involved with and the types of entities requiring validation, I found that the benefits gained from the ideal solution would be negligible in substance; the chosen approach is therefore the most justifiable one when my own time is weighed as a factor in the equation.

The signature for the validation method ended up staying the same as it was originally. The type I am using for the parameter value remains System.Object. My strategy in regards to how I’m invoking the method is the following:

  1. If the parameter being validated is explicitly a value type, then it (of course) gets boxed before being passed to the validation method.
  2. If the parameter happens to be a generic type parameter, then we only box it if it contains no generic constraints which guarantee the type substituted at run-time to be a reference type (e.g. a specific reference type or the class keyword).
  3. Furthermore, (further down the inheritance chain) any validation attributes which perform types of validations that only make sense when conducted on reference types (e.g. NotNull) have built-in compile-time validation to ensure they are not being applied to any explicit value types.

The vast majority of parameter validations occurring in my main body of work are done on generic type parameters. Many times (though not always) these type parameters are constrained to be reference types. In the end, boxing almost never occurs, therefore this is quite acceptable to me.

To implement this strategy, I simply added a property named RequiresBoxing to the EntityDescription type I talked about in a previous article, which I set during the initialization of the EntityDescription object like so:

  RequiresBoxing
    = entityType.IsValueType
      || (entityType.IsGenericParameter
          && !entityType.GetGenericParameterConstraints().Any(c => c.IsClass));

If, in the future, I ever found myself having to perform (let’s say) numerical-type validations on (obviously) value types, I’d probably create a separate class of validation advices specific to the value type being validated.


The Weak Event Pattern

Microsoft made a good move when they introduced the concept of a weak event pattern into WPF as a technique meant to address a glaring memory leak issue that had the tendency of arising under certain conditions. You can read all about it here, but that aside, it helps to actually understand the problem that merited the creation of these techniques.

Memory leaks arising from lingering event subscriptions are an often misunderstood issue. Remember, in a CLR environment, memory is managed by the CLR’s own automatic garbage collector. In this context, a memory leak refers to the nonoccurrence of the collection of one or more objects which otherwise should have been collected according to standard expectations. Thus, assuming that the CLR garbage collector is “perfect” (it is quite good), the notion of a memory leak occurring should be a foreign one. But it can happen, and one of the preconditions for that scenario has to do with event subscriptions.

It’s not as simple as the presence of lingering event subscriptions, however. Let’s go over some incredibly rudimentary terminology:

  • One object exposes events that other objects can subscribe to. This object is the publisher of the event.
  • The objects which attach handlers to the publisher are subscribers of the event.

Why It’s Needed

From the subscriber’s point of view, I should always be sure to detach all handlers previously attached to the publisher as soon as it is practical. In fact, this requirement makes me a good candidate for a purely managed implementation of the IDisposable interface. However, if I, as the subscriber, shirk these responsibilities, we aren’t necessarily guaranteed a memory leak. It is only when the lifetime of the publisher exceeds that of the subscribers that we have ourselves a memory leak. This is more of a matter-of-fact observation than a deep deduction; because subscribers have handlers attached to the publisher, the publisher will maintain a reference to those subscribers, and thus those subscribers cannot be collected.

This becomes a problem, especially when the subscribers need to die. An example can occur when you have some type of collection view or view model that contains children views or view models. This collection publishes an event that its children subscribe to. Whenever a child gets removed from the collection, we would very much want that child to be collected, especially if it is something significant such as a graphical view. Unless that child’s handler is detached, however, no such garbage collection will occur.

Many times the objects involved will lack the level of intimacy with each other required for one to know that it needs to tell the other to clean up its event handlers. One of the constructs Microsoft developed to account for situations where associated objects lack understanding of each other’s internal behavior and structure was the IDisposable interface. Unfortunately, WPF makes little to no use of this approach, which I believe is both odd and a major cause of all of these problems. Therefore, implementing this interface will do nothing for you and your WPF-specific objects.

How to Use the Pattern

Making use of the weak event pattern is a rather simple affair:

  1. Create a new manager class derived from WeakEventManager
  2. Create a new implementation of IWeakEventListener
  3. Wire up the IWeakEventListener appropriately

Although these are only a few simple steps, it is a bit burdensome to have to create new classes for each event (or group of events coming from a specific source) that we want to listen to weakly. There is also a heavy use of static methods involved with this pattern, further restricting our options in regards to inheritable functionality.

Why does the pattern require specific types defined for specific events? The answer has to do with performance. It is possible to create a generic WeakEventManager, but your performance will suffer. Indeed, with .NET 4.5, we will see a generic WeakEventManager introduced by Microsoft; however, with it comes the warning I just provided: use of the generic variant of WeakEventManager will result in decreased performance. The trade-off most likely has to do with reflection costs, as well as internal WPF components and processes that may have been optimized around an expectation of discrete manager types.

A Generic IWeakEventListener

With IWeakEventListener implementations, however, the story is different. Here we can devise a generic variant and use it without worrying about performance implications.

Here is an example of a generic IWeakEventListener implementation:

/// <summary>
/// Provides a generic weak event listener which can be used to listen to events without a strong reference
/// being made to the subscriber.
/// </summary>
/// <typeparam name="TEventArgs">
/// The type of <see cref="EventArgs"/> accepted by the event handler.
/// </typeparam>
public class WeakEventListener<TEventArgs> : IWeakEventListener
    where TEventArgs : EventArgs
{
    private readonly EventHandler<TEventArgs> _handler;

    /// <summary>
    /// Initializes a new instance of the <see cref="WeakEventListener{TEventArgs}"/> class.
    /// </summary>
    /// <param name="handler">The handler for the event.</param>
    public WeakEventListener([NotNull]EventHandler<TEventArgs> handler)
    {
        _handler = handler;
    }

    bool IWeakEventListener.ReceiveWeakEvent(Type managerType, object sender, EventArgs e)
    {
        TEventArgs eventArgs = e as TEventArgs;

        if (null == eventArgs)
            return false;

        _handler(sender, eventArgs);

        return true;
    }
}
Using this generic variant is quite simple: add it as a member to a subscriber class, and make a call to an appropriate WeakEventManager.AddListener method in order to register the weak event listener from the subscriber during its initialization.


Recently I’ve been involved with creating a general purpose library (to be used at my company) that includes, among other things, a nice API for creating and manipulating Task Dialogs from .NET-land. As with everything unmanaged, implementing it not only yields the benefit of being able to use it, but you tend to learn some interesting things in the process.

Unable to find an entry point named ‘TaskDialogIndirect’…

It won’t take long until you run into this lovely issue. This shouldn’t be much of a surprise to anyone familiar with such matters, but, in case you didn’t know, the TaskDialog API is only available on version 6.0 and up of the Common Controls library (on post-XP systems of course…I think there’s also a v6 on XP, but it is a different version nonetheless). If you’re running Windows Vista or 7, then you have this library installed by default; however, it is the older version 5.8 of the Common Controls library that will be loaded by default.

We have multiple versions of comctl32.dll because of that library’s participation in the side-by-side assembly system, which was Microsoft’s answer to DLL Hell and somewhat of a precursor to .NET’s Global Assembly Cache. If we intend on having our .NET application load the correct version of comctl32.dll, then we’re going to have to participate in that system, which requires us to provide a manifest dictating the version we need.

Providing a manifest is simple if your end product is a .NET executable: you simply add an application manifest, which will isolate the application and load the appropriate DLL versions at run-time. It is not as straightforward if you’re writing a .NET library, however, since simply embedding a manifest into the DLL exerts no influence at run-time on an executable referencing it. Specifically, the type of manifest we are talking about here is an application manifest (as opposed to an assembly manifest). Visual Studio offers support in the C# project properties designer for embedding application manifests into application projects, but not libraries.

Activation Contexts

Requiring that all applications referencing your library also include their own correctly made application manifest in order for a specific subset of your library’s features to work is an exceptionally unrealistic demand. If we cannot automatically affect the run-time so that the proper unmanaged DLLs are targeted, then we are providing functionality that is not guaranteed to work. Such functionality cannot exist in any proper library, and would have to be removed. Luckily, we do have the power to influence the run-time from the confines of a .NET DLL, and the way we do it is by making use of activation contexts.

By activating an activation context, we are essentially telling Windows to redirect the running application to a particular DLL version according to a manifest of our choice. In fact, activation contexts are the very engines that power the side-by-side assembly system. When an application needs to make a call to a library, or to create a process or other resource, Windows checks that application for the presence of an application manifest, using the information found within to populate an activation context that guides the call to its correct destination.

Normally, activation contexts are managed entirely by the Windows system; however, in our case, we’re going to be rudely intruding into that system so that we can perform some actions not taken by the Windows system. Specifically, prior to our P/Invoke call to TaskDialogIndirect, we’re going to create and activate an activation context that will redirect that call to the proper version of comctl32.dll. There is official precedent for this activity: the Windows Forms library does exactly what I’ve just described when you make a call to Application.EnableVisualStyles.

Microsoft provides some documentation on how we can do that from a managed environment in the form of a KB article. I’m not going to provide a complete walkthrough on the process here, as the KB article covers most of it, but I do want to address one of the limitations of the approach offered by that article. In particular, I’m referring to how the approach offered by the KB article requires that an actual manifest file be present on disk. Relying on an external non-binary file for something that shouldn’t be configurable by an end-user anyway is clunky and not desirable.

Luckily, we can do better than that. Instead of using a physical manifest file as a source for the activation context, we can create an activation context using a resource embedded in our DLL instead. And we can do all of this simply by configuring the ACTCTX structure populated during the process differently.

Activation Context Creation Using a PE Image Source

The application manifest can typically be found in the .rsrc section of a PE image, where it exists as a resource like any other. With Visual Studio, you can add an application manifest to your project (let’s call ours Library.manifest), and then enable the embedding of that manifest into the project’s output through an option located in the project properties designer. No such option exists for DLL projects, but this doesn’t matter, since we can get Visual Studio to do what we want anyway. Open up your *.csproj file with the XML editor, and add the following to the first <PropertyGroup> in the file:
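The snippet itself appears to have been lost from this page. Based on the surrounding description, it was in all likelihood the MSBuild property that maps to the compiler’s /win32manifest option; the exact property name below is my reconstruction, so verify it against your tooling:

```xml
<ApplicationManifest>Library.manifest</ApplicationManifest>
```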


This will result in your manifest file being embedded into the DLL compiled from your project. You will see that the manifest is embedded, not as a .NET resource, but as a true native resource of the RT_MANIFEST type. The resource ID of the manifest should be 2. This is the standard resource ID for all manifests found in DLLs. In fact, whenever a native DLL containing an embedded manifest resource with an ID of 2 is dynamically loaded at run-time, the operating system loader automatically creates an activation context using that manifest. It does this so it can then proceed to load all dependencies of that DLL without issue.

This obviously is not going to happen for our DLL, since ours is a managed DLL and is loaded a bit differently. Regardless, we still need our manifest embedded in our DLL so that it can be sourced appropriately by the activation context we are going to be creating.

Following this, we need to change some of the code you may have picked up from that KB article. Specifically, we need to populate the ACTCTX structure that gets provided to CreateActCtx a bit differently.

  1. The KB article sets the lpAssemblyDirectory to the directory containing the current assembly. Although the KB article is throwing terms like “security hole” at us in the nearby code comments, we’re actually going to remove this assignment, and leave lpAssemblyDirectory unset. I don’t believe this is documented anywhere, but in actuality, I believe the Activation Context API ignores lpAssemblyDirectory when loading the manifest in the way we are going to be doing it.
  2. Next, the KB article has us setting the dwFlags field to ACTCTX_FLAG_ASSEMBLY_DIRECTORY_VALID. We actually want to set it to ACTCTX_FLAG_RESOURCE_NAME_VALID instead (which is 8, by the way).
  3. The provided example sets the lpSource field to the path of the physical manifest file. Since we don’t have one of those, and because our manifest file is embedded in our DLL, we actually want to set lpSource to the path to our DLL file.
  4. Finally, we need to tell Windows what the resource ID of our manifest is, and we provide that information by setting the lpResourceName member using MAKEINTRESOURCE.

The last step listed above requires us to set lpResourceName using a value derived from MAKEINTRESOURCE. While that’s not a very tall order when we’re developing in C++, how do we do this from a managed, C# environment? The simplest way is to change the type of this field in our ACTCTX structure definition.

Looking at the KB article, the sample ACTCTX structure they provide looks like the following:

[StructLayout(LayoutKind.Sequential)]
private struct ACTCTX
{
   public int       cbSize;
   public uint      dwFlags;
   public string    lpSource;
   public ushort    wProcessorArchitecture;
   public ushort    wLangId;
   public string    lpAssemblyDirectory;
   public string    lpResourceName;
   public string    lpApplicationName;
}

Change the type of lpResourceName to an IntPtr (!!); yes, an IntPtr, so that we have the following:

   public IntPtr    lpResourceName;

And then, remembering that the ID for our manifest resource is 2, we can then populate our structure like so (substituting “ClassName” for the name of the class where this is being done of course):

ACTCTX context = new ACTCTX
                     {
                         cbSize = Marshal.SizeOf(typeof (ACTCTX)),
                         dwFlags = ACTCTX_FLAG_RESOURCE_NAME_VALID,
                         lpSource = typeof(ClassName).Assembly.Location,
                         lpResourceName = (IntPtr) 0x2
                     };

Providing this structure to CreateActCtx will then create a new activation context based on our embedded manifest. Well almost. Did you happen to create your application manifest using Visual Studio’s application manifest file template?

Fix Your Application Manifest

I found CreateActCtx to be extremely touchy when it comes to the actual form of the application manifest itself, and the manifest generated by Visual Studio was utterly incompatible with it. Attempting to create a new activation context using a manifest like that would result in CreateActCtx returning an error code.

The manifest generated by Visual Studio contains a ton of XML namespace attributes which may or may not make CreateActCtx puke. I say “may or may not” because I’m not sure if it was these namespaces or another part of the stock manifest content that caused it to fail. But, that doesn’t really matter. Here’s a cleaned up manifest file that is guaranteed to work for you:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity version="1.0.0.0" name="Library"/>
  <dependency optional="yes">
    <dependentAssembly>
      <assemblyIdentity type="win32"
                        name="Microsoft.Windows.Common-Controls"
                        version="6.0.0.0"
                        processorArchitecture="*"
                        publicKeyToken="6595b64144ccf1df"
                        language="*"/>
    </dependentAssembly>
  </dependency>
</assembly>
You can add other sections, such as a <compatibility> section if you’d like, and they should work fine. Also, the <description> element is not required for this to work (hell, I’m not even sure if that’s a standard manifest element…all I know is that I’ve seen it in a number of in-use manifests originating from Microsoft). After you do all of this, it should start to work with CreateActCtx.

But hey…this is where the fun begins. You shouldn’t get the false impression from all of this that you can now go willy-nilly and be able to act foolishly without suffering consequences. I heavily tested my code and came across a few “edge” cases that you need to be very careful about, as you can easily cause issues when acting within one of these self-made activation contexts.

Who Doesn’t Love SEHExceptions?!

You know that when you’re getting structured exception handling errors in managed code, that you’re doing something very special. Now, before I go on, let me state that I tested my use of activation contexts very heavily and found them to be very stable. However, a factor allowing me to come to that conclusion is the fact that the code executing within the activation context is very stable. If your code is anything less than that, or if it is operating in an extreme environment, you probably want to exercise caution.

Through my testing, I did identify an issue that I do not altogether understand, due to the fact that the problem was occurring deep in unmanaged land, and I couldn’t come across much material relevant to my issue.

The problem I encountered was an interesting one. As we know, we’re dealing with Task Dialogs here. While the Task Dialog is open, the thread that opened it will be blocked until it is closed. Well, not entirely: while blocked, that same thread is going to handle any callbacks fired by the Task Dialog. Because we require an activation context to open the Task Dialog, the call to open it is done within a using block for the disposable class which handles the activation context creation and activation. When we hit the finally block of that using block, it’s going to make a call to DeactivateActCtx.
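The disposable class mentioned here is never shown in the article; a minimal sketch of what such a wrapper might look like follows (my own illustration under the article’s described approach, with error handling kept deliberately thin; the class name is hypothetical):

```csharp
using System;
using System.Runtime.InteropServices;

/// <summary>
/// A sketch of a disposable scope that activates an activation context sourced
/// from a manifest embedded in a DLL (resource ID 2), per the article's approach.
/// </summary>
internal sealed class ActivationContextScope : IDisposable
{
    private const uint ACTCTX_FLAG_RESOURCE_NAME_VALID = 0x008;

    private readonly IntPtr _activationContext;
    private IntPtr _cookie;

    public ActivationContextScope(string dllPath)
    {
        var context = new ACTCTX
        {
            cbSize = Marshal.SizeOf(typeof(ACTCTX)),
            dwFlags = ACTCTX_FLAG_RESOURCE_NAME_VALID,
            lpSource = dllPath,              // path to the DLL containing the embedded manifest
            lpResourceName = (IntPtr) 0x2    // standard manifest resource ID for DLLs
        };

        _activationContext = CreateActCtx(ref context);

        if (_activationContext == (IntPtr) (-1))    // INVALID_HANDLE_VALUE
            throw new InvalidOperationException("Failed to create activation context.");

        if (!ActivateActCtx(_activationContext, out _cookie))
            throw new InvalidOperationException("Failed to activate activation context.");
    }

    public void Dispose()
    {
        // Deactivation must occur on the same thread that activated the context.
        DeactivateActCtx(0, _cookie);
        ReleaseActCtx(_activationContext);
    }

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct ACTCTX
    {
        public int cbSize;
        public uint dwFlags;
        public string lpSource;
        public ushort wProcessorArchitecture;
        public ushort wLangId;
        public string lpAssemblyDirectory;
        public IntPtr lpResourceName;
        public string lpApplicationName;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    private static extern IntPtr CreateActCtx(ref ACTCTX actctx);

    [DllImport("kernel32.dll")]
    private static extern bool ActivateActCtx(IntPtr hActCtx, out IntPtr lpCookie);

    [DllImport("kernel32.dll")]
    private static extern bool DeactivateActCtx(uint dwFlags, IntPtr ulCookie);

    [DllImport("kernel32.dll")]
    private static extern void ReleaseActCtx(IntPtr hActCtx);
}
```

The using block described above would then simply be `using (new ActivationContextScope(dllPath)) { /* P/Invoke TaskDialogIndirect here */ }`.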

I found that if I threw an exception while handling a callback from the Task Dialog, an SEHException would get thrown by DeactivateActCtx during the disposal of the class that created the activation context. The activation context would essentially become impossible to deactivate, indicating perhaps that the activation context stack had somehow become corrupt. The error code for the SEHException was 0x80004005: External component has thrown an exception. Throwing an exception within the activation context, but not during the handling of a callback, would cause no problems when deactivating the context.

So…if anyone else has this issue, I would advise making sure the dialog is closed before throwing the exception. My Task Dialog class has a Closed event, so I would simply schedule the exception to be thrown in an event handler for that, and then proceed to close the dialog from the callback. The context would deactivate with no issue, and the exception could then get thrown, right in the developer’s face.


With the release of .NET 4.0, Microsoft made some large scale changes to the framework’s security model. Concluding that the legacy model was little understood by both developers and IT administrators, Microsoft decided to do what is normally a very prudent action: they decided to simplify it.

In previous versions of the .NET Framework, the security model was tangible in the form of Code Access Security policies. Each policy was a set of expressions which used information associated with assemblies in order to determine which code group said assemblies belonged to. Each code group would contain a permission set, which would then be referred to by code access demands made in response to attempts to perform privileged actions.

.NET 4.0 got rid of all that nonsense with the introduction of level 2 transparent code (although technically, CAS still exists; what’s been eliminated is CAS policy). Basically, under this system, machine-wide security policy is off by default, and all desktop applications run as full trust. Code is either transparent, safe-critical, or critical; assemblies lacking annotation will have all their types and members treated as being critical if the assembly is fully trusted, or transparent if the assembly is partially trusted only.

Instead of worrying about having to make a bunch of strange security-related assertions and demands, all one needs to do to run under the new model is annotate their code appropriately using the three main attributes: SecurityTransparent, SecuritySafeCritical, and SecurityCritical. Sounds good; simpler is better. But don’t use this new system; it isn’t ready for the real world yet.

This article is not meant to criticize the substance of the new security model. I think the model is a huge improvement, but there are a few specific issues that cause me to view it as a mess as it now stands. Before we get into one of those issues, let’s look at reality first.

Most of the .NET Framework Doesn’t Use It

One of the ways I gauge the viability of new technologies from Microsoft is by trying to get a handle on whether or not Microsoft itself uses them. This approach has never failed me, and has saved me from countless time sinks that would have affected endeavors both professional and personal. So, in order to check out the immediate viability of the new level 2 transparency model, let’s look at the primary .NET Framework assemblies and see whether or not they use it.

We can tell whether or not a particular assembly is using level 2 transparency by taking a look at the assembly’s metadata. If the assembly is operating under the level 2 transparency model, it will contain a SecurityRulesAttribute with a SecurityRuleSet.Level2 value passed to it. However, it is important to remember that level 2 transparency is the default, so if the attribute is not declared, then we should assume the assembly to be level 2. It is against “guidelines”, however, not to declare this attribute.

If the assembly is operating under the level 1 transparency model, the SecurityRulesAttribute is declared with a SecurityRuleSet.Level1 value passed to it instead.
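In code, these declarations are simple assembly-level attributes, typically placed in AssemblyInfo.cs; a quick sketch:

```csharp
using System.Security;

// Opt back into the legacy (level 1) transparency model:
[assembly: SecurityRules(SecurityRuleSet.Level1)]

// Or explicitly declare the new (level 2) model, which is also the default
// when no SecurityRulesAttribute is present:
// [assembly: SecurityRules(SecurityRuleSet.Level2)]
```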

Let’s then see what we come up with from looking at some of the core .NET Framework assemblies. In order to do this, I wrote a program which enumerated over every single assembly installed to the standard .NET Framework 4.0 installation directory, checking the SecurityRulesAttribute attribute present on each one.
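My original program isn’t reproduced here, but a minimal sketch of such a scanner might look like the following (the installation path and the exact classification logic are my own assumptions):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Security;

class TransparencyScanner
{
    static void Main()
    {
        // Standard .NET Framework 4.0 installation directory (assumed path).
        string frameworkPath = @"C:\Windows\Microsoft.NET\Framework\v4.0.30319";

        foreach (string file in Directory.GetFiles(frameworkPath, "*.dll"))
        {
            try
            {
                // Reflection-only loading avoids executing any assembly code.
                Assembly assembly = Assembly.ReflectionOnlyLoadFrom(file);

                // Look for SecurityRulesAttribute in the assembly's metadata;
                // if it's absent, the assembly defaults to level 2.
                CustomAttributeData rules = assembly.GetCustomAttributesData()
                    .FirstOrDefault(
                        a => a.Constructor.DeclaringType.Name == "SecurityRulesAttribute");

                SecurityRuleSet ruleSet = rules == null
                    ? SecurityRuleSet.Level2
                    : (SecurityRuleSet) (int) rules.ConstructorArguments[0].Value;

                Console.WriteLine("{0}: {1}", Path.GetFileName(file), ruleSet);
            }
            catch (BadImageFormatException)
            {
                // Not a managed assembly; skip it.
            }
            catch (FileLoadException)
            {
                // Already loaded into the reflection-only context; skip it.
            }
        }
    }
}
```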

The results are interesting:

  • Total Level 2
    77 assemblies
  • Total Level 2 with Obsolete Security Actions
    (this means the assembly included a SecurityPermissionAttribute with a SecurityAction value deemed obsolete under the new model)
    45 assemblies
  • Total Level 2 Lacking Assembly-wide Notation
    (this means that no SecurityTransparentAttribute, SecurityCriticalAttribute, or AllowPartiallyTrustedCallersAttribute was found on the assembly metadata)
    53 assemblies
  • Total Level 1 
    55 assemblies

The majority of Level 2 assemblies were completely lacking assembly-wide notation, which goes against Microsoft’s own guidelines. As we can see, there are a number of level 2 assemblies, but most of them are insignificant (except for mscorlib.dll and System.Security.dll), whereas the important .NET assemblies are what constitute the Level 1 group.

Here’s the list of Level 1 assemblies:

  • System.AddIn
  • System.Configuration
  • System.Core
  • System.Data.DataSetExtensions
  • System.Data
  • System.Data.Entity.Design
  • System.Data.Entity
  • System.Data.Linq
  • System.Data.OracleClient
  • System.Data.Services.Client
  • System.Data.Services
  • System.Data.SqlXml
  • System.Deployment
  • System.DirectoryServices.AccountManagement
  • System.DirectoryServices
  • System.DirectoryServices.Protocols
  • System
  • System.Drawing
  • System.EnterpriseServices
  • System.IdentityModel
  • System.Net
  • System.Runtime.Serialization
  • System.ServiceModel.Activation
  • System.ServiceModel
  • System.ServiceModel.Web
  • System.Transactions
  • System.Web.ApplicationServices
  • System.Web
  • System.Web.Entity
  • System.Web.Mobile
  • System.Web.Services
  • System.Windows.Forms.DataVisualization
  • System.Windows.Forms
  • System.WorkflowServices
  • System.Xml
  • System.Xml.Linq
  • PresentationCore
  • PresentationFramework.Aero
  • PresentationFramework.Classic
  • PresentationFramework
  • PresentationFramework.Luna
  • PresentationFramework.Royale
  • PresentationUI
  • System.Printing
  • System.Windows.Presentation
  • UIAutomationProvider
  • UIAutomationTypes
  • WindowsBase

These are some of the most important assemblies in the BCL, and they’re all using the legacy security model.

But all of this is just an observation; it doesn’t mean anything on its own. Indeed, I was just recounting reality. What I just talked about isn’t even the primary reason not to use the new model. What caused my jaw to drop was when I found out about the issues Visual Studio’s code coverage has with it.

Visual Studio Code Coverage Doesn’t Work With It

…if your assembly is annotated, at least. Deal breaker for me.

By “annotated”, I mean your assembly is set to be either SecurityCritical, SecurityTransparent, or AllowPartiallyTrustedCallers.

If you do any of those, and then annotate the rest of your code properly using FxCop and SecAnnotate, you will get the following error if you run any unit tests with Visual Studio’s built-in code coverage:

 System.TypeInitializationException: The type initializer for ‘xxx’ threw an exception. —>
System.MethodAccessException: Attempt by security transparent method ‘Microsoft.VisualStudio.Coverage.Init_[...].Register()’ to call native code through method ‘Microsoft.VisualStudio.Coverage.Init_[...].VSCoverRegisterAssembly(UInt32[], System.String)’ failed.
Methods must be security critical or security safe-critical to call native code.

The reason this happens is that, during instrumentation, a bunch of unattributed methods get inserted that make P/Invoke calls. I find it a little ridiculous that Microsoft’s own tools don’t support this new model, especially a tool that I feel is rather critical to the development process. Microsoft clearly does not use its own tools for code coverage analysis, at least with the assemblies that are level 2.

So, my advice if you are starting a new library: if security is a priority for you (for example, if you want to allow partially trusted callers), then use the legacy model. If it isn’t, then you can declare level 2 compatibility in your assembly metadata, but don’t add any other level-2 specific assembly-wide attribute until the model is better supported.
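Concretely, the most I would declare assembly-wide right now is the rule set itself, leaving off every other transparency annotation (a sketch of my own, not a prescription from Microsoft):

```csharp
using System.Security;

// Declaring level 2 compatibility on its own is fine...
[assembly: SecurityRules(SecurityRuleSet.Level2)]

// ...but hold off on level-2 specific annotations like these until
// tooling support (e.g. code coverage) catches up:
// [assembly: AllowPartiallyTrustedCallers]
// [assembly: SecurityTransparent]
```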


[ExcludeFromCodeCoverage] — Oh, the Pain!

If you make use of Visual Studio’s built-in code coverage feature, you are probably aware of the existence of the ExcludeFromCodeCoverage attribute. This attribute, when decorating a piece of code, will exclude that code from all code coverage instrumentation. The end effect of its use is the total exclusion of the method or type from the code coverage report we all are treated with at the conclusion of the testing session.
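For reference, its use looks like the following (the type and member names here are hypothetical):

```csharp
using System.Diagnostics.CodeAnalysis;

// Excludes the entire type from coverage instrumentation and reports.
[ExcludeFromCodeCoverage]
public class GeneratedDesignerCode
{
    public void DoSomething()
    { }
}

public class PartiallyMeasured
{
    // Exclusion can also be applied to individual members.
    [ExcludeFromCodeCoverage]
    public void NotCounted()
    { }
}
```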

While it does its job, its use is a bit intrusive, especially since all we are doing is appeasing a specific testing tool. It would be nice if there were some sort of coverage configuration file one could edit in order to exclude specific code elements, but no such thing exists in Visual Studio’s code coverage system. Perhaps the visibility of such things is a benefit, because it makes the fact that the code isn’t being counted in the coverage metric blatantly obvious. But it’s only obvious if you are actually looking at that code and not just a coverage report along with some top-level project files. Plus, the assumption is that you are a competent individual who wouldn’t be itching to exclude something from coverage analysis unless you had a good reason to do so.

With all that said, it’s my opinion the process of realizing code coverage exclusion through the use of the ExcludeFromCodeCoverage attribute is clumsy and not ideal. This gets further exacerbated when we want to go beyond the type-level and exclude an entire namespace from code coverage. That isn’t an option with the ExcludeFromCodeCoverage attribute; you are limited to types, methods, properties, indexers, and events.

Well, there is a better way. Using PostSharp, we can devise our own scheme of coverage exclusion by creating a very simple aspect that will allow us to target entire namespaces if we wish, as well as items smaller in scope, all without having to touch any of the code being excluded.

The DisableCoverage Aspect

The aspect we’ll be designing here will act as a vehicle for the efficient delivery of our adored ExcludeFromCodeCoverage attribute to the intended pieces of code we wish to exclude. We will then make use of PostSharp’s multicasting feature in order to steer this vehicle.

The code for the aspect is shown below, and, as you can see, it’s extremely simple:

[AttributeUsage(AttributeTargets.Assembly)]
[MulticastAttributeUsage(MulticastTargets.Class | MulticastTargets.Struct)]
public sealed class DisableCoverageAttribute : TypeLevelAspect, IAspectProvider
{
    public IEnumerable<AspectInstance> ProvideAspects(object targetElement)
    {
        Type disabledType = (Type) targetElement;

        CustomAttributeIntroductionAspect introducedExclusion
            = new CustomAttributeIntroductionAspect(
                new ObjectConstruction(typeof (ExcludeFromCodeCoverageAttribute)));

        return new[] {new AspectInstance(disabledType, introducedExclusion)};
    }
}
All this aspect does is introduce the ExcludeFromCodeCoverage attribute into all applied targets. I have the AttributeUsage of this attribute set to target only assemblies because using this aspect in any other way defeats its entire purpose. The MulticastAttributeUsage of this attribute, on the other hand, is targeting classes and structs. The reason I do this is so that the ExcludeFromCodeCoverage attributes we’re introducing are done so at the type-level, which is the most granular level of scope I wish to specify when using this aspect.

Let’s say you have a namespace named Bad.Tomato containing a bunch of classes you wish to exclude from coverage. To use our aspect in order to realize this wish, you would open up your project’s AssemblyInfo.cs file and add the following assembly-level attribute declaration:

[assembly: DisableCoverage(AttributeTargetTypes="Bad.Tomato.*")]

This will then result in the exclusion of all code elements belonging to the Bad.Tomato namespace from coverage analysis.

Maybe targeting a whole namespace is a bit too drastic for your tastes…maybe we only want to exclude a single class named Bad.Tomato.Farmer. Ok…then:

[assembly: DisableCoverage(AttributeTargetTypes="Bad.Tomato.Farmer")]

Hooray, no more ExcludeFromCodeCoverage! If you want to get even more granular, you can, and I believe you can make use of regular expressions here as well.
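For instance, PostSharp’s multicasting syntax accepts a regex: prefix for AttributeTargetTypes, so (if I recall the syntax correctly) something like the following should also work:

```csharp
// Hypothetical: exclude any type under Bad.Tomato whose name ends in "Generated".
[assembly: DisableCoverage(AttributeTargetTypes = "regex:Bad\\.Tomato\\..*Generated")]
```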


Note: The following article is more an exploratory piece than anything; using any of the code or approaches discussed here in a production setting is not recommended. This article is meant to serve as part of a build-up to an ultimate conclusion on the matter.

The Problem

In my previous article on the topic, I got into how we can extend PostSharp’s weaving capabilities in order to add support for special parameter validation aspects.

If you happened to examine any assemblies built using this technology with a static analysis tool like NDepend, you might have had alarms going off complaining about an excessive amount of boxing occurring. This is never a good thing, so let’s dissect this a bit.

The sole concrete validation attribute provided in the example was the NotNullAttribute. The base definition of the validation method employed by this attribute and all other validation attributes looked like the following:

        public abstract void Validate(object target, object value, string parameterName);

So, as we can see, all parameter values must be upcast to the object type prior to being passed to the above method. Consequently, this also implies that all value type parameter values must be boxed prior to being passed to Validate as well.

However, our sole concrete validation attribute (NotNullAttribute) is one we would never use to decorate a parameter known to be a value type anyway, since value types cannot be null! To do so would be silly, and (if you implemented the compile-time validation routine correctly) would result in a compile-time error.

But even if it is true that no value type parameters are decorated with a validation attribute, you may still have excessive boxing occurring in your assembly. This is because in the example provided in the previous article, box instructions are issued for both value type and generic type parameters.

So, if you’re like me, in that a large number of your methods use generic type parameters and you happen to be validating those parameters, you will have excessive instances of the box instruction in your code. So why does our previous example box generic type parameters? Simply because it takes the lazy way out: those generic type parameters may or may not be bound to value types, so we box just in case they are.

What happens when you box a generic type parameter that is bound to a value type? Well, it boxes it like normal. What happens when you box a generic type parameter that is bound to a reference type? Nothing, it is essentially a nop. So, in actuality, because boxing will only actually be occurring when we’re binding the generic type parameters to value types, this may not seem like a very big deal to you anymore.
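A small standalone example (my own, not from the validation code) makes this easy to see; the same box instruction is emitted for T in both calls below, but it only does real work for the value type:

```csharp
using System;

static class BoxingDemo
{
    // The C# compiler emits a box instruction for value here, because T
    // might be bound to a value type.
    static object AsObject<T>(T value)
    {
        return value;
    }

    static void Main()
    {
        object boxedInt = AsObject(42);        // T = int: a real box occurs.
        object sameRef = AsObject("hello");    // T = string: the box is effectively a nop.

        Console.WriteLine(boxedInt);   // 42
        Console.WriteLine(sameRef);    // hello
    }
}
```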

However, we can do better, and avoid boxing altogether for the most part. Among other things, the boxing metric from the analysis tool is rendered useless if we don’t clean up the number of box instructions in the IL. I’m going to go over how to go about doing just this; be aware, though, that my solution is not 100% complete, and I’m still trying to figure out how to support a few “corner” (yeah…not exactly) cases.

The Solution

Let’s take a look at how we can solve this problem.

A Generic Validation Method

One of the simplest ways to address this problem is to have a validation method that will accept a generic type parameter. So…something like the following:

public void GenericValidate<T>(object target, T value, string parameterName)

We then need to modify our actual weaving code so that it calls this method. Calling generic methods using IL is much more involved than it is when using C#, as the C# compiler handles all the required nasty business for us. All of my weaving code is in a class named ValidationInstructionWriter. One of its responsibilities is emitting calls to the parameter’s validation method, and it is this code that we need to change.

Before we go into those changes, however, let’s go over the general strategy at play here. Or, to put it another way, let’s look at the strategy that arises from some limitations encountered when trying to correct this problem.

Limitations and Things to Do

First of all, I’m not entirely replacing my original validation method with this new generic one. Only when we have a generic type parameter will we be calling the generic validation method. If, instead, we have a non-generic value or reference type, we will be calling the original method. Obviously, in the latter case, no box instruction is emitted when the parameter is known to be a reference type; one is, however, still emitted if it is known to be a value type.

Why? Well, the reason is very simple: I don’t know how to call generic methods using the PostSharp SDK unless I already have a generic type parameter. Indeed, this is the very reason why I originally had the validation method accept object types in the previous article on this topic. I certainly have attempted to figure out how to do this, however I have been unsuccessful so far. It certainly isn’t trivial, and isn’t expected to be, even though it would seem to be a fairly standard thing at first (welcome to MSIL, by the way). If anyone out there does know how to do this, I’d love to know.

The PostSharp SDK is an unsupported tool, so I refrain from bothering the PostSharp developers with questions related to it. I’m also going to refrain from sharing the problems I encounter when doing this here as well, although I will if there’s some interest. Regardless, I will continue to attempt to figure out how to do it, and I will be sure to let the world know once I figure it out.

In order to insert some balance into this approach, our validation attributes are designed so that the validation method that actually gets overridden in all concrete validation attribute types is the original validation method and not the new generic method. Having to override two different methods would be a bit ridiculous, and would kill this whole thing for me. The generic validation method, thus, is not virtual or abstract; all it does is simply do a direct call to the original validation method.
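Given that design, a sketch of how the base attribute might wire the two methods together (my reconstruction; the base class name is hypothetical, as the original listing isn’t shown here):

```csharp
using System;

public abstract class ParameterValidationAttribute : Attribute
{
    // The only method concrete validation attributes actually override.
    public abstract void Validate(object target, object value, string parameterName);

    // Not virtual or abstract; it simply defers to the original method,
    // letting the C# compiler handle the generic-to-object conversion.
    public void GenericValidate<T>(object target, T value, string parameterName)
    {
        Validate(target, value, parameterName);
    }
}
```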

What’s the point of all of this then? Well, it gets rid of all those extraneous box instructions that would have been littering our code, and it also defers the details of the generic type parameter conversion to the C# compiler. It is also one step closer to the totally correct solution, and it gets the discussion on this topic rolling. Besides, if you remember, if a generic type parameter is bound to a reference type, a box instruction causes no performance hit. However, if it is bound to a value type, then we will get the penalty that comes with boxing. Indeed, that very case is the only one that is negatively impacted by this decision.

Frankly, it’s hardly a setback to have to work with object types instead of unconstrained generic types anyway, as the latter offer no additional capabilities over object typed values. Taking it all into account, this compromise is acceptable under the standards I have for my projects. Indeed, even the original solution is, given the immense benefits I’ve been discovering from being able to validate parameters in this fashion. Regardless, I’m not completely pleased with the final solution.

Let’s go over some constructs you’re going to see in the code. I’m not going to completely go over everything, as some things were described in the previous article, and other things you’ll need to figure out for yourself.


I have a custom type named EntityDescription that I use. This object basically consists of the ParameterDeclaration‘s ParameterType, its system Type, and its name. Assuming you have a field set to the ParameterDeclaration of the parameter we’re validating, it is initialized like so:

EntityDescription description = new EntityDescription(
  _parameter.ParameterType,
  _parameter.ParameterType.GetSystemType(_genericTypeArguments, _genericMethodArguments),
  _parameter.Name);
This is basically just used as a container for commonly needed entity information, and is used outside of the specific scenario we’re covering today.


Another custom type I use is the ValidationCallParameters type. This is a type specifically purposed for containing the data required to emit a call to one of our validation methods. I use this type for emitting both parameter and method validation calls; however, this article (and the previous article) only concerns parameter validation. Assuming the index of the parameter you are validating is stored in a variable named targetIndex, you would initialize an instance of this type like so:

ValidationCallParameters parameters = new ValidationCallParameters(
        OpCodeNumber.Ldarg,
        targetIndex,
        description.EntityType.IsGenericParameter
            ? Validator.GenericValidationMethod
            : Validator.ValidationMethod);

Note that, for the validation method, I tell it to use the GenericValidationMethod only if the parameter type is generic.

Let’s go over the actual code that emits a call to our new validation method.


/// <summary>
/// Emits a call to a method responsible for validating a described entity, using the provided parameters
/// to arm it.
/// </summary>
/// <param name="parameters">The set of data required in order to weave this validation call.</param>
/// <param name="description">A full description of the entity being validated.</param>
public void EmitValidationCall(ValidationCallParameters parameters, EntityDescription description)
{
    // This adds a new instruction sequence to the current block.
    CreateInstructionSequence();

    IMethod validationMethod = FindValidationMethod(parameters, description);

    // Load the validator instance, followed by the validation target
    // ('this', or null if the method being woven is static).
    Writer.EmitInstructionField(OpCodeNumber.Ldsfld, _validatorField);
    Writer.EmitInstruction(Context.Method.IsStatic
                                ? OpCodeNumber.Ldnull
                                : Context.Method.GetFirstParameterCode());

    Writer.EmitInstructionInt16(parameters.TargetOpCode, (short)parameters.TargetIndex);

    if (description.EntityType.IsValueType)
        Writer.EmitInstructionType(OpCodeNumber.Box, description.EntityTypeSignature);

    Writer.EmitInstructionString(OpCodeNumber.Ldstr, description.EntityName);

    // Finally, emit the call to the validation method itself.
    Writer.EmitInstructionMethod(OpCodeNumber.Callvirt, validationMethod);
}

The CreateInstructionSequence method is a simple routine that behaves in the manner described by the comment above its call. The GetFirstParameterCode extension method is described in my previous article.

As we can see here, we do emit an explicit box instruction if the type is a known value type. If it is not a known value type, however, we won’t box. There really isn’t much new here compared to the previous article, except for the call being made to FindValidationMethod, which is a new method introduced to solve this problem. Let’s take a look at that method.


/// <summary>
/// Locates the <see cref="IMethod"/> instance for our validation method.
/// </summary>
/// <param name="parameters">The set of data required for the validation call.</param>
/// <param name="description">Optional. The description of the entity we're validating.</param>
/// <returns>The <see cref="IMethod"/> instance for our validation method.</returns>
private IMethod FindValidationMethod(ValidationCallParameters parameters, EntityDescription description = null)
{
  if (null == description)
      return Context.Method.Module.FindMethod(parameters.ValidationMethod, BindingOptions.Default);

  if (description.EntityType.IsGenericParameter)
  {
      MethodInfo validationInfo = (MethodInfo) parameters.ValidationMethod;

      MethodInfo genericValidationInfo =
          validationInfo.GetGenericMethodDefinition().MakeGenericMethod(description.EntityType);

      return Context.Method.Module.FindMethod(
           genericValidationInfo, BindingOptions.RequireGenericInstance & BindingOptions.RequireGenericMask);
  }

  return Context.Method.Module.FindMethod(parameters.ValidationMethod, BindingOptions.Default);
}

This method has an optional parameter for purposes outside of what we’re covering.

So, as we can see here, if we have a generic parameter, we need to call our method in a special way. You may want to add a check that ensures ValidationMethod is a generic method before attempting to get a generic method definition from it (doing so will throw an exception if it isn’t). In our case, this is taken care of during the original initialization of the ValidationCallParameters object.

There you go, this is a step on the path to making our validation solution use generics.

(Aphex Twin’s Analord series is some great stuff, by the way).


Immutable Properties

Microsoft likes to say that an auto-implemented property declared with a private set accessor is immutable. This is not entirely true, in a number of ways. One can always use reflection to assign values to a property, regardless of whether the set accessor is private or public. But putting conspiracy theory cases like that to the side, these properties still cannot be considered immutable due to the fact that, although the outside world cannot set their values, they are settable from anywhere inside the class.
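To demonstrate the reflection point, here is all it takes to write to a property with a private set accessor (the type and property names are hypothetical):

```csharp
using System;

public class Customer
{
    // "Immutable", according to some.
    public string Name { get; private set; }
}

public static class ReflectionDemo
{
    public static void Main()
    {
        Customer customer = new Customer();

        // A private set accessor poses no obstacle to reflection.
        typeof(Customer)
            .GetProperty("Name")
            .SetValue(customer, "Set from outside!", null);

        Console.WriteLine(customer.Name);   // Set from outside!
    }
}
```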

The only type of property I would consider immutable is one that returns a read-only field and lacks a set accessor entirely. Of course, the very field that gets returned, provided it is not a value type, may very well not be immutable itself, but I digress.
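For contrast, here’s the shape of property I would consider immutable (a trivial sketch of my own):

```csharp
public class Money
{
    // A read-only field, assignable only during construction...
    private readonly decimal _amount;

    public Money(decimal amount)
    {
        _amount = amount;
    }

    // ...returned by a property that lacks a set accessor entirely.
    public decimal Amount
    {
        get { return _amount; }
    }
}
```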

I found an interesting way to assign a value to a property declared with a private set accessor that doesn’t use reflection. It does, however require some participation from the class itself. I’m not recommending actually using the approach I’m going to go over in any real-world code, as most of the time a read-only property is read-only for a good reason. Having to skirt around accessibility levels should never be the only way to proceed with an implementation, and if it is, then there is a problem with your design or approach as a whole. But, if you know what you’re doing, this may be of interest to you.

But putting all that sensible stuff aside, I’ve found an interesting way one can go about setting the value of a property with a private set accessor using Expressions. More specifically, we can take an expression of a property’s get accessor being accessed and turn it into one where a value is being assigned to the property’s set accessor. This could be useful for instances where you need to initialize some sort of domain object that features limited accessibility in regards to setting the values of its properties.

An Example: PropertyMap

I developed a PropertyMap type which allows you to define values for a type of object’s properties outside of and independent of any instance of that type. The PropertyMap is populated with values that should be set to different properties of the target type. The properties are specified through the use of an ExpressionReader class that features a ReadName method that returns the name of a member being accessed. There are plenty of examples online as to how to use Expressions in order to get the names of code elements, so I’ll defer you to those examples if you wish to learn more about this.
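The ExpressionReader itself isn’t shown in this post; a minimal sketch of what its ReadName method might look like (an assumption as to its shape, sufficient for the expressions used here):

```csharp
using System;
using System.Linq.Expressions;

public static class ExpressionReader
{
    // Reads the name of the member accessed by expressions such as
    // x => x.Label or () => Label.
    public static string ReadName(LambdaExpression expression)
    {
        MemberExpression memberExpression = expression.Body as MemberExpression;

        if (memberExpression == null)
            throw new ArgumentException("Expression must access a member.", "expression");

        return memberExpression.Member.Name;
    }
}
```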

As a side note, before we get to the code, just note I had to condense the names of the type parameters in order for the code to fit on the page the best. Typically, they’re a bit more descriptive.

Here’s the code for the Add method:

/// <summary>
/// Adds a mapping entry for the property being expressed.
/// </summary>
/// <typeparam name="TValue">The type of value returned by the property.</typeparam>
/// <param name="propertyExpression">An expression accessing a property's getter method.</param>
/// <param name="value">
/// The value to be assigned to the property during a future mapping operation.
/// </param>
public void Add<TValue>([NotNull]Expression<Func<T,TValue>> propertyExpression, TValue value)
{
    string propertyName = ExpressionReader.ReadName(propertyExpression);

    if (!ContainsKey(propertyName))
        Add(propertyName, value);
}

Alright, nothing too crazy here.

You can then pass the PropertyMap to an instance of the targeted type, which the instance can then use to assign values to its properties via successive calls to a Map function, each time providing an expression that accesses a different property.

/// <summary>
/// Maps a preconfigured value to the property being expressed.
/// </summary>
/// <typeparam name="TValue">The type of value returned by the property.</typeparam>
/// <param name="propertyExpression">
/// An expression accessing the getter method of the property to map to.
/// </param>
public void Map<TValue>([NotNull]Expression<Func<TValue>> propertyExpression)
{
    string propertyName = ExpressionReader.ReadName(propertyExpression);

    // Validation occurs here...Excluding this from the post.

    MemberExpression propertyGetterExpression
        = (MemberExpression) propertyExpression.Body;
    ConstantExpression instanceExpression
        = (ConstantExpression) propertyGetterExpression.Expression;

    T instance = (T) instanceExpression.Value;
    Action<T, TValue> propertySetter = CompileSetter<TValue>(propertyGetterExpression);

    propertySetter(instance, (TValue) this[propertyName]);
}

The above extracts the property getter expression and then calls CompileSetter to create an Action that invokes the set accessor of the type.

/// <summary>
/// Compiles a property setter method derived from the provided property getter expression.
/// </summary>
/// <typeparam name="TValue">The type of value returned by the property.</typeparam>
/// <param name="propertyGetterExpression">
/// An expression accessing a property's getter method.
/// </param>
/// <returns>
/// A compiled <see cref="Action{T,TValue}"/> which will set the value of the property
/// accessed by
/// <c>propertyGetterExpression</c> to what's provided.
/// </returns>
private static Action<T, TValue> CompileSetter<TValue>(Expression propertyGetterExpression)
{
    ParameterExpression valueParameterExpression
        = Expression.Parameter(typeof (TValue), "value");
    ParameterExpression instanceParameterExpression
        = Expression.Parameter(typeof (T), "x");

    BinaryExpression assignValueExpression
        = Expression.Assign(propertyGetterExpression, valueParameterExpression);

    Expression<Action<T, TValue>> propertySetterExpression
        = Expression.Lambda<Action<T, TValue>>(assignValueExpression,
                                               instanceParameterExpression,
                                               valueParameterExpression);
    return propertySetterExpression.Compile();
}

Let’s go over a very rough example of this in action. Here’s the participating class:

public class Example
{
    public string Label
    { get; private set; }

    public void Map(PropertyMap<Example> map)
    {
        map.Map(() => Label);
    }
}

And here’s an example of invoking that class:

PropertyMap<Example> map = new PropertyMap<Example>
                    { { x => x.Label, "This works!" } };

Example example = new Example();

example.Map(map);

// The string variable below will be set to "This works!"
string label = example.Label;

I was initially somewhat surprised that this worked; we are basically changing the x => x.Label expression into a (x, value) => x.Label = value expression. Obviously, this all fails if there is no set accessor at all.

The reason why this works is something I’ll cover in a future post.

Should I Use This in My Code?

If you are unsure and have to ask me, then no, you shouldn’t use this in your code. For most cases where you could use this, you would be at best clever, and at worst irresponsible; however, for some situations, this could be considered an appropriate approach. Going into what those specific situations might be is far beyond the scope of this article.


If there is one piece of an Exchange or Outlook appointment that is as important as it tends to be misunderstood, it would certainly have to be the Organizer property.

This lovely little fellow returns a string representing the name of the individual who owns the appointment or, as Microsoft puts it in the Outlook Object Model (OOM) documentation, “…the name of the organizer of the appointment.” As helpful as that is, what does it even mean to be the organizer of an appointment? I’m going to cover these questions, as well as go over why it is very dangerous to rely on this property in your product’s code.

What is an Organizer?

Exchange server protocol documentation defines the organizer as “the owner or creator of a conference or event.”

I hate to contradict official Exchange white papers, but although the organizer is indeed the owner of an event, they are not necessarily always the “creator”. A delegate for a manager can create an appointment on that manager’s calendar; however, the delegate will not be marked as the organizer; rather, the manager will be. You may say, “Well, the delegate carries out actions in the manager’s name,” but that is not entirely correct, because a normal delegate actually very much retains their own identity, and this is reflected by various properties on the appointment that will bear the delegate’s name.

Let’s do a better job of defining what an organizer is: the organizer of an appointment can be described as the owner of the calendar in which an appointment is first created. By saying “owner of the calendar”, I’m referring to the individual to whom the mailbox store containing the calendar belongs. The organizer is commonly referred to (by Microsoft and others) as the meeting organizer; however, it is important to remember that not all appointments are meetings, but all appointments have an organizer. This fact is important in the very definition of what a meeting is: an appointment that is extended “…to contain attendees in addition to the organizer.” [MS-OXOCAL -]

So what’s the importance of being an organizer? What special privileges or capabilities does one acquire when donning the Organizer cap? What are some important limitations?

  • The fact that an individual is an organizer is displayed to attendees in the meeting request; this alone is probably one of the most important consequences of being the organizer, at least as far as day-to-day business dealings are concerned.
  • The organizer is the individual that will receive all responses to any meeting requests sent by or through the organizer; that is, unless it has been configured for all responses to go to a delegate instead.
  • Once an appointment object has been saved, you cannot change who the organizer is (or so they say). This little factoid is probably one of the first bits of information developers or users may encounter when inquiring about the nature of the appointment’s organizer. In fact, I’ll bet that’s the reason you’re reading this very article.
  • Other than some special do’s and don’ts that one must follow when working with appointments in the very low levels of things, there’s really not much else.

While there are certainly other actions an organizer can take that a normal attendee typically cannot (such as adding an attendee, removing an attendee, etc.), all of these types of actions are wholly dependent on the permissions and type of access the individual has in relation to the original appointment item; they have nothing to do with being the organizer. This is why a delegate can do everything an organizer can.

Regardless of the actual limited number of automatic perks that come with being an organizer, many third-party solutions out there tend to use the fact that someone is an organizer as a simple way to see if the user owns and thus can modify/mess around with an appointment. Although nothing is easier than checking if the current user’s name matches the string returned by the Organizer property, this is a dangerous way to proceed for a number of reasons we’ll get into later.

How is the Organizer’s Identity Determined?

Folks who work with Outlook solutions generally refer to the OOM’s Organizer property of an appointment item to determine who the organizer is. As I’ve stated before, this property returns a string representing the name of the organizer. So…is that it then? Is the identity of the organizer persisted through a simple string property on the appointment object? Where does this Organizer property actually get its value?

These are all questions with, surprisingly, unclear answers. While some of the above is addressed by Microsoft on MSDN, they do not (from what I’ve seen) mention anywhere the exact origin of the value returned by the Organizer property.

It takes only simple deduction to realize that Outlook is aware of more about the organizer than simply the name of the organizer. For instance, create a new appointment on your calendar, and then click on the Scheduling Assistant ribbon button in the Show group. You’ll see something similar to the following:

Scheduling Assistant Displaying the Organizer

If you look at the icon next to my name, you’ll see that I’m the organizer of the meeting (the icon being the “organizer” icon and all). So, we see that it knows the name of the organizer. Isn’t that what we get by looking at the Organizer property?

Well my friends, you’ll be pleased (horrified) to know that the name shown in the above picture and the value returned by the Organizer property do not come from the same place. In fact, they are not linked at all; the fact that they tend to be the same value is merely a consequence of actions taken by the client. It is indeed possible for them to differ in value (even if they originally were the same), but that’s outside the scope of this article.

Back on topic, if we take a closer look at the entry above, we’ll see that Outlook is actually aware of the true identity of the user that is marked as the organizer. You can see this simply by moving your mouse over the name, which will result in something similar to the image below:

Scheduling Assistant Displaying the Organizer with Additional Information

The mere fact that we’re seeing the card pictured above should tell us that we are looking at the user’s actual address entry, along with some other information. Indeed, clicking on one of the icons in the card will lead us to the user’s contact details.

What does this all tell us? This tells us that Outlook and Exchange are intimately aware of the identity of the organizer of the appointment; they do not rely simply on a literal name for the organizer. They are aware of enough information regarding the organizer to be able to pull the contact information for the user. What does that require? It requires the entry ID of the entry for the user in an address book (i.e. the GAL).

So where is the organizer’s information stored in the appointment, and in what form? Roughly speaking, information regarding the organizer is stored in two separate places, one of them being, in my opinion, the more official of the two as far as what makes someone an actual “organizer”; however, each has its own set of consequences for the user indicated as being the organizer. Let’s go over those two places that collectively determine who the organizer is.

I: The Recipient Table

The designation of the organizer in a meeting’s recipient PropertyRowSet is what, I believe, actually decides who the organizer is, as it has the greatest amount of influence on the functional implications of having the status of being the organizer.

A PropertyRowSet is a data structure discussed in the [MS-OXCDATA] Exchange server protocol document; it is simply a structure containing a counted number of PropertyRow structures. Essentially, it is a table where the number of columns in each row is not exactly set in stone (sounds like something right up Microsoft’s alley). When discussing a recipient table, these PropertyRow structures are actually RecipientRow structures, which are also introduced in [MS-OXCDATA], but expanded upon in [MS-OXOCAL], which goes over the actual properties one may encounter in an entry in the recipient table.
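To make the shape of these structures a bit more concrete, here is a rough C# sketch of a PropertyRowSet. To be clear, the type and member names here are my own inventions, not Microsoft’s; real rows are packed binary structures on the wire, and I’m modeling a row as a property-tag-to-value dictionary purely for illustration:

```csharp
using System;
using System.Collections.Generic;

// A rough, simplified model of the structures described in [MS-OXCDATA].
// Each row is essentially a bag of property-tag -> value pairs; this is
// what lets different rows carry a different number of "columns".
class PropertyRow
{
    public Dictionary<uint, object> Properties { get; }
        = new Dictionary<uint, object>();
}

// A counted collection of rows; for a recipient table, each row would
// correspond to a RecipientRow.
class PropertyRowSet
{
    public List<PropertyRow> Rows { get; } = new List<PropertyRow>();

    public int Count => Rows.Count;
}
```

Viewed this way, a recipient table is just a PropertyRowSet where every row describes one recipient of the message.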

By the way, I love how Microsoft introduces the concept of a RecipientRow: “It is rather complex but can be considered as a sequence of three different parts…”

When we step outside of the design and into the more real-world side of things, we’ll find implementations of the recipient table on any type of message, including appointments and meetings. We can retrieve the recipient table by making an IMAPIProp::OpenProperty call, specifying PR_MESSAGE_RECIPIENTS for the property tag and IID_IMAPITable for the interface identifier. You’ll be able to do this more easily by using either OutlookSpy or MFCMapi, both of which have options named something similar to “Open Recipients Table”. I’m not sure how easy it is to get to what we want here from the OOM, or even Redemption, but it should be accessible using the appropriate property tags (at least in Redemption’s case…I try to avoid the OOM when doing anything off the beaten path).

Inside the recipient table, we’ll find a RecipientRow for every single attendee to the meeting, including one for the organizer. Each row carries one particular property of importance: the PR_RECIPIENT_FLAGS property. You can find information regarding the specifics of this property in [MS-OXOCAL]; however, I’ll help you out a bit and plop a picture relevant to what we’re talking about below:

Structure of the PR_RECIPIENT_FLAGS Property

We can see above a number of bit fields we can set. The bit we’re interested in is the second bit, known as the recipOrganizer bit. Setting this bit to 1 makes the user associated with the containing RecipientRow the de facto organizer of the appointment. This implies that only a single RecipientRow should have this bit set; I’m not sure what will happen if more than one does have it set, you’ll have to build a custom MAPI client to find out.
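If you end up reading PR_RECIPIENT_FLAGS values yourself, the check is a simple bit test. The 0x00000002 value for the recipOrganizer flag comes from [MS-OXOCAL]; the helper wrapping it is just my own sketch:

```csharp
using System;

static class RecipientFlags
{
    // Per [MS-OXOCAL], the recipOrganizer flag occupies the second bit
    // of the PR_RECIPIENT_FLAGS value.
    public const int RecipOrganizer = 0x00000002;

    // Returns true if the given PR_RECIPIENT_FLAGS value marks its
    // RecipientRow as belonging to the organizer.
    public static bool IsOrganizer(int recipientFlags)
        => (recipientFlags & RecipOrganizer) != 0;
}
```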

So, in other words: the organizer is officially determined to be the user who has the recipOrganizer bit set in their associated RecipientRow in the appointment’s recipient table. There are other properties in the RecipientRow besides the recipient flags, including one or more that store the recipient’s entry ID. No doubt, then, it is through the user’s RecipientRow that Outlook is able to retrieve the kind of information we were treated to earlier when hovering our mouse over a name in the Scheduling Assistant.

Unfortunately, this is not the entire story. Despite the designation via the user’s RecipientRow being the official way of becoming the organizer, it may surprise you to know that the Organizer property returned by the OOM (I can’t speak for Redemption; I didn’t test it), which is the most common means of exposing the organizer’s identity via an API (in the case of an Outlook solution, at least), is in no way influenced by the recipOrganizer bit, the PR_RECIPIENT_FLAGS property, or anything at all to do with the recipient table!

This leads us to the second place an organizer is “set”.

II: The PR_SENT_REPRESENTING_NAME Property


The value returned by the Organizer property does not originate from the appointment’s recipient table at all. Instead, it actually comes from the PR_SENT_REPRESENTING_NAME property, found on the appointment itself. So, there you go…that’s where the money is.

You’re probably already aware (from being told so somewhere else) that you cannot change the organizer. Consequently, the Organizer property is read-only; you cannot set it. Well, while it is more or less true that you cannot change who the real organizer is, you can influence what Organizer returns.

It may surprise you to know that by changing the value of PR_SENT_REPRESENTING_NAME, you will change what the Organizer property returns. Some places in Outlook do use this property to display the organizer’s name, so you will influence those areas. In some places, however, Outlook defers to the display name stored in the recipient table. It does not seem possible to edit a RecipientRow once a message has been saved; at least, I am unable to do so using OutlookSpy and MFCMapi. Perhaps it is still technically possible with the Redemption library or raw Extended MAPI. I’d be interested to know if anyone has any insight on that.

Here’s some code to demonstrate how to change the Organizer property in Outlook (when we’re in Outlook, we have to get the correct DASL property name):

private void ChangeOrganizer(AppointmentItem appointment)
{
  PropertyAccessor accessor = appointment.PropertyAccessor;

  // The DASL name of the PR_SENT_REPRESENTING_NAME (PT_UNICODE) property.
  accessor.SetProperty(
    "http://schemas.microsoft.com/mapi/proptag/0x0042001F",
    "Matt Weber");

  // 'appointment.Organizer' will now return "Matt Weber".
}

And here’s some code to do the same with Redemption, which is a little bit simpler:

private void ChangeOrganizer(RDOAppointmentItem appointment)
{
  appointment.SentOnBehalfOfName = "Matt Weber";
}

An interesting thing to note is that the PR_SENT_REPRESENTING_NAME property can be found on both appointment and normal mail messages, and thus can be set on either. Regardless, only appointment objects will have an actual “organizer”. Changing the PR_SENT_REPRESENTING_NAME property on an email will change the name of the sender of the email.

Why Relying on the Organizer Property is Dangerous

Seeing how the Organizer property (and thus the PR_SENT_REPRESENTING_NAME property) is not actually guaranteed to reflect who the real organizer is, relying on it in your program code can be very dangerous. It is really a bad idea, and I’ll provide a very real-world example showing why.

If you cater to customers that belong to the corporate world, you should be aware of and always plan on the possibility of a merger or acquisition involving that customer occurring. Many times, during the process of the merger or acquisition, company-wide changes to the display names of their users will be rolled out via Active Directory. Perhaps the company is now becoming a more “world-wide” company, and thus needs to append the name of a country to all employees’ display names based on the location of their office.

When changes are made to User objects in Active Directory, those changes are going to eventually propagate to Exchange’s Global Address List via a management agent. This means, then, that those changes will be reflected in all address entry objects returned when operating within an Outlook or Exchange environment. Guess what happens to the values returned by properties like PR_SENT_REPRESENTING_NAME and, ultimately, the OOM’s Organizer property? Nothing! They will remain the same; they are not synchronized or hooked up to Active Directory in any way, shape, or form.

So what will happen then, if you happen to have code responsible for determining if the current user is the organizer of an appointment, which happens to use current Active Directory information for figuring out who the current user is, and the appointment’s Organizer property for figuring out who the organizer is? It won’t work anymore, because they will be completely out of sync.

Therefore, it is very dangerous to rely on, frankly, any string property relating to users exposed by MAPI’s data model.

What’s the alternative here?

The proper way to establish organizer identity is to look past the OOM’s Organizer property and PR_SENT_REPRESENTING_NAME property, and instead use the PR_SENT_REPRESENTING_ENTRYID property (which is exposed by Redemption with the SentOnBehalfOfEntryId property) to retrieve the address entry of the user. This method will survive abrupt, organization-wide changes committed to Active Directory.

The best alternative, however, would be to defer to the recipient table. You would need to go over each RecipientRow until you located the one with the recipOrganizer bit set, and then use the PR_ENTRYID property in that RecipientRow to open the user’s associated address entry. Note that there is also a PR_RECIPIENT_ENTRYID property which may be available; I am not sure what the difference between the two properties might be, but in all instances I’ve observed, PR_ENTRYID and PR_RECIPIENT_ENTRYID had the same value.
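That scan can be sketched in C# as follows. Keep in mind this is only a sketch: the rows are modeled here as property-tag-to-value dictionaries purely for illustration, whereas in a real solution they would come from the table opened via PR_MESSAGE_RECIPIENTS; the lookup helper itself is hypothetical, though the property tags are the standard MAPI values:

```csharp
using System;
using System.Collections.Generic;

static class OrganizerLookup
{
    // Standard MAPI property tags (property ID + type).
    public const uint PR_RECIPIENT_FLAGS = 0x5FFD0003; // PT_LONG
    public const uint PR_ENTRYID = 0x0FFF0102;         // PT_BINARY

    // The recipOrganizer bit of PR_RECIPIENT_FLAGS, per [MS-OXOCAL].
    public const int RecipOrganizer = 0x00000002;

    // Scans the rows of a recipient table (modeled here as dictionaries)
    // and returns the PR_ENTRYID of the row whose PR_RECIPIENT_FLAGS has
    // the recipOrganizer bit set, or null if no such row exists.
    public static byte[] FindOrganizerEntryId(
        IEnumerable<IReadOnlyDictionary<uint, object>> rows)
    {
        foreach (var row in rows)
        {
            if (row.TryGetValue(PR_RECIPIENT_FLAGS, out object flags)
                && ((int)flags & RecipOrganizer) != 0
                && row.TryGetValue(PR_ENTRYID, out object entryId))
            {
                return (byte[])entryId;
            }
        }

        return null;
    }
}
```

The entry ID returned would then be handed to the address book (e.g., IAddrBook::OpenEntry) to open the organizer’s actual address entry, which survives display-name changes in Active Directory.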

© 2012-2013 Matt Weber. All Rights Reserved. Terms of Use.