Matt Weber

I'm the Senior Software Architect at Emergingsoft where I lead the software development team. I am also the owner of this website. I enjoy well-designed code, independent thought, and the application of rationality in general. You can reach me at matt@badecho.com.

 

Office add-ins typically feature a specific type that is truly the star of the show, with responsibilities that include facilitating the connection to the target product, extending the Office ribbon, and more. We typically refer to this type as the Connect class, and given the number of responsibilities it can potentially acquire, it can easily turn into a very large, bulky, and perhaps slightly Lovecraftian entity.

When such a thing becomes reality for your add-in, you may start to wonder if it is at all possible to break these responsibilities up into separate types; indeed, separation is possible, and this article will go into how. Understanding the solution, of course, requires understanding the actual problem at hand.

Before we start, however, I would note that if you use VSTO to develop your add-in, this article most likely does not apply to you. I can’t be sure, as I stay very far away from the thing. If interfaces such as IDTExtensibility2 and IRibbonExtensibility appear alien to you, then I can assure you that reading any more after this point would only serve to consume your limited time.

The Problem

The Parties Involved

An add-in’s implementation of the IDTExtensibility2 interface serves as its connection to the Office product it targets. This is why we typically use Connect as the name for the implementing class. Without it, our add-in has no way of being loaded/unloaded by the host Office application.

If we want our add-in to improve upon Office’s ribbon, then we must implement the IRibbonExtensibility automation interface. Office uses this interface to load the custom ribbon XML markup from the add-in and to execute callbacks via IDispatch::GetIDsOfNames and IDispatch::Invoke. Because it is Office that makes use of these methods, it follows that Office must be able to locate their implementation, and the manner in which it does so may be unclear to those coming from a strictly .NET/managed background.

In order to allow our add-in to extend the ribbon, the usual course of action for those of us developing in a managed environment is to simply add an IRibbonExtensibility implementation to our Connect class. Office will pick up on this just fine (although many of us at the time may not be very sure as to how it does so), resulting in our envisioned custom ribbon making it onto the live application instance.
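For context, here’s a minimal sketch of that all-in-one arrangement, the one this article will be splitting apart (this is not the full wizard output; the GUID and ribbon markup are placeholders, and the Extensibility and Office core interop namespaces are assumed to be imported):

[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000000")] // placeholder
[ClassInterface(ClassInterfaceType.None)]
public sealed class Connect : IDTExtensibility2, IRibbonExtensibility
{
    public void OnConnection(object application, ext_ConnectMode connectMode, object addInInst, ref Array custom) { }
    public void OnDisconnection(ext_DisconnectMode removeMode, ref Array custom) { }
    public void OnAddInsUpdate(ref Array custom) { }
    public void OnStartupComplete(ref Array custom) { }
    public void OnBeginShutdown(ref Array custom) { }

    // Office calls this to retrieve the add-in's custom ribbon markup.
    public string GetCustomUI(string ribbonID)
    {
        return "<customUI xmlns=\"http://schemas.microsoft.com/office/2009/07/customui\"/>";
    }

    // A typical ribbon callback; Office locates and invokes it by name via IDispatch.
    public string GetButtonLabel(IRibbonControl control)
    {
        return "My Button";
    }
}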

Let’s not forget that Office has a few other (albeit less common) extensibility points we may be interested in, such as the ability to provide custom task panes through the use of the ICustomTaskPaneConsumer interface. The prime candidate for implementing such interfaces will, yet again, most likely be our trusty Connect class.

A Growing Manifestation

We like to keep our code clean and compartmentalized, with each module or class responsible for as few things as possible. That desire quickly becomes an impossible dream for many of us once we start dabbling in areas such as extension of the Office ribbon.

If we’re dealing with anything more than the simplest of add-ins, we’re going to see a very large number of callbacks being added to our Connect class. In the end, our Connect class becomes very large, saddled with far too many responsibilities.

So it may come to be that you want to separate your add-in entry point code (IDTExtensibility2) from your ribbon extension code (IRibbonExtensibility) into two separate entities. If you know little of the mechanisms in use by Office in order to tap into your IRibbonExtensibility implementation, this may seem nigh impossible at first.

If we do some blind digging around how Outlook seems to be finding and interacting with our add-in, we can see that the scant registration data pertaining to our add-in makes no mention of its ribbon extending capabilities; in most cases, the only thing related to our add-in that is registered with the system is our Connect class.

Office is a strictly unmanaged beast. Consequently, it certainly doesn’t make use of .NET Reflection APIs in order to figure out if your add-in’s entry point implements IRibbonExtensibility. So, how does Office know if your add-in extends its ribbon?

COM-mon Knowledge, Baby

Hopefully you wouldn’t be surprised if I told you that the add-in you’ve been developing is a COM add-in. Indeed, your add-in is an in-process COM server that implements the IDTExtensibility2 interface as described in Msaddndr.dll (which just rolls off the tongue, I know).

So, hopefully you don’t feel it to be a very large leap of faith to go a bit further here and extrapolate that Office is making use of the basic set of tools native to COM in order to do all of its digging (I’d think the mention of the IDispatch interface earlier might have been a bit of a giveaway).

Your Connect class (well, actually the IDTExtensibility2 implementation found in your unmanaged shim; you are using a shim, right?) is registered as an in-process server for your add-in via an InProcServer32 registry key. Office will get an interface pointer to the IUnknown of the object that implements IDTExtensibility2, and then use that pointer to QueryInterface for any subsequent interfaces (such as IRibbonExtensibility, etc.).
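In managed terms, what Office does conceptually amounts to something like the following sketch (pUnkConnect stands in for the IUnknown pointer Office already holds, and the IID shown is a placeholder for IRibbonExtensibility’s actual IID):

// Purely illustrative: Office performs the equivalent of this natively.
Guid ribbonIid = new Guid("00000000-0000-0000-0000-000000000000"); // placeholder IID

IntPtr pRibbon;
int hr = Marshal.QueryInterface(pUnkConnect, ref ribbonIid, out pRibbon);

if (hr == 0) // S_OK: the add-in (or its aggregate) supports ribbon extensibility.
{
    // Office would now request the custom UI markup through this pointer...
    Marshal.Release(pRibbon);
}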

That’s all we need to know.

The Solution

If we want the responsibilities of extending Office’s ribbon delegated to another class, we need to return a pointer to that class when queried. Before we get into how we do that, let’s put what we’re working with on the table first.

We’ll be working with two managed classes: one which provides the point of connection to Office (Connect) and one which extends the Office ribbon (Ribbon).

Connect.cs
[ComVisible(true)]
[ProgId(CONNECT_PROG_ID)]
[Guid("5842D85C-3E55-423F-A114-9D3368EBC64F")]
[ClassInterface(ClassInterfaceType.None)]
public sealed class Connect : IDTExtensibility2
{
.
.
.
}
Ribbon.cs
[ComVisible(true)]
public sealed class Ribbon : IRibbonExtensibility
{
.
.
.
}

With these two classes, we can separate our connection logic from our ribbon logic. The tough part is getting Office to make use of our Ribbon class; as was stated previously, by default, the COM shim wizards require the Connect class to implement IRibbonExtensibility in addition to IDTExtensibility2 in order for us to be able to influence the Office ribbon.

So, naturally, the changes we’ll need to make will be to our unmanaged COM shim. The various file/class names I’ll be referencing here should be the ones generated by the Office COM shim wizard; if I’m making references to files that appear wholly foreign to you, let me know in the comments, as there is a chance that I’ve renamed them.

The first file we’ll be modifying is the header file for our outer aggregator (IOuterAggregator). This is the interface implemented by our ConnectProxy class and handed to a managed component, so that the managed component can supply the shim with references to the various managed instances the shim either uses directly in response to queries or forwards those queries to.

By default, the outer aggregator is used to supply a reference to only a single managed component, that being our Connect class. We need to be able to supply an additional reference to our Ribbon class, so we modify its only method’s signature to accept an additional IUnknown.

IOuterAggregator.h
__interface __declspec(uuid("7b70c487-b741-4973-b915-c812a91bdf63"))
IOuterAggregator : public IUnknown
{
	HRESULT __stdcall SetInnerPointers(IUnknown *pUnkRibbon, IUnknown *pUnkBlind);
};

Don’t forget that there is also a managed COM import declaration of this type in either your add-in or (if you didn’t combine the two) your external “managed aggregator”. This will also need to be updated.

IOuterAggregator.cs
[ComImport]
[Guid("7B70C487-B741-4973-B915-C812A91BDF63")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
internal interface IOuterAggregator
{
    void SetInnerPointers(IntPtr pUnkRibbon, IntPtr pUnkBlind);
}

The next file we’ll be modifying is the header file for the class that acts as the proxy between Office and our managed components (ConnectProxy). The first modification we need to make is the addition of a separate pointer that will hold (indirectly of course) a reference to our managed Ribbon instance. We can simply add a new IUnknown declaration to the private section of the header file. In the end, we should have something like the following:

ConnectProxy.h
.
.
.
private:
    IDTExtensibility2 *_pConnect;
    CLRLoader *_pCLRLoader;
    IUnknown *_pUnkBlind;
    IUnknown *_pUnkRibbon;
};

OBJECT_ENTRY_AUTO(__uuidof(ConnectProxy), ConnectProxy)

Above, we can see the new IUnknown declaration for our ribbon. And before you yell at me about using Hungarian notation, know that the code generated by the wizard uses COM notation, and I continue doing so here in order to remain consistent.

Next we need to make some changes to the COM map declared in this header file. Normally, there is no mention of IRibbonExtensibility in our COM map, even if the COM shim wizard has been configured to support it, and that’s because of the following line:

COM_INTERFACE_ENTRY_AGGREGATE_BLIND(_pUnkBlind)

This macro essentially causes queries for IIDs that do not match previous COM map entries to be forwarded to the provided pointer. So any queries for IRibbonExtensibility would hit up _pUnkBlind, which essentially points to the same thing pointed to by _pConnect.

Since we want our _pUnkRibbon instance to be forwarded ribbon-related queries, we’ll need to add it to the COM map. However, just adding an entry for IRibbonExtensibility is not enough; an entry for the IDispatch interface is also required. This is because the IDispatch interface provides the mechanism used when callbacks are wired up to the various elements of a ribbon. Office sees that it needs to call a method named GetButtonLabel in order to get a button’s label, so it queries for IDispatch and uses that to locate the method. Remember, we’re in unmanaged land; there’s no Reflection API to help us here.

Here’s what our COM map will look like when we’re done:

ConnectProxy.h
.
.
.
public:
.
.
.
BEGIN_COM_MAP(ConnectProxy)
    COM_INTERFACE_ENTRY(IDTExtensibility2)
    COM_INTERFACE_ENTRY(IOuterAggregator)
    COM_INTERFACE_ENTRY_AGGREGATE(__uuidof(IRibbonExtensibility), _pUnkRibbon)
    COM_INTERFACE_ENTRY_AGGREGATE(__uuidof(IDispatch), _pUnkRibbon)
    COM_INTERFACE_ENTRY_AGGREGATE_BLIND(_pUnkBlind)
END_COM_MAP()

The final change we need to make to this header file is related to the changes we made to the IOuterAggregator interface. Because ConnectProxy implements said interface, we need to propagate the changes we made to it to the proxy’s header file.

ConnectProxy.h
.
.
.
public:
.
.
.
STDMETHOD(SetInnerPointers)(IUnknown* pUnkRibbon, IUnknown* pUnkBlind);
.
.
.

That’s it as far as the header file is concerned. Next, because we made changes to the header file, we need to carry them over to the actual class file. We added a new member to the class, so be sure to initialize the _pUnkRibbon pointer to null in the constructor’s initializer list, as well as add the necessary cleanup logic for it in the FinalRelease method.
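As a rough sketch of those routine changes (the remainder of the wizard-generated constructor and FinalRelease bodies are elided and will vary from shim to shim):

ConnectProxy.cpp
ConnectProxy::ConnectProxy()
    : _pConnect(NULL), _pCLRLoader(NULL), _pUnkBlind(NULL), _pUnkRibbon(NULL)
{ }

void ConnectProxy::FinalRelease()
{
    // ...existing cleanup of _pConnect, _pUnkBlind, and the CLR loader...

    if (_pUnkRibbon != NULL)
    {
        _pUnkRibbon->Release();
        _pUnkRibbon = NULL;
    }
}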

Other than those routine concerns, the main changes will be to the SetInnerPointers method. It needs to reflect the updated declaration we committed to the IOuterAggregator.h file, as well as store the additional ribbon pointer being provided to us.

ConnectProxy.cpp
HRESULT __stdcall ConnectProxy::SetInnerPointers(IUnknown* pUnkRibbon, IUnknown* pUnkBlind)
{
    if (pUnkRibbon == NULL || pUnkBlind == NULL)
    {
        return E_POINTER;
    }

    if (_pUnkRibbon != NULL || _pUnkBlind != NULL)
    {
        return E_UNEXPECTED;
    }

    _pUnkRibbon = pUnkRibbon;
    _pUnkRibbon->AddRef();

    _pUnkBlind = pUnkBlind;
    _pUnkBlind->AddRef();

    return S_OK;
}

The last class we need to change is our managed implementation of the IInnerAggregator interface. This is where the pointer to our now-separate ribbon type gets provided to the shim, so we need to add the code required to do this.

InnerAggregator.cs
[ClassInterface(ClassInterfaceType.None)]
[ProgId("Your.ProgId")]
internal sealed class InnerAggregator : IInnerAggregator
{
    /// <inheritdoc/>
    public void CreateAggregatedInstance(IOuterAggregator outerObject)
    {
        IntPtr pOuter = IntPtr.Zero;
        IntPtr pBlindInner = IntPtr.Zero;
        IntPtr pRibbonInner = IntPtr.Zero;

        try
        {
            pOuter = Marshal.GetIUnknownForObject(outerObject);

            Connect connect = new Connect();
            Ribbon ribbon = new Ribbon();

            pBlindInner = Marshal.CreateAggregatedObject(pOuter, connect);
            pRibbonInner = Marshal.CreateAggregatedObject(pOuter, ribbon);

            outerObject.SetInnerPointers(pRibbonInner, pBlindInner);
        }
        finally
        {
            if (pOuter != IntPtr.Zero)
                Marshal.Release(pOuter);
            if (pBlindInner != IntPtr.Zero)
                Marshal.Release(pBlindInner);
            if (pRibbonInner != IntPtr.Zero)
                Marshal.Release(pRibbonInner);

            Marshal.ReleaseComObject(outerObject);
        }
    }
}

And that does it. Any ribbon-related queries made by Office should be correctly redirected to your managed Ribbon class, and connection-related activities will continue to be covered by the managed Connect class. No additional registry entries need to be made for any of this; everything is handled by the code we added. Enjoy slightly more neat and compartmentalized Office add-in code.

 

In a previous article I wrote, I introduced a way to compare expressions on the basis of their makeup as opposed to simple reference equality checks. This functionality was provided in the form of an IEqualityComparer<T> implementation, which itself was supported by several components.

Like all equality comparers, our expression equality comparer must be able to perform two functions:

  1. Determine the equality between two given expression instances
  2. Generate a hash code for a given expression instance

The previous article covered the first of these jobs; in this article, we’ll be covering the hash code generation aspect of the comparer.

Revisiting Our Equality Comparer

Just to make sure we’re all on board regarding how and where our hash code generator will be used, let’s take a second to revisit the equality comparer presented in the previous article.

ExpressionEqualityComparer.cs
/// <summary>
/// Provides methods to compare <see cref="Expression"/> objects for equality.
/// </summary>
public sealed class ExpressionEqualityComparer : IEqualityComparer<Expression>
{
    private static readonly ExpressionEqualityComparer _Instance = new ExpressionEqualityComparer();

    /// <summary>
    /// Gets the default <see cref="ExpressionEqualityComparer"/> instance.
    /// </summary>
    public static ExpressionEqualityComparer Instance
    {
        get { return _Instance; }
    }

    /// <inheritdoc/>
    public bool Equals(Expression x, Expression y)
    {
        return new ExpressionComparison(x, y).ExpressionsAreEqual;
    }

    /// <inheritdoc/>
    public int GetHashCode(Expression obj)
    {
        return new ExpressionHashCodeCalculator(obj).Output;
    }
}

As we can see above, the brunt of the equality comparison work is performed by the ExpressionComparison class, which is a type that we covered in the first article in this series.

If you look at the code for the ExpressionComparison type, you’ll see that it derives from the .NET-provided ExpressionVisitor class. ExpressionComparison was subclassed from ExpressionVisitor because that visitor infrastructure naturally mirrors the tree structure of the expressions being compared.

In order to take into account all the various types of expressions and their properties, we needed to override the majority of the virtual (Visit[x]) methods exposed by ExpressionVisitor. We did not have to override all of them, only the ones targeting expression types whose makeup was unique among all other types.

Just like we did with ExpressionComparison, our ExpressionHashCodeCalculator will also subclass ExpressionVisitor, and it will behave in much the same way, except that it will calculate a running total of the hash codes of all the various properties significant to the given type of expression.

Expression Hash Code Calculator

Now, let’s get into the meat of the matter. As usual, I need to state a disclaimer regarding the usage of the code before we go over it. Although I’ve personally tested all parts of the code, I would encourage you to do so yourself before just plopping it into whatever you’re doing.

This code has received a fair amount of scrutiny and is used in some important parts of a larger project I’ve been authoring. In fact, the hash code generation aspect of my comparer rests at the heart of the reasons why I started out on this endeavor (it was my wish to be able to use expressions as keys in a dictionary).
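As a quick sketch of that end goal (assuming, as is the comparer’s entire purpose, that structurally identical expressions hash and compare as equal):

var cache = new Dictionary<Expression, string>(ExpressionEqualityComparer.Instance);

Expression<Func<int, bool>> first = x => x > 5;
Expression<Func<int, bool>> second = x => x > 5;

cache[first] = "some cached result";

// 'second' is a different instance, but it is structurally identical to 'first',
// so the lookup below succeeds.
bool found = cache.ContainsKey(second);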

We’ll first take a look at the code for the ExpressionHashCodeCalculator type, and then discuss its merits afterwards.

ExpressionHashCodeCalculator.cs

Update: Thanks to Denis in the comments for pointing out that there was a lack of support for constant collections; code has now been updated to support constant collections.

/// <summary>
/// Provides a visitor that calculates a hash code for an entire expression tree.
/// This class cannot be inherited.
/// </summary>
public sealed class ExpressionHashCodeCalculator : ExpressionVisitor
{
    /// <summary>
    /// Initializes a new instance of the <see cref="ExpressionHashCodeCalculator"/> class.
    /// </summary>
    /// <param name="expression">The expression tree to walk when calculating the hash code.</param>
    public ExpressionHashCodeCalculator(Expression expression)
    {
        Visit(expression);
    }

    /// <summary>
    /// Gets the calculated hash code for the expression tree.
    /// </summary>
    public int Output
    { get; private set; }

    /// <summary>
    /// Calculates the hash code for the common <see cref="Expression"/> properties offered by the provided
    /// node before dispatching it to more specialized visit methods for further calculations.
    /// </summary>
    /// <inheritdoc/>
    public override Expression Visit(Expression node)
    {
        if (null == node)
            return null;

        Output += node.GetHashCode(node.NodeType, node.Type);

        return base.Visit(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitBinary(BinaryExpression node)
    {
        Output += node.GetHashCode(node.Method, node.IsLifted, node.IsLiftedToNull);

        return base.VisitBinary(node);
    }

    /// <inheritdoc/>
    protected override CatchBlock VisitCatchBlock(CatchBlock node)
    {
        Output += node.GetHashCode(node.Test);

        return base.VisitCatchBlock(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitConstant(ConstantExpression node)
    {
        IEnumerable nodeSequence = node.Value as IEnumerable;

        if (null == nodeSequence)
            Output += node.GetHashCode(node.Value);
        else
        {
            foreach (object item in nodeSequence)
            {
                Output += node.GetHashCode(item);
            }
        }

        return base.VisitConstant(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitDebugInfo(DebugInfoExpression node)
    {
        Output += node.GetHashCode(node.Document,
                                        node.EndColumn,
                                        node.EndLine,
                                        node.IsClear,
                                        node.StartColumn,
                                        node.StartLine);

        return base.VisitDebugInfo(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitDynamic(DynamicExpression node)
    {
        Output += node.GetHashCode(node.Binder, node.DelegateType);

        return base.VisitDynamic(node);
    }

    /// <inheritdoc/>
    protected override ElementInit VisitElementInit(ElementInit node)
    {
        Output += node.GetHashCode(node.AddMethod);

        return base.VisitElementInit(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitGoto(GotoExpression node)
    {
        Output += node.GetHashCode(node.Kind);

        return base.VisitGoto(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitIndex(IndexExpression node)
    {
        Output += node.GetHashCode(node.Indexer);

        return base.VisitIndex(node);
    }

    /// <inheritdoc/>
    protected override LabelTarget VisitLabelTarget(LabelTarget node)
    {
        Output += node.GetHashCode(node.Name, node.Type);

        return base.VisitLabelTarget(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitLambda<T>(Expression<T> node)
    {
        Output += node.GetHashCode(node.Name, node.ReturnType, node.TailCall);

        return base.VisitLambda(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitMember(MemberExpression node)
    {
        Output += node.GetHashCode(node.Member);

        return base.VisitMember(node);
    }

    /// <inheritdoc/>
    protected override MemberBinding VisitMemberBinding(MemberBinding node)
    {
        Output += node.GetHashCode(node.BindingType, node.Member);

        return base.VisitMemberBinding(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        Output += node.GetHashCode(node.Method);

        return base.VisitMethodCall(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitNew(NewExpression node)
    {
        Output += node.GetHashCode(node.Constructor);

        return base.VisitNew(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitParameter(ParameterExpression node)
    {
        Output += node.GetHashCode(node.IsByRef);

        return base.VisitParameter(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitSwitch(SwitchExpression node)
    {
        Output += node.GetHashCode(node.Comparison);

        return base.VisitSwitch(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitTypeBinary(TypeBinaryExpression node)
    {
        Output += node.GetHashCode(node.TypeOperand);

        return base.VisitTypeBinary(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitUnary(UnaryExpression node)
    {
        Output += node.GetHashCode(node.IsLifted, node.IsLiftedToNull, node.Method);

        return base.VisitUnary(node);
    }
}

As you can see from the above code, the general method for calculating the hash code of a given expression is to visit all of its constituent parts and then generate hash codes from the properties significant to those parts.

The GetHashCode methods being invoked in the code are not native to the types they are being invoked on. Rather, they are extension methods, and are something I talked about in another article I wrote.

Each step of the calculation tacks the hash codes of the various parts onto the Output property, which holds the running total for our hash code. The calculation is complete once the Visit call issued from the constructor has finished walking the tree.

There are many virtual Visit[x] methods offered by the ExpressionVisitor type. As I stated in the first article of this series, a large number of new expression types were added to the .NET Framework with 4.0.

When creating our ExpressionComparison class, we overrode many of these methods, but only the ones that held some bearing on the actual shape of the expression as far as an equality check was concerned. The same applies to our hash code calculator: it visits many of the same parts as the equality comparer, with some differences; namely, some overrides found in ExpressionComparison are not found in ExpressionHashCodeCalculator, and vice versa.

The reason for not overriding a particular virtual method typically boils down to the node type offering no properties from which to grab hash codes that aren’t already covered by another, more centrally invoked override.

For example, one virtual method not overridden is the VisitBlock method. This method accepts a single BlockExpression typed parameter. If we look at the BlockExpression type, we’ll notice that it offers no additional properties beyond those offered by the base Expression type. Well…at least except for the Result property. But even the presence of this property is not cause enough for us to override the method, the reason being that the Result property itself (which is an Expression) is visited by the base VisitBlock implementation, and therefore would end up being visited by another block of our code anyway.

There are a few more methods not included, but I’ll leave them as an exercise for the reader.

If anyone finds any types of expressions that the above code does not account for, I’d appreciate your input. When constructing this code, however, I tried to be fairly exhaustive in my efforts.

 

This article is meant to serve as a reference for a particular set of functions that may be present in code snippets found in subsequent articles.

Every so often one may find themselves tasked with writing hash code generation algorithms for a particular type of object. This sort of requirement typically arises whenever we’re authoring either a value type or an implementation of an interface which requires such functionality (e.g. IEqualityComparer<T>).

While the way hash codes end up being calculated tends to differ between object types, hash code generation mechanisms can only be considered proper if the following requisites are met:

  1. If two objects are deemed equal, then the hash code generation mechanism should yield an identical value for each object.
  2. Given an instance of an object, its hash code should never change (thus using mutable objects as hash keys and actually mutating them is generally a bad idea).
  3. Although it is acceptable for the same hash code to be generated for object instances which are not equal, the way in which the hash code is calculated should be such that these kinds of collisions are as infrequent as possible.
  4. Calculation of the hash code should not be an expensive endeavor.
  5. The generation mechanism should never throw an exception.

Naturally, it makes sense to abstract the steps involved in satisfying such requirements into an independent function which can be used by any kind of object. Unfortunately, for the most part, it simply isn’t possible to guarantee the satisfaction of points #1 and #2 with common code. We can, however, build something that contributes towards the success of #3.

To do this, we can create a set of functions that calculate a hash code based on the property values provided to them, with each function differing in the number of properties that they accept. A static helper class could serve as a home for these functions, but since I like to avoid static helper classes whenever possible, I thought it prudent to craft them as extension methods instead, such as the one shown below:

GetHashCode<TFirstProperty,TSecondProperty>
/// <summary>
/// Calculates the hash code for an object using two of the object's properties.
/// </summary>
/// <param name="value">The object we're creating the hash code for.</param>
/// <typeparam name="TFirstProperty">The type of the first property the hash is based on.</typeparam>
/// <typeparam name="TSecondProperty">The type of the second property the hash is based on.</typeparam>
/// <param name="firstProperty">The first property of the object to use in calculating the hash.</param>
/// <param name="secondProperty">The second property of the object to use in calculating the hash.</param>
/// <returns>
/// A hash code for <c>value</c> based on the values of the provided properties.
/// </returns>
/// ReSharper disable UnusedParameter.Global
public static int GetHashCode<TFirstProperty,TSecondProperty>([NotNull]this object value,
                                                              TFirstProperty firstProperty,
                                                              TSecondProperty secondProperty)
{
    unchecked
    {
        int hash = 17;

        if (!firstProperty.IsNull())
            hash = hash * 23 + firstProperty.GetHashCode();
        if (!secondProperty.IsNull())
            hash = hash * 23 + secondProperty.GetHashCode();

        return hash;
    }
}

Note: IsNull is another extension method that properly deals with checking if an instance is null when we don’t know whether we’re dealing with a value or reference type.

We’re using two prime numbers (17 and 23) for our seeds to aid in the effort of reducing collisions. It can be argued that they are not optimal; however, that would most likely end up being a weak argument, and such a discussion is outside the scope of this article. The values originate from other sources out there that address the issue of collisions (e.g. stackoverflow.com). We have the code inside an unchecked block so that overflows do not result in exceptions.

Again, this could easily be placed into a helper class instead; regardless, I wanted to tap into the portability offered by extension methods. Also, given that all objects have a GetHashCode method, I don’t view the extension of the object type in this manner as harmful, even if we aren’t actually using the source object itself.
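To illustrate typical usage, here’s a minimal sketch of a hypothetical type calling the extension from within its own GetHashCode override:

public sealed class MapCoordinate
{
    public MapCoordinate(int latitude, int longitude)
    {
        Latitude = latitude;
        Longitude = longitude;
    }

    public int Latitude
    { get; private set; }

    public int Longitude
    { get; private set; }

    public override int GetHashCode()
    {
        // Resolves to the two-property extension method shown above.
        return this.GetHashCode(Latitude, Longitude);
    }
}

Naturally, a type doing this should also override Equals in a manner consistent with the properties being hashed.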

I have about six of these methods, with the number of parameters accepted ranging from one to six. All of this is neither earth shattering nor rocket science. As I stated at the beginning, I’m mainly sharing this with you, the reader, so I can refer to this if questions arise from their usage in articles subsequent to this one.

 

As I have stated elsewhere on this site, it is my intent to limit the scope of the articles I write to areas that fall within my field of expertise. Once again, however, I desire to break away from my usual routine to cover another legislative issue making the rounds in the current events sphere; namely, the bill recently passed by Congress to avert the “fiscal cliff” (H.R. 8).

Whenever a great deal of press is being generated by a piece of legislation, I always find it most informative and therapeutic to ignore most of the chatter and read the raw text of the bill myself. Obviously, doing so allows you to reach your own conclusions regarding the item(s) at hand.

If the first paragraph has not made this clear to you already: know that I am neither a lawyer nor an economist.

Objective and Scope

My intention at the outset of analyzing the newly made law was to determine exactly how the “fiscal cliff” was averted. I had studied this issue before, when the Budget Control Act of 2011 was passed, in order to understand the inner workings of the sequestration mechanism, and had gained an adequate understanding of it. Thus, as soon as H.R. 8 was passed on January 2nd, I wanted to find out answers to the following questions:

  1. Were the automatic sequestration cuts avoided by reaching the budget goal within the parameters set forth in the Budget Control Act of 2011?
  2. Were the automatic sequestration cuts avoided simply by amending the nature of the budget goal’s enforcement?
  3. How does H.R. 8 impact the long term deficit reduction goal?

Analysis

The specific piece of legislation here is titled the American Taxpayer Relief Act of 2012. It is a large document; however, only a small portion of it is relevant to the objective at hand. Specifically, we’re going to be taking a look at the majority of Section 901, which makes modifications to various aspects of the sequestration mechanism.

It is an amendment to the same section in the Balanced Budget and Emergency Deficit Control Act of 1985 that was amended by the Budget Control Act of 2011 in order to (in part) create the much talked about automatic deficit reductions. Of course, because it is an amendment, simply reading it in isolation doesn’t tell you much, as you’ll immediately see.

Total Deficit Reduction Calculation Changes

The first paragraph we’ll be poring over is Section 901(a), which makes modifications to the enforcement mechanism of the automatic sequestration cuts.

Section 901(a) of the American Taxpayer Relief Act of 2012
(a) Adjustment- Section 251A(3) of the Balanced Budget and Emergency Deficit Control Act of 1985 is amended--
 (1) in subparagraph (C), by striking `and' after the semicolon;
 (2) in subparagraph (D), by striking the period and inserting` ; and'; and
 (3) by inserting at the end the following:
 `(E) for fiscal year 2013, reducing the amount calculated under subparagraphs (A) through (D) by $24,000,000,000.'.

Hmm, although incremental changelogs are swell, this isn’t very helpful without seeing the text that’s being amended. Remember, Section 251A didn’t even exist in the Balanced Budget and Emergency Deficit Control Act of 1985 until the Budget Control Act of 2011 was passed. Let’s take a look at the effective text of Section 251A(3) of the BCA as it existed before this most recent law, then:

Section 251A(3) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 302(a) of the Budget Control Act of 2011
(3) CALCULATION OF TOTAL DEFICIT REDUCTION- OMB shall calculate the amount of the deficit reduction required by this section for each of fiscal years 2013 through 2021 by--
 (A) starting with $1,200,000,000,000;
 (B) subtracting the amount of deficit reduction achieved by the enactment of a joint committee bill, as provided in section 401(b)(3)(B)(i)(II) of the Budget Control Act of 2011;
 (C) reducing the difference by 18 percent to account for debt service; and
 (D) dividing the result by 9.

So, because of the amendment put in place by the American Taxpayer Relief Act, this text now looks like:

Section 251A(3) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 901(a) of the American Taxpayer Relief Act of 2012
(3) CALCULATION OF TOTAL DEFICIT REDUCTION- OMB shall calculate the amount of the deficit reduction required by this section for each of fiscal years 2013 through 2021 by--
 (A) starting with $1,200,000,000,000;
 (B) subtracting the amount of deficit reduction achieved by the enactment of a joint committee bill, as provided in section 401(b)(3)(B)(i)(II) of the Budget Control Act of 2011;
 (C) reducing the difference by 18 percent to account for debt service;
 (D) dividing the result by 9; and
 (E) for fiscal year 2013, reducing the amount calculated under subparagraphs (A) through (D) by $24,000,000,000.

Alright. So, the total deficit reduction prior to this most recent law being passed would have been equal to 82% of the difference between $1,200,000,000,000 and an unknown amount which we have to calculate, all divided by 9 (which I assume refers to the time frame of this deficit reduction plan). However, with the American Taxpayer Relief Act of 2012, that value is being reduced by $24,000,000,000 for just this year.

In order to figure out the significance of the $24,000,000,000, we’ll need to fill in the rest of this little puzzle here by taking a look at Section 401(b)(3)(B)(i)(II).

See? Isn’t reading the law fun? It’s like a bunch of GOTO statements that control your life!

Section 401(b)(3)(B) of the Budget Control Act of 2011
(B) REPORT, RECOMMENDATIONS, AND LEGISLATIVE LANGUAGE-
 (i) IN GENERAL- Not later than November 23, 2011, the joint committee shall vote on--
 (I) a report that contains a detailed statement of the findings, conclusions, and recommendations of the joint committee and the estimate of the Congressional Budget Office required by paragraph (5)(D)(ii); and
 (II) proposed legislative language to carry out such recommendations as described in subclause (I), which shall include a statement of the deficit reduction achieved by the legislation over the period of fiscal years 2012 to 2021.

Ah yes, we can see the Joint Select Committee on Deficit Reduction being referred to in subclause (II). Well, as we all know, the committee failed to reach an agreement and was terminated January 31st, 2012. So, I guess that value comes out to be “0”.

Using this knowledge, we now know that the previously required deficit reduction for fiscal year 2013 was in the ballpark of $109,333,333,333. With the recent law being passed, that figure has dropped to around $85,333,333,333.
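For those wanting the arithmetic spelled out, with the joint committee having achieved $0 in reductions:

($1,200,000,000,000 - $0) × 0.82 ÷ 9 ≈ $109,333,333,333
$109,333,333,333 - $24,000,000,000 ≈ $85,333,333,333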

So we now know that the part of the law which essentially both triggered and determined the amount of the automatic cuts was changed to be “less severe”. While interesting, this does not, at least by itself, tell us how the “fiscal cliff” was averted. So we read on.

Postponement of the Sequestration Date

The next part of Section 901 is purely temporal in nature.

Section 901(b) and 901(c) of the American Taxpayer Relief Act of 2012
(b) After Session Sequester- Notwithstanding any other provision of law, the fiscal year 2013 spending reductions required by section 251(a)(1) of the Balanced Budget and Emergency Deficit Control Act of 1985 shall be evaluated and implemented on March 27, 2013.
(c) Postponement of Budget Control Act Sequester for Fiscal Year 2013- Section 251A of the Balanced Budget and Emergency Deficit Control Act of 1985 is amended--
 (1) in paragraph (4), by striking `January 2, 2013' and inserting `March 1, 2013'; and
 (2) in paragraph (7)(A), by striking `January 2, 2013' and inserting `March 1, 2013'.

Although nothing in the above paragraphs explicitly states that the point in time at which the automatic cuts would occur has been delayed, the amending of dates in the law would certainly seem to indicate so. The first paragraph mentions the date March 27th, 2013, and the second paragraph throws March 1st, 2013 at us.

The former is applied “globally”, whereas the latter replaces “January 2, 2013” in the following texts:

Section 251A(4) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 302(a) of the Budget Control Act of 2011
(4) ALLOCATION TO FUNCTIONS.—On January 2, 2013, for
 fiscal year 2013, and in its sequestration preview report for
 fiscal years 2014 through 2021 pursuant to section 254(c), OMB
 shall allocate half of the total reduction calculated pursuant
 to paragraph (3) for that year to discretionary appropriations
 and direct spending accounts within function 050 (defense function)
 and half to accounts in all other functions (nondefense
 functions).
Section 251A(7)(A) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 302(a) of the Budget Control Act of 2011
(7) IMPLEMENTING DISCRETIONARY REDUCTIONS.—
 (A) FISCAL YEAR 2013.—On January 2, 2013, for fiscal
 year 2013, OMB shall calculate and the President shall
 order a sequestration, effective upon issuance and under
 the procedures set forth in section 253(f), to reduce each
 account within the security category or nonsecurity category
 by a dollar amount calculated by multiplying the
 baseline level of budgetary resources in that account at
 that time by a uniform percentage necessary to achieve—

So it is now quite apparent that the new law has changed the time at which the automatic cuts would occur for this year (and this year alone). While the significance of throwing March 27th, 2013 into the mix is unclear to me, it is very clear that the “fiscal cliff” has not been completely averted; rather, the point in time at which its true onus could be felt has merely been moved to March 1st, 2013.

Instead of happening now, the cuts, if they do happen, would occur on the aforementioned date. This would seem to further indicate that some additional legislation is required in order to avoid the automatic cuts; something which, in the political landscape today, is hardly guaranteed.

One Year Stagnation of Limits

Moving on, we see some adjustments being made to the discretionary appropriation limits for the current and ensuing fiscal years.

Section 901(d) of the American Taxpayer Relief Act of 2012
(d) Additional Adjustments-
 (1) SECTION 251- Paragraphs (2) and (3) of section 251(c) of the Balanced Budget and Emergency Deficit Control Act of 1985 are amended to read as follows:
 `(2) for fiscal year 2013--
 `(A) for the security category, as defined in section 250(c)(4)(B), $684,000,000,000 in budget authority; and
 `(B) for the nonsecurity category, as defined in section 250(c)(4)(A), $359,000,000,000 in budget authority;
 `(3) for fiscal year 2014--
 `(A) for the security category, $552,000,000,000 in budget authority; and
 `(B) for the nonsecurity category, $506,000,000,000 in budget authority;'.

You can find the dollar amounts being replaced by referring to the Budget Control Act of 2011, which was responsible for adding Section 251(c) to the 1985 bill.

Section 251(c) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by the Budget Control Act of 2011
(c) DISCRETIONARY SPENDING LIMIT.—As used in this part,
 the term ‘discretionary spending limit’ means—
 (1) with respect to fiscal year 2012—
   (A) for the security category, $684,000,000,000 in new
budget authority; and
   (B) for the nonsecurity category, $359,000,000,000 in
new budget authority;
 (2) with respect to fiscal year 2013—
   (A) for the security category, $686,000,000,000 in new
budget authority; and
   (B) for the nonsecurity category, $361,000,000,000 in
new budget authority;

The most recent amendment effectively renders the discretionary spending limit in fiscal year 2013 identical to the limit in effect for 2012. I cannot offer any insight as to the “why” behind this; however, I can add that the spending limit, in the original text of the 2011 act, was meant to be increased each year through 2021.

It also defines the limits for the individual categories for fiscal year 2014; these were previously left as a sum total in the 2011 act. They appear to further reflect a shrinking military and a growing amount of spending on everything else.

Voodoo Magic

Finally, the last bit of the recently passed American Taxpayer Relief Act of 2012 that I’ll cover deals with additional adjustments to the limits, made in a very interesting way.

Section 901(e) of the American Taxpayer Relief Act of 2012
(e) 2013 Sequester- On March 1, 2013, the President shall order a sequestration for fiscal year 2013 pursuant to section 251A of the Balanced Budget and Emergency Deficit Control Act of 1985, as amended by this section, pursuant to which, only for the purposes of the calculation in sections 251A(5)(A), 251A(6)(A), and 251A(7)(A), section 251(c)(2) shall be applied as if it read as follows:
 `(2) For fiscal year 2013--
 `(A) for the security category, $544,000,000,000 in budget authority; and
 `(B) for the nonsecurity category, $499,000,000,000 in budget authority;'.

A bit of voodoo going on here. The numbers above reflect the amount of spending that would be safe from being automatically cut in both the “nonsecurity category” and “security category”. The “security category” refers to all discretionary appropriations in budget function 050 (National Defense), whereas the “nonsecurity category” is every other category.

Even though the limits effective for 2013 now reflect the amendments made to Section 251(c), this last amendment instructs us to ignore those limits for several types of calculations. Instead of $684 billion for defense spending, $544 billion is now the limit. Regarding everything else, there is now a $499 billion limit instead of $361 billion. This seems to be a rather drastic reduction in the ratio of defense spending to all other spending.

What to Take Away

One would do well to draw their own conclusions from the above data. For myself, I felt that my questions were, for the most part, answered, and that I gained the following insights:

  1. The automatic cuts were not averted under the original parameters of the 2011 act.
  2. The automatic cuts were averted, for now, simply by delaying the date at which they would occur.
  3. The amount of deficit reduction required by the sequester for fiscal year 2013 is now around $85,333,333,333, or around a 22% reduction from what it previously was.
  4. Given that the required amount of cuts has been reduced by 22% for this fiscal year, it does not seem that H.R. 8 helps the overall deficit picture.
  5. The discretionary spending limits were amended to reflect the limits of last year in an attempt, I believe, to make up “lost ground” in the effort of reducing the deficit long term, although it would be preferable to have an official answer behind the adjustments as opposed to mere assumptions.
  6. The just mentioned spending limit adjustments, however, do not seem to actually reflect the effective limits, which are put in place by the magic of Section 901(e). The limits mentioned there seem to be the ones used in all the important calculations; I’m unsure exactly where they are not used.
  7. The limits mentioned in Section 901(e) are somewhat concerning, both in the manner in which they’re introduced and in the large cut being made to defense spending alongside the large allowance being granted to everything else.

So, we’ve covered many things, and just in matters concerning the sequestration mechanism itself. I’m sure there are many other conclusions one could derive from the other portions of the law as well, but the strongest impression I’m left with is that incentivizing lawmakers to behave a certain way in order to avoid punishments meted out by the law, the very thing they malleate day to day, appears to be a pointless exercise.

 

I’ve been swimming in a sea of COM Interop lately. Some time ago, I wrote an article that had some important tidbits regarding the nature of Runtime Callable Wrappers, or RCWs.

I’d thought I’d bring to the surface one of the more important questions I answered in that article, specifically: What actions, when executed, incur an increment to an RCW’s reference count?

RCW Reference Counting Rules

When dealing with COM Interop, we find ourselves no longer in the safe confines provided by the .NET garbage collector. Sure, garbage collection will still occur eventually; however, great care must be taken if our software both makes use of various COM types and is expected to operate stably.

If our code happens to be making use of (or being made use of by) something like a COM automation server, where perhaps our product is a minor act in a larger show, this becomes even more important.

The following is a listing of the actions that DO cause the reference count on RCWs to be incremented:

  1. Requesting a COM Object from Another COM Object.
    1. Whenever we seek treasures from COM-land, the reference counting gods take note and record it in their books. This applies to objects returned by both methods and properties.
    2. The following are several examples of the reference count being incremented by calls to methods and properties (we’ll be using Outlook’s object model for these examples):
      Explorer first = Application.ActiveExplorer();
      Explorer second = Application.ActiveExplorer();
      
      Inspectors inspectors = Application.Inspectors;
      
    3. If I put one of these Explorer instances through Marshal.ReleaseComObject, you should expect to get a return value of 1, indicating one reference yet remains. And, in regards to the Outlook Object Model, I know that this will be the case.
    4. However, this will not necessarily occur with any other type library. Some COM objects, depending on their own internal design, will appear to allocate new memory each time you access a property. When this is the case, the .NET runtime will think that it is dealing with a never-before-seen COM object, and will generate a new RCW for it.
      1. What is most likely happening under the hood here is that the COM object, when returning some additional COM data, is actually returning a proxy to that data. When a type happens to do this, you will end up with multiple RCWs (indirectly) pointing to a single location in memory. When the RCW is released, it’ll clean up the proxy, but not until all of those are gone will the actual COM instance living in memory go bub-bye.
      2. Unfortunately, this revelation has ill tidings for the use of Marshal.FinalReleaseComObject. You cannot be assured you are actually releasing a COM object simply by using this method unless you know that the Interop you’re dealing with behaves in a manner allowing such a thing. This should lend credence to the idea that vigilance must be maintained when dealing with COM Interop (a small release sketch follows this list).
  2. Handling Events Originating from a COM Object
    1. If you have an event handling method subscribed to an event published by a COM object, then any COM type parameters will have their reference counts incremented when that handler is called.
    2. The following is an example of such an event handler:
      private void HandleBeforeFolderSwitch(object newFolder, ref bool cancel)
      {
        .
        .
        .
      }
      
    3. The above event handler handles the Explorer.BeforeFolderSwitch event. Although the first parameter is (for some reason) an object type, it is actually a Folder type. Because your event handler should only ever be called from COM-land itself, you should treat these objects as newly created or recently reference-count-incremented RCWs.
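To tie rule #1 together, here’s a minimal release sketch (assuming, as in the earlier examples, an Outlook Application instance is in scope; keep the proxy caveat above in mind before trusting the count returned by Marshal.ReleaseComObject):

Explorer explorer = null;

try
{
    explorer = Application.ActiveExplorer();

    // ...do work with the explorer...
}
finally
{
    // Undo the increment incurred by requesting the Explorer from COM-land.
    if (explorer != null)
        Marshal.ReleaseComObject(explorer);
}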

The following is a listing of the actions that DO NOT cause the reference count on RCWs to be incremented (obviously not exhaustive):

  1. Casting a COM Instance to a Different Interface Type
    1. If an object implements several COM interfaces, and you cast it from one to another, your RCW will not increment its count.
    2. Indeed, you can be sure that when you do make the cast, what you are getting back is the very same RCW, without any count having been incremented.
    3. Here’s an example of such a cast, this time using Redemption’s object model:
      RDOMail message = session.GetMessageFromID(entryId, storeId);
      RDOAppointmentItem appointment = (RDOAppointmentItem) message;
      
    4. If you look at both of the RDOMail and RDOAppointmentItem objects, you will see that they point to the same place in memory.
    5. If you pass one of the two items to a Marshal.ReleaseComObject call, you will also see that a 0 will be returned, indicating that there was only ever a reference count of one.
  2. Moving the COM Instance Around in your Managed Code
    1. This is an important one to know for the paranoid out there — and probably the one that most find themselves thinking about.
    2. Passing an instance of a COM type to another managed method does not increment its reference count.
    3. Storing a reference to the COM instance in a class member, stuffing it into a collection, etc., does not increment its reference count.
 

When working with .NET, it is lovely when everything we’re interacting with is rooted in .NET code; however, sometimes we need functionalities that can only be found in one or more COM types.

Something that should cross our minds when dealing with COM types is what the implications may be for deployment efforts. Depending on the specific COM type, its type library may not be a stock component of a Windows installation. If that’s the case (and sometimes, regardless of the case), we should all be prepared for the possibility of the following occurring when instantiating one of the COM types:

COMException: Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))

What Not to Do

Don’t let this exception get thrown without handling it (and I mean actually handling it, not just logging its occurrence and re-throwing it or some such).

Nothing is more confusing to your end users than COM-related errors (they tend to evoke enough confusion among developers themselves).

If you attempt to handle it and fail, then you will probably have to bubble up the exception (albeit without the HRESULT part; surely we can think of something more readable). But…at least you tried!

What to Do

First and foremost, and slightly off-topic, you can protect your product from almost ever encountering one of these errors by following the proper procedures during its deployment. When installing .NET components that are dependent on COM types, your installer should register those COM types, unless, for reasons legal or otherwise, you aren’t installing the COM libraries your product is dependent on.

Your installer should not register those COM types by making use of the self-registration capabilities exported by the COM libraries themselves (we will only be tapping into that when handling the above exception at run-time). Rather, we should know beforehand what entries need to be added into the registry, and have our installer manually do that. This is ancient wisdom shared among Those Who Know, and I shall leave it up to the reader to figure out how one goes about doing this.

If your .NET program is making use of one or more COM types, you should prepare for the possibility of the above exception occurring and go about solving it in the most direct manner: by registering the type!

You should do this even if you are protecting yourself during installation as described above, and even if the type originates from a library that comes standard on most machines. I don’t have any details for specific cases on hand, but I’ve seen Windows Update (or some other unknown externality) knock the registration out of some important (though, I suppose, not critically important) COM types among end users of one product I deal with.
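As a rough sketch of what handling it might look like (SomeComType and RegisterComLibrary are hypothetical placeholders; building the actual registration machinery is what the rest of this article is about):

const int REGDB_E_CLASSNOTREG = unchecked((int) 0x80040154);

SomeComType instance; // hypothetical COM type from an imported type library

try
{
    instance = new SomeComType();
}
catch (COMException comEx)
{
    if (comEx.ErrorCode != REGDB_E_CLASSNOTREG)
        throw;

    // Register the library the type lives in (hypothetical path), then try again.
    RegisterComLibrary(@"C:\Program Files\SomeVendor\SomeComLibrary.dll");

    instance = new SomeComType();
}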

There are two ways to go about registering the COM type:

  1. By creating a regsvr32.exe process with arguments supplied pointing it to the COM DLL file
  2. By doing what regsvr32.exe does with our own code

Clearly #2 is the winner here; I’d have to be completely out of other options before ever doing anything that requires the creation of a separate process.

If we’re going to do #2 though, we need to know a bit about how COM registration works. If you’ve ever developed your own COM library, you’re probably quite familiar with the process. But for everyone else, we’ll do a brief overview.

COM Registration in a Nutshell

There are many ways that the registration of COM type libraries can occur; however, we will be focusing on a specific aspect of COM registration: self-registration.

Most COM modules exhibit the capability of registering themselves. This is what regsvr32.exe taps into when doing its thing.

A COM module that can self-register exports a number of functions, one of them being DllRegisterServer. We need to import this function and execute it. Therefore, we need to design a .NET type that can do the following:

  1. Load the COM module given a file name or path using LoadLibrary
  2. Import the exported DllRegisterServer function using GetProcAddress
  3. Execute the imported function.
  4. Handle the result returned from doing #3.
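To make those steps concrete, here’s a minimal sketch of what the core of such a type might do; it leans on the NativeMethods imports and LibraryHandle type introduced shortly, and the usual interop namespaces are assumed:

[UnmanagedFunctionPointer(CallingConvention.StdCall)]
internal delegate int DllRegisterServerDelegate();

internal static class ComRegistrationSketch
{
    public static void RegisterServer(string comModulePath)
    {
        // 1. Load the COM module.
        using (LibraryHandle module = NativeMethods.LoadLibrary(comModulePath))
        {
            if (module.IsInvalid)
                throw new Win32Exception(Marshal.GetLastWin32Error());

            // 2. Import the exported DllRegisterServer function.
            IntPtr pDllRegisterServer = NativeMethods.GetProcAddress(module, "DllRegisterServer");

            if (IntPtr.Zero == pDllRegisterServer)
                throw new Win32Exception(Marshal.GetLastWin32Error());

            // 3. Execute the imported function.
            var dllRegisterServer = (DllRegisterServerDelegate)
                Marshal.GetDelegateForFunctionPointer(pDllRegisterServer, typeof(DllRegisterServerDelegate));

            int hr = dllRegisterServer();

            // 4. Handle the result: anything other than S_OK means registration failed.
            if (hr != 0)
                Marshal.ThrowExceptionForHR(hr);
        }
    }
}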

The rest of this article details the creation of such a class.

P/Invoke Imports

To accomplish what we’re looking to do, there are a number of unmanaged functions we need to make use of. Being the good developers that we are, we will be adding all these imports to our NativeMethods class.

First off, we will need some way to load the COM libraries, thus we need to import the LoadLibrary function:

/// <summary>
/// Loads the specified module into the address space of the calling process.
/// </summary>
/// <param name="fileName">The name of the module.</param>
/// <returns>If successful, a handle pointing to the loaded module; otherwise, an invalid handle.</returns>
[DllImport("kernel32", CharSet=CharSet.Unicode, SetLastError=true)]
internal static extern LibraryHandle LoadLibrary(string fileName);

 

LoadLibrary in NativeMethods.cs

Conversely, we’ll need a way to unload these modules, thus we need to import the FreeLibrary function:

/// <summary>
/// Frees the loaded dynamic-link library module and, if necessary, decrements its reference count.
/// </summary>
/// <param name="hModule">A handle to the loaded library module.</param>
/// <returns>If successful, a nonzero value; otherwise, zero.</returns>
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
[DllImport("kernel32", SetLastError=true)]
[return: MarshalAs(UnmanagedType.Bool)]
internal static extern bool FreeLibrary(IntPtr hModule);

FreeLibrary in NativeMethods.cs

Next up, we need to be able to import the exported DllRegisterServer function, thus we need to import the GetProcAddress function:

/// <summary>
/// Retrieves the address of an exported function or variable from the specified dynamic-link library.
/// </summary>
/// <param name="hModule">A handle to the loaded library module.</param>
/// <param name="lpProcName">The function name, variable name, or the function's ordinal value.</param>
/// <returns>If successful, the address of the exported function is returned; otherwise, a null pointer.</returns>
[DllImport("kernel32", CharSet=CharSet.Ansi, ExactSpelling=true, BestFitMapping=false, ThrowOnUnmappableChar=true, SetLastError=true)]
internal static extern IntPtr GetProcAddress(LibraryHandle hModule, [MarshalAs(UnmanagedType.LPStr)] string lpProcName);

GetProcAddress in NativeMethods.cs

You may have noticed a type that does not exist in your environment, LibraryHandle, which brings us to the next part of this article…

Level-0 Type: LibraryHandle

For our objective, we need to define a new level-0 type. If you don’t understand the meaning of the term level-0 type, then I suggest getting acquainted with the following article.

The level-0 type in particular that we need to define is one that will safely keep ahold of our loaded module handles. Also, as we should all know by now, these module handles need to be cleaned up by executing the FreeLibrary function.

The following is the definition of our type (and note, I am not adhering to the standard Microsoft uses where they prefix all these sorts of types with the word Safe. Frankly, although it may make sense for Microsoft, I would find it ridiculous to be using it myself):

/// <summary>
/// Provides a level-0 type for library module handles.
/// </summary>
internal sealed class LibraryHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    /// <summary>
    /// Initializes a new instance of the <see cref="LibraryHandle"/> class.
    /// </summary>
    private LibraryHandle()
        : base(true)
    { }

    /// <inheritdoc/>
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    protected override bool ReleaseHandle()
    {
        return NativeMethods.FreeLibrary(handle);
    }
}

LibraryHandle.cs

Level-1 Type: COMServer

With everything in place, we can now implement our COM registration utility. Note that the following code makes use of some in-house types and extension methods I’ve made; I haven’t the heart to change them, and they should be easy enough to figure out a substitute for:

/// <summary>
/// Provides an in-process server that provides COM interface implementations to clients.
/// </summary>
/// <remarks>
/// <para>
/// This class, as it is currently implemented, mainly handles the registration responsibilities of a COM
/// server, and thusly, in a sense, functions as a level-1 type for COM library handles.
/// </para>
/// <para>
    /// While this class provides COM server functionality, it is obviously not an actual COM server (i.e. this
/// class will never handle calls made to <c>QueryInterface</c>).
/// </para>
/// </remarks>
[DisposableObject]
public sealed class COMServer : IDisposable
{
    /// <summary>
    /// Instructs an in-process server to create its registry entries for all classes supported in a server
    /// module.
    /// </summary>
    /// <returns>The result of the operation.</returns>
    internal delegate ResultHandle DllRegisterServer();

    private readonly LibraryHandle _libraryHandle;

    /// <summary>
    /// Initializes a new instance of the <see cref="COMServer"/> class.
    /// </summary>
    /// <param name="libraryHandle">A handle to the COM library to provision.</param>
    private COMServer(LibraryHandle libraryHandle)
    {
        _libraryHandle = libraryHandle;
    }

    /// <summary>
    /// Creates an in-process server that will provide the COM interface implementations found in the
    /// specified COM library to the client.
    /// </summary>
    /// <param name="fileName">The name or path of the COM library module to load.</param>
    /// <returns>
    /// A <see cref="COMServer"/> instance representing a provisioning COM server of the library
    /// specified by <c>fileName</c>.
    /// </returns>
    public static COMServer Create([NotNullOrEmpty]string fileName)
    {
        LibraryHandle libraryHandle = NativeMethods.LoadLibrary(fileName);

        if (libraryHandle.IsInvalid)
        {
            int lastError = Marshal.GetLastWin32Error();

            switch(lastError)
            {
                case (int)ErrorCode.FileNotFound:
                    throw new FileNotFoundException(Strings.COMFileNotFound, fileName);
                case (int)ErrorCode.InvalidName:
                    throw new ArgumentException(Strings.COMInvalidSyntaxFormat.InvariantFormat(fileName),
                                                "fileName");
                case (int)ErrorCode.ModuleNotFound:
                    throw new ArgumentException(Strings.COMModuleNotFoundFormat.InvariantFormat(fileName),
                                                "fileName");
                default:
                    throw Marshal.GetExceptionForHR(Marshal.GetHRForLastWin32Error());
            }
        }

        return new COMServer(libraryHandle);
    }

    /// <summary>
    /// Instructs the server to create registry entries for all the classes it has loaded.
    /// </summary>
    public void Register()
    {
        DllRegisterServer registerServer = ImportRegistrationServer();

        ResultHandle hResult = registerServer();

        ValidateRegistration(hResult);
    }

    /// <inheritdoc/>
    public void Dispose()
    {
        if (!_libraryHandle.IsInvalid)
            _libraryHandle.Dispose();
    }

    /// <summary>
    /// Validates that registration succeeded, otherwise an exception is thrown.
    /// </summary>
    /// <param name="hResult">The result of the registration operation.</param>
    private static void ValidateRegistration(ResultHandle hResult)
    {
        if (hResult.Successful())
            return;

        string message;
        switch (hResult)
        {
            case ResultHandle.TypeLibraryRegistrationFailure:
                message = Strings.COMTypeLibRegistration;
                break;
            case ResultHandle.ObjectClassRegistrationFailure:
                message = Strings.COMObjectClassRegistration;
                break;
            default:
                message = Strings.COMInternalErrorFormat.InvariantFormat(hResult);
                break;
        }

        throw new Win32Exception(message, hResult.GetException());
    }

    /// <summary>
    /// Imports the exported registration procedure from the loaded COM library.
    /// </summary>
    /// <returns>
    /// The <see cref="DllRegisterServer"/> registration procedure exported by the loaded COM library.
    /// </returns>
    private DllRegisterServer ImportRegistrationServer()
    {
        IntPtr registrationAddress = NativeMethods.GetProcAddress(_libraryHandle, "DllRegisterServer");

        if (IntPtr.Zero == registrationAddress)
        {
            int lastError = Marshal.GetLastWin32Error();

            switch (lastError)
            {
                case (int)ErrorCode.ProcedureNotFound:
                    throw new InvalidOperationException(Strings.COMNoRegistration);
                default:
                    throw Marshal.GetExceptionForHR(Marshal.GetHRForLastWin32Error());
            }
        }

        return (DllRegisterServer) Marshal.GetDelegateForFunctionPointer(registrationAddress,
                                                                         typeof (DllRegisterServer));
    }
}

COMServer.cs

To make use of our new registration utility, we would do something like the following:

using (COMServer server = COMServer.Create("NameOfCOMLibrary.dll"))
{
    server.Register();
}

Use of COMServer
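
Tying this back to the original problem: when creation of a COM object fails because its class is not registered, we can catch the failure, register the library, and try again. The following is only a sketch of that idea; DoWorkWithComType stands in for whatever code of yours actually creates and uses the COM type, and the library name is the same placeholder used above.

const int REGDB_E_CLASSNOTREG = unchecked((int) 0x80040154);

try
{
    DoWorkWithComType();
}
catch (COMException comException)
{
    // Only attempt a repair if the failure was due to a missing class registration.
    if (comException.ErrorCode != REGDB_E_CLASSNOTREG)
        throw;

    using (COMServer server = COMServer.Create("NameOfCOMLibrary.dll"))
    {
        server.Register();
    }

    // Register() throws if registration fails, so if we make it here, it is safe to try again.
    DoWorkWithComType();
}

Repairing a Missing Registration at Run-Time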

Enjoy!

 

DataTemplates are essential tools provided by WPF that allow us to describe the visual structure of any type of object.

When declaring a DataTemplate, one typically is doing so with the intention of targeting a specific type (or valid subtype) of object. This is done by setting the DataType property of the DataTemplate to the Type of object we’re targeting.

Because the data templating system matches templates not only against instances of the specified type, but against derived types in the specified type’s hierarchy, DataTemplates are an easy way to visually shape large swathes of your own types of interest…

…As Long as the DataType Isn’t an Interface

WPF’s data templating system does not support the explicit targeting of interface types. If you mean to target a “base type” from which many other types descend, then you will have to make do with a standard object type; an abstract class is the closest thing to an interface that we’re allowed to target with DataTemplates.

This may be received by some as saddening news, as many legitimate object hierarchies exist out there whose members only share a specific interface in common. This limitation of WPF would then force an individual working with such an object hierarchy to create a base type (either to implement or wholly replace the interface) solely to get WPF to play nice with their model.

No one likes to be made to do something seemingly arbitrary and without true purpose; fortunately for us, there is a way we can add interface support to DataTemplates. Before we get into that, however, let’s think as to why WPF’s data templating system lacks support for interfaces.

The lack of interface support was the result of a call made by the early developers of the framework, and I believe it was the right one. When designing a framework like the greater .NET Framework or WPF, one must design with the intent of producing a framework which behaves in an expected and stable manner. Adding support for a typical object’s type hierarchy is a straightforward objective, as the type hierarchy will consist of a very well-defined order of types that can easily be traversed.

There are too many potential caveats when one starts talking about adding support for interfaces, mainly due to the multiple inheritance angle of interfaces; you can easily get into situations where an object implements two interfaces which are also targeted by two separate DataTemplates.

Luckily for us, we aren’t designing a framework intended for mass consumption, so we can implement support with full awareness of what not to do in conjunction with its use.

Adding Interface Support with a DataTemplateSelector

By creating a DataTemplateSelector and using it, we can effectively define data templates which target interfaces as opposed to only standard objects.

Most of the time when we create a DataTemplateSelector, we design it so it can be equipped with a finite number of different DataTemplates to be doled out at run time based on some business-specific logic. This selector is a bit different, as it is meant to offer an alternative way of selecting templates in general; thus, it must use all resources found in the tree of the container, as well as in the current application’s resources, as its source for DataTemplates.

This sort of activity should certainly seem to you to be one that carries potentially negative consequences for performance, as combing through the entire set of loaded resources can indeed be an intensive task. In order to alleviate these concerns, the wisest course of action would be to attempt to use (as much as possible) WPF’s own faculties for searching through all relevant resource dictionaries.

There are a number of methods that allow for searching for specific resources across all loaded resource dictionaries; however, these are internal. But we don’t need to worry about that, because, like many things in life, simplicity is the key, and we can make use of a very simple and familiar method to achieve our objectives: FindResource. Many people are probably used to just using this method to find a resource by its literal string name; instead of the literal name, we’ll be searching using a specific type of ResourceKey.

Below is the code for the template selector:

/// <summary>
/// Provides a data template selector which honors data templates targeting interfaces implemented by the
/// data context.
/// </summary>
public sealed class InterfaceTemplateSelector : DataTemplateSelector
{
    /// <inheritdoc/>
    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        FrameworkElement containerElement = container as FrameworkElement;

        if (null == item || null == containerElement)
            return base.SelectTemplate(item, container);

        Type itemType = item.GetType();

        IEnumerable<Type> dataTypes
            = Enumerable.Repeat(itemType, 1).Concat(itemType.GetInterfaces());

        DataTemplate template
            = dataTypes.Select(t => new DataTemplateKey(t))
                .Select(containerElement.TryFindResource)
                .OfType<DataTemplate>()
                .FirstOrDefault();

        return template ?? base.SelectTemplate(item, container);
    }
}

InterfaceTemplateSelector.cs

This will return the first DataTemplate encountered which targets one of the interfaces implemented by the item, with greater precedence given to templates that specifically target the concrete type of the item. If no templates are found, then the base selection logic will fire. The base selection logic should expand the search to include the rest of the object types in the item’s type hierarchy. Thus, the order of precedence for returned templates, based on their targeted type, is:

  1. Templates targeting the item’s specific type
  2. Templates targeting an interface implemented by the item
  3. Templates targeting other object types found in the item’s type hierarchy.

#2 and #3 may seem a bit unnatural, and that’s because they are. A superior solution would give greater precedence to object types found in the item’s type hierarchy if they too implement the same interfaces. If they do not, and the implementation of the interface is done somewhere in the hierarchy lower than the position of a given ancestor, then the template targeting that interface would take precedence.

Such efforts are largely unnecessary, however, as common sense would dictate that this selector isn’t meant to handle data template selection in its entirety; it should be used in scenarios where the interface is king.
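
As far as putting the selector to use goes, assign an instance wherever a DataTemplateSelector is accepted. A minimal sketch in code-behind (myItemsControl being a hypothetical ItemsControl defined elsewhere; declaring the selector as a resource in XAML works just as well):

// myItemsControl is a hypothetical ItemsControl declared elsewhere in the view.
myItemsControl.ItemTemplateSelector = new InterfaceTemplateSelector();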

 

Microsoft has recently released a preview version of their new Office 2013 product which adds support for their new touch screen platforms (while maintaining support for PCs).

Looks like anyone can grab a copy and try it out for themselves; if interested, you can head on over to the official site. If you have an MSDN account, you can also find a download on there.

I decided to grab it myself to check out a few things, mainly:

  1. What the new interface looks like, and…
  2. If there were any breaking changes in their add-in hosting model and mechanisms.

Although I tend to only write (hopefully) deep and useful software development articles, let’s make an exception to that pattern today with a little exposé on the new Outlook.

Inbox and Calendar: The New Look (Warning: High Contrast)…

As soon as Outlook loaded I somewhat felt like I was being assaulted with “white”. Man, there is a lot of white color on the new interface.

Any and all shades of color have been nuked and obliterated from the UI. Instead, we have some very solid colors, with blue being a prominent part of the new palette.

Outlook 2013's Ribbon

Yes, very white.

I actually like user interfaces with white backgrounds, however, the interfaces I make which use that color tend to be a bit simpler than the one Outlook uses. My immediate reaction to it is not a favorable one.

The inbox panel is also very white:

Outlook 2013's Inbox Panel

Sorry for the censorship. Regardless, I’d opt to have a bit more of a delineation drawn between the individual items; more so here than in other places, even.

The Navigation Pane has been updated as well; it is now on the very bottom, and it is now composed of words rather than pictures:

Outlook 2013's Navigation Bar

I do believe I like the calendar very much, however. Here’s a screenshot of everything with the calendar showing:

Outlook 2013's Calendar

Very short summary: I think I like the look, but I can’t help but feel that the change to the interface initially invokes an overall feeling of shock when looking at it. A bit too white in areas, too; the blue bar at the bottom is kind of bugging me as well.

Add-ins and Whatnot

All of my add-ins targeting previous versions of Outlook loaded successfully in Outlook 2013. Looks like they preserved the plugin model they had going before, although I’m sure some unannounced changes will surface eventually, as they always do with new releases of Outlook.

In the future, I will be writing articles covering any new development technologies I run across in Outlook 2013.

 

WPF affords developers the opportunity to create layouts that coincide closely with how the look and behavior of a particular user interface were envisioned. The tricky part is, as always, knowing how to use the tools we’re given.

One common layout-related issue people run into is the layout of items displayed within an ItemsControl, specifically the space between the items. More often than not, items will not be displayed in a manner regarded as acceptable by the developer; some tweaking and customization is required to get what we want.

Sometimes we have very particular requirements set in how we want our final layouts to appear and behave. In this article, we’re going to look at an example featuring an ItemsControl, its children, and how we might be able to space it out so that it satisfies our goals.

I. The Scenario

In our application, we have the need to display a collection of customizable input fields at the bottom of our form. These input fields are defined by administrators managing the product, thus we are unaware of exactly how many of these input fields are going to end up appearing on our form.

Each input field consists of a self-describing label as well as an actual input element, such as a text box or a combo box. Although each field features two elements (label and input), it is collectively powered by a single piece of data. The following is an example of one of our input fields:

Example of a Displayed InputItem

Appearances are important, so we have a number of requirements on how collections of these fields are laid out:

  1. Fields are laid out horizontally, but must wrap if they cannot fit on the screen
  2. Each field is spaced apart from the next and previous field in the collection by an amount dependent on the width of the window or root container. Fields should neither be too close nor too far apart, regardless of window size.
  3. Wrapped fields are aligned perfectly underneath their counterparts located above and/or below them (resulting in a grid-like layout).

If the window is large enough, the input fields should be laid out in only a single row, with proper spacing between each one, like so:

A Single Row of Input Fields.

Notice how there is ample spacing between each item. Let’s see what we get when we constrain the width to the width shown above and add a few more fields:

Multiple Rows of Input Fields

See how everything is lined up in a grid-like fashion.

Now, we have no idea how many fields the customer will be adding, and we also don’t know how large their screens will be. So, our layout needs to be fluid in its shape in order to accommodate all these different possibilities. If we decrease the width of our window, we’ll have the following:

Multiple Rows of Input Items with Less Width Available

You’ll see how the number of columns shifted down to two. If your screen is wide enough, then you would get something like:

Multiple Rows of Input Items with Lots of Width Available

II. Pick Yer Poison (err, Panel)

Now that we have a good picture of what we want our layout to look like, we can proceed to choosing the particular panel to employ in our ItemsControl’s ItemsPanelTemplate.

Typically, when one wishes to achieve a “grid-like” layout, one would do well to make use of a Grid or UniformGrid. Certainly a UniformGrid might seem appropriate here, as it is the typical choice when the goal is to lay out items in a grid-like, evenly distributed fashion within an ItemsControl.

Unfortunately, that will not work for us here. We’re using WPF because it is dynamic dammit, and UniformGrid is a highly restrictive layout panel in that it requires the specification of the number of columns within its declaration.

Nay, a UniformGrid will not do! We need to adjust the number of columns based on the available width; in other words, we need to wrap appropriately. Therefore, the natural choice for our panel is the WrapPanel.

While the WrapPanel achieves our requirement of adjustment based on available width, it does not space out items and does not lay them out in a fashion that could be described as anything close to “grid-like”. If we simply plop in a WrapPanel within our ItemsPanelTemplate, we’re going to get something like the following:

Ugly Input Field Layout

Dear Lord, that is hideous. But things are never pretty by accident (except organic life forms), and we are well on our way in getting the layout we want by making the WrapPanel our weapon of choice.

To start on the process of beautifying this pathetic creature, let’s talk briefly about the UI design behind each of the input fields.

III. Input Fields Be Stylin’

I’m going to leave the exact design of these simple input fields up as an exercise for the reader, for the most part.

In brief, the data powering each input field is responsible for indicating the exact type of input control which should be rendered, be it via a Boolean value or what have you.

A single Style is used which targets controls of the…Control variety. The name we’ll be using for this particular style is InputControlStyle. Based on our little indicator, it is wired to select the appropriate template. The various templates consist of a text block to display the descriptive text of the field, as well as a declaration of the particular type of input control which distinguishes it from the others.

So, while the actual design of the input field is arbitrary as far as we’re concerned, part of the solution for making our layout nice and pretty lies in their design. As was just witnessed in the previous section, using a WrapPanel gives us wrapping, but no grid alignment.

Luckily for us, children need not be wholly dependent on a common ancestral container in order to be collectively aligned in a grid-like manner. Instead, we can share size information between each of these fields so that we end up with just that.

We can share size information by taking advantage of the SharedSizeGroup feature made available by the DefinitionBase class.

Now, you’ve probably never heard of the DefinitionBase class before, and that’s just swell, but you’ve certainly toyed with its derivatives (e.g. ColumnDefinition, RowDefinition). These derivatives, however, only apply to Grid controls, thus we are going to need Grid controls somewhere in our setup. Going back to the template for each of the different kinds of input fields, we mentioned that each template will consist of two items. However, some sort of layout container will be needed to band the two elements together, and that’s where our Grid declarations will go.

In order to maintain our typical level of high standards and cleanliness, we’ll want to create a Style for these Grid controls so that all templates which provide a different type of input element can join the party. Assuming our magic Grid style is named InputGridStyle, an example template (one featuring the use of a text box) is as follows:

<ControlTemplate x:Key="TextBoxInputTemplate">
    <Grid Style="{DynamicResource InputGridStyle}">
          <TextBlock Style="{DynamicResource InputTextBlockStyle}"
                     Grid.Column="0"
                     />
          <TextBox Style="{DynamicResource InputTextBoxStyle}"
                   Grid.Column="1"
                   />
    </Grid>
</ControlTemplate>

Template for Text Box Input Fields

The styles being applied to the text block and box are arbitrary and should obviously satisfy any aesthetic and data binding objectives.

Any other template need only adhere to using a Grid control with the above style applied in order to appear correctly in our ItemsControl.

Now that you have an unbelievable understanding of how these input templates are organized and work, we’ll move on to the real magic show: the common Grid style in use by each of these templates.

As was mentioned previously, we need to tap into the SharedSizeGroup feature in order to achieve our objectives. The SharedSizeGroup property accepts a string describing the name of the group that the particular Grid should enter into. All Grid controls entered into the same group will then share size information with each other as well as That Which Contains Them (a bit Lovecraftian, I know).

But wait, didn’t we say that the SharedSizeGroup property belongs to a DefinitionBase-based class, like ColumnDefinition or RowDefinition? Some of you may see this alone as being a problem in our quest to create a style defining the shared size group membership information.

For those of you not in the fold: styling ColumnDefinition/RowDefinition is problematic because, well…you can’t style the Grid control’s ColumnDefinitions and RowDefinitions properties. That’s because neither of those properties is actually a dependency property. What’s more, the Grid control is not actually a control, in that it does not derive from Control; it derives from Panel, which itself derives directly from FrameworkElement. This means that making use of a ControlTemplate is not a possibility either.

Well (maybe unfortunately for you), in my solution to our layout problem, I do actually style the grid’s column definitions, but I do so by making use of an attached property that lets me do so. Creating such an attached property is outside the scope of this article, but you should be able to find plenty of examples online that will guide you in that task. If all else fails, just drop the style altogether and simply re-declare duplicate ColumnDefinitions in all of your templates, and pray no one ever makes a spelling error in the name.
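
That said, for the curious, here is a rough sketch of what such an attached property might look like. The GridProperties and ColumnDefinitionCollection names simply mirror the ones used in the style below (the collection type being the simple custom one described in the note following the style), and the implementation is my guess at the general approach rather than the actual code from my framework. Note that each ColumnDefinition is copied before being handed to a Grid, since a single ColumnDefinition instance cannot belong to more than one Grid:

/// <summary>
/// Provides attached properties allowing column definitions to be assigned to a grid from a style.
/// </summary>
public static class GridProperties
{
    /// <summary>
    /// Identifies the attached property used to supply a grid with its column definitions.
    /// </summary>
    public static readonly DependencyProperty ColumnDefinitionsProperty
        = DependencyProperty.RegisterAttached("ColumnDefinitions",
                                              typeof (ColumnDefinitionCollection),
                                              typeof (GridProperties),
                                              new PropertyMetadata(null, OnColumnDefinitionsChanged));

    public static ColumnDefinitionCollection GetColumnDefinitions(Grid grid)
    {
        return (ColumnDefinitionCollection) grid.GetValue(ColumnDefinitionsProperty);
    }

    public static void SetColumnDefinitions(Grid grid, ColumnDefinitionCollection value)
    {
        grid.SetValue(ColumnDefinitionsProperty, value);
    }

    private static void OnColumnDefinitionsChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e)
    {
        Grid grid = sender as Grid;
        ColumnDefinitionCollection definitions = e.NewValue as ColumnDefinitionCollection;

        if (null == grid || null == definitions)
            return;

        grid.ColumnDefinitions.Clear();

        // The definitions live in a shared style resource, and a ColumnDefinition may only belong
        // to a single Grid, so each one is copied rather than added directly.
        foreach (ColumnDefinition definition in definitions)
        {
            grid.ColumnDefinitions.Add(new ColumnDefinition
                                           {
                                               Width = definition.Width,
                                               SharedSizeGroup = definition.SharedSizeGroup
                                           });
        }
    }
}

A Possible GridProperties.cs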

Here is the style of our Grid common to all the templates (the “lib” namespace is a fictitiously named one that represents a custom UI framework where I keep such things as the attached properties we’re going to be using):

<Style x:Key="InputGridStyle" TargetType="{x:Type Grid}">
    <Setter Property="Margin" Value="0,0,0,5"/>
    <Setter Property="lib:GridProperties.ColumnDefinitions">
        <Setter.Value>
            <lib:ColumnDefinitionCollection>
                <ColumnDefinition SharedSizeGroup="InputTextGroup"/>
                <ColumnDefinition SharedSizeGroup="InputValueGroup"/>
            </lib:ColumnDefinitionCollection>
        </Setter.Value>
    </Setter>
</Style>

Style Used by Input Field Grids

Note: Although WPF ships with a ColumnDefinitionCollection type (which is where ColumnDefinition instances get stored), and even though it is public, it has no default constructor, thus we cannot make use of it in an XAML declaration. You’ll need to make one yourself; all it needs to do is derive from an existing collection class targeting the ColumnDefinition type for its items (ObservableCollection<ColumnDefinition> or whatnot), as sketched below.
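
A minimal sketch of such a collection, assuming it lives in the same namespace as the GridProperties class sketched earlier (where its name will shadow WPF’s own ColumnDefinitionCollection), could be as simple as:

/// <summary>
/// Provides a XAML-friendly collection of column definitions.
/// </summary>
/// <remarks>
/// WPF's own ColumnDefinitionCollection lacks a public default constructor; this stand-in exists
/// purely so column definitions can be declared as the value of a style setter.
/// </remarks>
public sealed class ColumnDefinitionCollection : Collection<ColumnDefinition>
{ }

A Possible ColumnDefinitionCollection.cs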

That takes care of our input item styles.

IV. Turn on Size Sharing

Now that our items are set up to share size information, we need to enable size sharing on the containing WrapPanel itself. We can do this simply by setting the Grid.IsSharedSizeScope attached property to true in the WrapPanel declaration.

Taking everything we’ve done, we should have the following declaration for our ItemsControl thus far:

<ItemsControl ItemsSource="{Binding YourData, UpdateSourceTrigger=PropertyChanged}">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <WrapPanel Grid.IsSharedSizeScope="True"
                       Orientation="Horizontal"
                       HorizontalAlignment="Center"
                       />
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <ContentControl Style="{DynamicResource InputControlStyle}" />
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>

Our ItemsControl Declaration Thus Far

OK! Great, let’s see what our ItemsControl looks like now (for our example, we only have enough width available for there to be two columns):

Better Looking Input Field Rows

Looking much better! But…hmm…they are still a bit scrunched together. Remember that the x-Dimension of a WrapPanel is constrained to its content, much like the StackPanel. There is no feature made available by the WrapPanel class which will space things out nicely for us; instead, we’re going to have to space our items out ourselves by adding margins to them.

So, alright…things look a bit scrunched between where the first control ends and the next one begins, and so on. Let’s then add a margin to the right of each item by amending our ItemTemplate with a Margin attribute like so:

<ContentControl Style="{DynamicResource InputControlStyle}"
                Margin="0,0,25,0"
                />

Adding a Margin to Our ItemTemplate Declaration

Alright, not too hard. Let’s look at it now:

Two Columns of Input Fields Looking Mighty Fine

Alright. Sweetness! If we give our form some more width, it’ll fill up three columns as was shown in some of the earlier screenshots in this article; if we remove even more width, we’ll skinny on down to a single column even:

One Column of Some Mighty Fine Input Fields

Looks like our work here is done.

Good journeys, all.

 

Resource mailboxes play host to a number of room resource specific settings and policies, such as the window (in days) in which you can “book” the resource, as well as a many-layered system of policies and permissions which affect who may use them as well as their experience in doing so. Your application may have a need to read and synchronize with these settings.

Retrieving these settings using PowerShell is straightforward; however, that doesn’t stop the topic from being covered incessantly by others on the Internet. Although we can call PowerShell cmdlets from code, the only real solution here is to retrieve the settings using the one and only proper and direct interface to Exchange, i.e. MAPI. To do that, however, it helps to know how and where these settings actually get stored in Exchange.

This article examines how these room resource specific settings are actually structured and stored in Exchange, and how you can go about retrieving them. The substance of this article is a result of research (necessitated by the lack of official documentation on the subject) into the inner-workings of Exchange in regards to how policies affecting resource mailboxes are organized.

1. Meet the Resource Settings

Before we get into the nitty-gritty, it probably would help to briefly go over exactly what I mean when I refer to “resource mailbox settings”. Using the Exchange Management Console, we can easily view them by navigating to Recipient Configuration -> Mailbox, right clicking on a resource mailbox, and clicking on Properties. Many tabs will be presented to you; however, for the purposes of this article, we will only be concerning ourselves with five of them.

The first is the Resource General tab, which exposes capacity and custom property settings for the resource mailbox:

Resource General via the EMC

Resource General

Next, we have the prominent Resource Policy tab, which features the most numerous and interesting settings:

Resource Policy via the EMC

Resource Policy

Continuing on, we have the Resource Information tab, which is less about “information” and more about binary settings in relation to the resource mailbox:

Resource Information via the EMC

Resource Information

Ever persistent, we move next to the Resource In-Policy Requests tab, which itself does well to explain what it is that it does:

Resource In-Policy Requests via the EMC

Resource In-Policy Requests

Our journey coming to an end, we arrive at the last stop, the Resource Out-Of-Policy Requests tab:

Resource Out-Of-Policy Requests via the EMC

Resource Out-Of-Policy Requests

The settings that were shown in the preceding images are, collectively, what I will be referring to as the resource settings of a resource mailbox. It is our intention to learn how we might be able to exploit these settings during interactions with an Exchange server which operate on a much more basic level than those that take place through the use of the Exchange Management Console.

There are some very large companies out there which make use of resource settings in order to control the access to and use of various company assets; if your product operates in a space which leverages these assets in a similar manner, it may prove to be desirable to be able to synchronize with these settings so their methods of administration need not change so much.

First, we must determine if what we wish for is indeed possible. As we can see above, we are looking at the settings via the EMC. We also know that we can manipulate these same resource settings from a PowerShell interface.

An example that exists outside of the Exchange family of software is Microsoft’s own Forefront Identity Manager, which also mirrors the interface and settings exposed by the EMC in a similar-looking way.

Taking all of that into account, then, there must be a way for us to get at those values as well.

As an aside: the purpose of this article is not to cover what each and every resource setting does, as there are a billion articles online that do that. Therefore, I’m assuming that the reader is either intimate in regards to the workings of each setting, or that they at least find them to be obvious in their purpose.

2. Where the Resource Settings Call Home

Although the resource settings shown above all seem to be associated with a resource mailbox in some way, we still really have no idea where they actually get stored. Certainly none of them can be categorized as the typical types of data one expects to encounter when aimlessly trawling through a MAPI store, folder, etc.

If it is our intent to be able to retrieve a resource mailbox’s resource settings, then one of the central questions to be answered is where these resource settings are actually stored in Exchange, as well as what the nature and structure of that storage is.

So, where do the resource settings for a particular resource mailbox come from? Well, it may (or may not, depending on your level of jadedness) surprise you to know that these resource settings do not come from any single location, but rather a couple of sources; namely, the address entry associated with and calendar folder contained by the resource mailbox.

Knowing this, let’s group all of the resource settings known to us into two separate enumerations based on each setting’s origination.

2.1 Address Entry Based Settings
  1. Resource capacity
  2. Resource custom properties
  3. Resource type
  4. Resource delegates (although information may be incomplete…see section on Address Entry Based Settings for more information)
2.2 Calendar Folder Based Settings
  1. Enable Resource Booking Attendant
  2. Delete attachments
  3. Delete comments
  4. Delete subject
  5. Delete non-calendar items
  6. Add organizer’s name to subject
  7. Remove private flag on accepted meetings
  8. Send organizer info upon declining of request due to conflicts
  9. Add additional text to response
  10. Additional text
  11. Mark pending requests as tentative
  12. Allow conflicts
  13. Allow recurring meetings
  14. Schedule only during working hours
  15. Enforce scheduling horizon (booking window)
  16. Booking window
  17. Maximum duration
  18. Maximum number of conflicts
  19. Allowed conflict percentage
  20. Forward meeting requests to delegates
  21. In-policy requests from all users automatically approved
  22. List of users whose in-policy requests are automatically approved
  23. In-policy requests from all users can be approved
  24. List of users whose in-policy requests can be approved
  25. Out-of-policy requests from all users can be approved
  26. List of users whose out-of-policy requests can be approved
  27. Resource delegates (most complete set of information, however more work is required in order to get at the data…see the section on Calendar Folder Based Settings for more information)

And there we have a complete listing of all the public resource settings as well as where we might find them. Cool, but simply knowing where they are isn’t going to cut it for us. We need to know exactly where and how each setting is stored in its respective container if we wish to be able to access their value.

The address entry based settings are simply stored as separate, run-of-the-mill, MAPI properties (albeit “non-standard” MAPI properties) directly on the address entry. The calendar folder based settings are a whole other story.

Because it is the simpler of the two, let’s get into the details of the various address entry based settings first.

3. Address Entry Based Settings

Although it would seem to make more sense if all the resource settings were stored in a single location, in fact it is their use which dictates where they can be found. The presence of a resource’s capacity and custom properties settings on its address entry seems to occur merely to accommodate the Room Finder feature one can find in Outlook.

The set of criteria one may use in order to refine resource searches is a direct reflection of what resource settings are exposed on each of the resource mailbox’s associated address entries. This makes sense, because when we’re using the Room Finder, we are searching across address entries in an address list, not message stores.

3.1 Resource Capacity

The capacity of a room resource can be found by accessing the PR_EMS_AB_ROOM_CAPACITY MAPI property located on its associated address entry.

This property is of the type PT_LONG, its canonical name is PidTagAddressBookRoomCapacity, and it has a property tag of 0x08070003.
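
If Redemption happens to be your MAPI client of choice, reading this should amount to a couple of lines once you have the resource’s address entry in hand (how you obtain the RDOAddressEntry is up to you). I’m assuming here that its Fields indexed property accepts a raw property tag, so treat this as a sketch rather than gospel; the other address entry based settings below can be read the same way using their respective property tags.

// PR_EMS_AB_ROOM_CAPACITY (PidTagAddressBookRoomCapacity), a PT_LONG property.
const int PR_EMS_AB_ROOM_CAPACITY = 0x08070003;

// roomAddressEntry is an RDOAddressEntry for the resource mailbox, obtained elsewhere.
int roomCapacity = (int) roomAddressEntry.Fields[PR_EMS_AB_ROOM_CAPACITY];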

3.2 Resource Custom Properties

This fun little creature is a bit more complicated than the resource’s capacity property, and it can actually be found in a couple of places on the address entry, with one of the places being more of a reference to the resource’s custom properties as opposed to the actual entity that defines them.

The MAPI property which defines the resource’s custom properties has neither a name nor a canonical name. It certainly has a property tag, however, and that property tag is 0x0806101F.

This property is of the type PT_MV_UNICODE, which basically means that we are dealing with an array of strings, with the property being “multivalued” and all.

This array will be composed of an entry for each custom property as well as an additional entry at the end of the array which reflects the name of the schema which the preceding custom properties fall under. Because the name of the schema that a custom property falls under must match the type of resource it is meant to be assigned to, the name of the schema should typically always be Room in the case of the resource mailbox being of the room type.

For example, if we use the resource mailbox pictured in the screenshots shown in the previous section, we would get an array of the following composition if we were to access its custom properties:

0:AV;1:TV;2:Room

3.3 Resource Type

An important little fact in regards to a resource mailbox is its type. Is it a piece of equipment, or a room!?! Although you can typically rely on room resources being located underneath the All Rooms address list and equipment resources being located underneath the All Equipment address list (there is one of those…I think?), that’s taking way too big of a leap of faith to be considered a sane approach.

The MAPI property housing the resource’s type, much like the property housing its custom properties, has neither a name nor a canonical name. Its property tag, however, is known to us, and it is 0x0808101F.

It too is of the PT_MV_UNICODE type, although I can’t for the life of me figure out why. It should always contain only a single entry which uses a format of ResourceType:[ResourceTypeName] (drop the brackets, obviously!).

In the case of a room resource mailbox, the value of this property should always be ResourceType:Room.

3.4 Resource Delegates

The users assigned as resource delegates to the resource mailbox can indeed be found in the address entry associated with the resource mailbox, however it is not guaranteed to actually be a complete listing of those users.

The EMC, for example, most assuredly does not get the list of users it uses to populate the Resource Delegate list view from the address entry. However, I bring it up because it is a very simple way to get at those delegates, and it has built-in support when using MAPI client interfaces such as Redemption.

For a better way to get at the list of resource delegates, see the Calendar Folder Based Settings section coming up in a little bit.

The delegates for a resource can be found by accessing the PR_EMS_AB_PUBLIC_DELEGATES property located on the resource’s associated address entry. Note that it says PUBLIC in the name — there is a good reason for that, as some delegates are indeed “private”.

It is of the type PtypEmbeddedTable, its canonical name is PidTagAddressBookPublicDelegates, and its property tag is 0x8015000D.

If you are using Redemption, you will find this property in the Delegates property exposed by the relevant RDOAddressEntry object instance.

4. Calendar Folder Based Settings

Now that we’ve gone over the relatively few address entry based settings that exist, it is time to move on to the fun stuff.

The majority of the resource settings can actually be found inside its Calendar folder. They do not, however, exist as properties on the calendar MAPI folder itself; rather, they can be found in a special message located in the calendar’s folder-associated information table.

The particular message which houses the resource settings is one that can be found in the associated contents table of any calendar folder, regardless of the type of message store that contains it. This special message can be identified by its message class, which is IPM.Configuration.Calendar. Only one will exist in a folder’s associated contents table, making the use of the message class as a filtering agent a valid action.

That should be all the information you need in order to find this message. If you happen to be using Redemption as your MAPI client, however, you can get at the messages stored in the calendar’s associated contents table by accessing the HiddenItems property exposed by the relevant RDOFolder object instance.

Looking at the IPM.Configuration.Calendar message using MFCMAPI with the message of interest highlighted.

Looking at the IPM.Configuration.Calendar message using MFCMAPI

Once you find this message, you’ll see that it is like any other message, in that it has a plethora of MAPI properties. So what are the properties where we can find the values for our resource settings?

Well, if all you do is simply look over all the available properties by their name, you’re going to end up empty handed.

The resource settings, both fortunately and unfortunately, happen to all be stored in a single MAPI property. You may feel that this approach to the storage of the resource settings is a bit strange and out of place in comparison to how data is typically organized in a MAPI store, and I’d have to agree with you. But this is reality, and idle dreaming won’t do well to get us very far in what we wish to do.

The resource settings are stored in a dictionary-like structure which is defined in XML markup. The schema that it uses is…very interesting. I normally like interesting things, but all that serves to do here is make the process of deserializing the data more problematic. We’ll talk a bit more about that in a second; let’s discuss where this data lives first.

You can find the resource settings in the calendar configuration message by accessing the MAPI property named PR_ROAMING_DICTIONARY.

This is a PT_BINARY typed property, its canonical name is PidTagRoamingDictionary, and its property tag is 0x7C070102.
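
As we’ll see in a moment, what lives inside this property is a UTF-8 encoded XML document, so retrieval really just amounts to finding the right message and decoding some bytes. Here is a rough sketch using Redemption and the HiddenItems property mentioned earlier; it assumes you already have the resource’s calendar as an RDOFolder, and that your version of Redemption hands PT_BINARY properties back from Fields as a byte array (adjust the decoding if yours does otherwise):

// PR_ROAMING_DICTIONARY (PidTagRoamingDictionary), a PT_BINARY property.
const int PR_ROAMING_DICTIONARY = 0x7C070102;

string dictionaryXml = null;

// calendarFolder is an RDOFolder pointing at the resource mailbox's calendar, obtained elsewhere.
foreach (RDOMail hiddenItem in calendarFolder.HiddenItems)
{
    if (!string.Equals(hiddenItem.MessageClass, "IPM.Configuration.Calendar", StringComparison.OrdinalIgnoreCase))
        continue;

    byte[] dictionaryStream = (byte[]) hiddenItem.Fields[PR_ROAMING_DICTIONARY];
    dictionaryXml = Encoding.UTF8.GetString(dictionaryStream);
    break;
}

Reading the dictionarystream with Redemption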

If we consult Exchange Server Protocol documentation, we learn that this particular property is one that contains an entity known as a dictionarystream. This is basically a binary stream which contains an XML document (UTF-8 encoding). The schema that is used for the XML dictionary is as follows:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema targetNamespace="dictionary.xsd"
           xmlns="dictionary.xsd"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="UserConfiguration">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Info">
          <xs:complexType>
            <xs:sequence />
            <xs:attribute name="version"
                          type="VersionString">
            </xs:attribute>
          </xs:complexType>
        </xs:element>
        <xs:element name="Data">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="e"
                          minOccurs="0"
                          maxOccurs="unbounded"
                          type="EntryType">
              </xs:element>
            </xs:sequence>
          </xs:complexType>
          <xs:unique name="uniqueKey">
            <xs:selector xpath="e" />
            <xs:field xpath="@k" />
          </xs:unique>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:simpleType name="VersionString">
    <xs:restriction base="xs:string">
      <xs:pattern value=".+\.\d+" />
    </xs:restriction>
  </xs:simpleType>
  <xs:complexType name="EntryType">
    <xs:sequence />
    <xs:attribute name="k"
                  type="ValueString" />
    <xs:attribute name="v"
                  type="ValueString" />
  </xs:complexType>
  <xs:simpleType name="ValueString">

    <xs:restriction>
      <xs:simpleType>
        <xs:restriction base="xs:string">
          <xs:pattern value="\d+\-.*" />
        </xs:restriction>
      </xs:simpleType>
    </xs:restriction>
  </xs:simpleType>
</xs:schema>

The place where our resource settings live is within the <Data> element, which is where all of the dictionary name-value pairs go. Now, these do look a bit different from what we’re used to from a .NET perspective, but, nevertheless, there is order to be found amongst the madness here.

Each name-value pair consists of an <e> element and has two attributes: k and v. The k attribute basically acts as the key to the entry, thus it is where you can find the name of the particular setting that the name-value pair represents. The v attribute is obviously then the value portion. That is simple enough; however, the complexities arise in how the value is actually expressed within the v attribute.

The value assigned to the v attribute is actually composed of several parts, each of which is delimited from the next by a hyphen. This is known as the ValueString, and it takes on the following shape:

<data type>-<string encoded value>

…at least according to the documentation. In reality, it is typically more complicated than this.

Also according to the documentation, the data type used must be one of three values: 3 (Boolean), 9 (32-bit signed integer), or 18 (string). And once again, the documentation is not exactly on par with reality, as additional data types are indeed used.

In order to cement our understanding of these calendar based resource settings, let’s look at a real dictionarystream coming from a resource mailbox’s IPM.Configuration.Calendar message. Note that this example does not necessarily reflect the values of the settings as per the screenshots shown at the beginning of this article. There is an important reason for this: if a setting is set at its default value, then it will not appear in the dictionary. Here it is:

<?xml version="1.0" encoding="utf-8"?>
<UserConfiguration>
  <Info version="Exchange.12" />
  <Data>
    <e k="18-ForwardRequestsToDelegates" v="3-False" />
    <e k="18-AddOrganizerToSubject" v="3-False" />
    <e k="18-ConflictPercentageAllowed" v="9-12" />
    <e k="18-DeleteComments" v="3-False" />
    <e k="18-DeleteSubject" v="3-False" />
    <e k="18-AutomateProcessing" v="9-2" />
    <e k="18-MaximumDurationInMinutes" v="9-1441" />
    <e k="18-AllowConflicts" v="3-False" />
    <e k="18-RequestInPolicy" v="1-18-1-36-6f33f1c8-ba68-4852-8ba3-a727920a5f39" />
    <e k="18-RemovePrivateProperty" v="3-False" />
    <e k="18-ScheduleOnlyDuringWorkHours" v="3-False" />
    <e k="18-EnforceSchedulingHorizon" v="3-False" />
    <e k="18-TentativePendingApproval" v="3-False" />
    <e k="18-OrganizerInfo" v="3-False" />
    <e k="18-AllRequestOutOfPolicy" v="3-False" />
    <e k="18-BookInPolicy" v="1-18-2-36-b867b454-75c4-480f-aba0-47bd5b11d5b7-36-7461ec96-5929-4905-8f73-a4e0abb1ff8e" />
    <e k="18-MaximumConflictInstances" v="9-5" />
    <e k="18-AllBookInPolicy" v="3-False" />
    <e k="18-AddAdditionalResponse" v="3-True" />
    <e k="18-AdditionalResponse" v="18-Additional Test" />
    <e k="18-DeleteAttachments" v="3-False" />
    <e k="18-DeleteNonCalendarItems" v="3-False" />
    <e k="18-BookingWindowInDays" v="9-181" />
    <e k="18-RequestOutOfPolicy" v="1-18-1-36-b867b454-75c4-480f-aba0-47bd5b11d5b7" />
    <e k="18-AllRequestInPolicy" v="3-False" />
    <e k="18-AllowRecurringMeetings" v="3-False" />
  </Data>
</UserConfiguration>

As we can see above, there exists a name-value pair for every one of the resource settings we previously declared as being calendar folder based. Most of these properties look fairly normal and easy to digest, however there are a few that live outside the scope of official documentation. Particularly, there appear to be a few settings that have a ValueString assigned to their v attribute that contains a data type not found in the official documentation’s table of possible data types.

An example of this is the value assigned to the RequestInPolicy setting.

If we follow the standard format of a ValueString, then the data type being used here is 1. Unfortunately, there is no data type in official documentation that uses this identifier. Fortunately for us, we have brains, and we can quickly deduce, based on the setting itself (which receives a list of users that can make in-policy requests), that the data must be some sort of collection of values.

OK, so we’re past the data type: this is a list of some kind. Let’s continue down the ValueString. We’ll quickly notice that, also unlike what’s documented, this particular ValueString consists of many more parts than just a data type and actual value. Let’s go through each one and then come up with our own kind of format used for this data type.

Right after the standard data type portion of the value type, we see an 18. This represents something I’ll call the sub-data type, or rather, the type of data for all the members in the collection. This particular sub-data type indicates that the collection is one that contains all string values.

After the sub-data type indicator, we can see a 1. While what this means is not obvious from the current setting we’re looking at, if you look at another setting with more than one user assigned, it quickly becomes apparent: this is an indicator of the number of items in our collection. For this particular setting, the 1 indicates that there is only one string to be found in the collection.

Next, we see a 36. What this represents is the length of the string encoded value. So, in our example, because the string encoded value is 36 characters, we have a 36 for the size indicator. It is important to note that this indicator indicates the length of the string encoded value, not necessarily the actual value. So, if we theoretically happened to have a collection comprised of 32-bit signed integer data, then the number 942 (were it to be a member of the collection) would yield a size indicator of 3.

Going onward, we’ll finally arrive at the first (and only) value for the collection. Perhaps confusingly at first, this value itself contains multiple hyphens; this is why the length indicator is crucial. There are hyphens in the value because what this value is, is a GUID.

So, with all of that figured out, let’s define the format for ValueStrings using a collection data type:

1-<sub-data type>-<n>{[-<value length>-<string encoded value>](0) ... [-<value length>-<string encoded value>](n-1)}

…with n representing the number of items in the collection.
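
To make that format a bit more tangible, the following is a small sketch of how one might pull the member values back out of a collection-typed ValueString. It leans on the length indicators rather than the hyphens precisely because the values themselves contain hyphens:

/// <summary>
/// Parses the member values found in a collection-typed (type 1) ValueString.
/// </summary>
private static IList<string> ParseCollectionValueString(string valueString)
{
    string[] header = valueString.Split(new[] { '-' }, 4);

    if (header.Length < 4 || header[0] != "1")
        throw new FormatException("Not a collection-typed ValueString.");

    // header[1] is the sub-data type (18 for strings); header[2] is the number of members.
    int memberCount = int.Parse(header[2], CultureInfo.InvariantCulture);
    string remainder = header[3];

    List<string> values = new List<string>();

    for (int i = 0; i < memberCount; i++)
    {
        int delimiterIndex = remainder.IndexOf('-');
        int valueLength = int.Parse(remainder.Substring(0, delimiterIndex), CultureInfo.InvariantCulture);

        // The length indicator tells us exactly how many characters the value occupies, which is
        // what lets us step safely over the hyphens embedded within the value itself.
        values.Add(remainder.Substring(delimiterIndex + 1, valueLength));

        remainder = remainder.Substring(delimiterIndex + 1 + valueLength).TrimStart('-');
    }

    return values;
}

Parsing a Collection-Typed ValueString

Feeding the BookInPolicy value from the example dictionary above through this method, for instance, yields the two GUIDs embedded within it.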

And there we have it. But not quite: we still don’t know what the heck the GUID values found in these collections even are.

Well, I’ll tell you what they are: each GUID value is the value of the objectGUID property of a user object in Active Directory. Thus, in order to ascertain the identity of a user from these settings, you would need to first bind to their user object using the objectGUID, and then go from there.

This is a sensible approach to storing a reference to a user’s identity; a user’s objectGUID property will never change. In fact, I wish more of the data stored in Exchange took this approach — too often the only reference to the user is the user’s display name.

Still, it will add a slight burden onto you in order to get any user out of it.
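
On the .NET side of things, System.DirectoryServices can bind to the user directly by that GUID. A sketch of the idea (note that some environments may want the GUID without its hyphens, i.e. ToString("N")):

// guidFromValueString is one of the GUID strings extracted from a collection-typed ValueString.
Guid objectGuid = new Guid(guidFromValueString);

// ADSI supports binding by objectGUID; swap in objectGuid.ToString("N") if your environment
// rejects the hyphenated form.
using (DirectoryEntry userEntry = new DirectoryEntry("LDAP://<GUID=" + objectGuid + ">"))
{
    string displayName = (string) userEntry.Properties["displayName"].Value;
}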

4.1 How Am I Supposed to Read This Crap?

Yes, yes…we understand now the structure of the dictionarystream that contains the resource settings, however we probably want to be able to somehow deserialize the raw XML data into some nice and friendly objects.

Well, given the nature of the schema, it would be foolish to expect, as far as .NET XML serialization is concerned, that deserializing this data would be something automatically supported. It has hopefully become more and more obvious to you from reading this article that these resource settings weren’t designed with an expectation that people outside Microsoft would be reading them directly from their points of storage.

One might wonder why it is stored in an XML dictionary at all. If it somehow makes the data more easily “shippable” when responding to Exchange Web Services requests, then that would not surprise me, but I have not examined that particular aspect, so I cannot say.

If you want to read this type of dictionary, there is no other way than to create a class which implements the IXmlSerializable interface. This is, of course, a very non-trivial thing to do. If you need some guidance as to how to do this, then you can refer to Microsoft’s own implementation, which is the internal ConfigurationDictionary class found in the Microsoft.Exchange.Data.Storage assembly.

I’d provide one for you, but frankly, that’s when I would start charging money.

4.2 Resource Delegates

Back in the section on address entry based settings, I mentioned that the resource delegates are not actually read from the address entry by the EMC, and that the address entry is not the best place to retrieve that information. This is more of a general fact than one that is specific to resource mailboxes.

The reason why the address entry may not be complete in the information it offers is that it is possible to set up a user so that they are hidden from all address lists. You can find this option on the General tab in the properties window of the user’s mailbox, as pictured below:

User is Hidden and Will Not Be Listed Publicly as a Delegate

If this check box is checked, then the user will not be publicly listed as a delegate, even though they are a delegate.

The best way to get all the delegates for a resource mailbox is by looking at the Access Control List for the resource mailbox’s calendar folder. I’m fairly convinced that this is what the EMC does when it displays the resource delegates for a particular mailbox, and it does this by referring to the calendar folder’s PR_NT_SECURITY_DESCRIPTOR MAPI property. If you are using Redemption, you can do this by accessing the ACL property exposed by the relevant RDOFolder object instance.

However, of course, other permissions which are intended for anything but delegate access may very well be defined in the ACL, so you will have to determine who’s a delegate and who isn’t on an entry-by-entry basis. I do not know if it is possible for an Exchange administrator to change what the default delegate roles are, but if that turns out to not be possible, then you can simply check whether or not the entry has an access mask value of 0x1208AB, which I believe is what gets doled out when someone is assigned delegate permissions (you will want to research that out, however).
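
As a rough illustration of that approach, and with the same caveat about verifying the access mask, the enumeration with Redemption might look something like the following. I’m assuming the ACL entries expose Rights and Name members, so check the Redemption documentation for the exact names before relying on this:

// 0x1208AB is the access mask I believe corresponds to delegate permissions; verify this yourself.
const int DelegateAccessMask = 0x1208AB;

// calendarFolder is an RDOFolder pointing at the resource mailbox's calendar, obtained elsewhere.
foreach (RDOACE aclEntry in calendarFolder.ACL)
{
    // An entry whose rights match the delegate mask most likely belongs to a resource delegate.
    if ((int) aclEntry.Rights == DelegateAccessMask)
        Console.WriteLine("Likely delegate: " + aclEntry.Name);
}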

4.3 Dictionary Keys for Resource Settings

One last bit of knowledge that may help is to know which name-value pairs in the dictionary represent which resource settings. For the majority of these settings, this is fairly obvious; however, I will provide a table for you because I’m a nice guy.

Resource Setting | Dictionary Key

Enable Resource Booking Attendant | AutomateProcessing
Delete attachments | DeleteAttachments
Delete comments | DeleteComments
Delete subject | DeleteSubject
Delete non-calendar items | DeleteNonCalendarItems
Add organizer’s name to subject | AddOrganizerToSubject
Remove private flag on accepted meetings | RemovePrivateProperty
Send organizer info upon declining of request due to conflicts | OrganizerInfo
Add additional text to response | AddAdditionalResponse
Additional text | AdditionalResponse
Mark pending requests as tentative | TentativePendingApproval
Allow conflicts | AllowConflicts
Allow recurring meetings | AllowRecurringMeetings
Schedule only during working hours | ScheduleOnlyDuringWorkHours
Enforce scheduling horizon (booking window) | EnforceSchedulingHorizon
Booking window | BookingWindowInDays
Maximum duration | MaximumDurationInMinutes
Maximum number of conflicts | MaximumConflictInstances
Allowed conflict percentage | ConflictPercentageAllowed
Forward meeting requests to delegates | ForwardRequestsToDelegates
In-policy requests from all users automatically approved | AllBookInPolicy
List of users whose in-policy requests are automatically approved | BookInPolicy
In-policy requests from all users can be approved | AllRequestInPolicy
List of users whose in-policy requests can be approved | RequestInPolicy
Out-of-policy requests from all users can be approved | AllRequestOutOfPolicy
List of users whose out-of-policy requests can be approved | RequestOutOfPolicy

Post a comment if you have any questions or anything interesting to say.

© 2012-2013 Matt Weber. All Rights Reserved. Terms of Use.