Matt Weber

I'm the founder of Bad Echo LLC, which offers consulting services to clients who need an expert in C#, WPF, Outlook, and other advanced .NET-related areas. I enjoy well-designed code, independent thought, and the application of rationality in general. You can reach me at


With over 10 years' experience in a lead .NET developer role, during which I architected and implemented a number of successful large-scale products, I am happy to announce that I am now offering my services to clients needing an expert in C#, WPF, Outlook, and other advanced .NET-related areas. See my resume for more information on my capabilities.

Please send any requests for my services or other opportunities you feel may interest me to


Office add-ins typically feature a specific type that is truly the star of the show, with its responsibilities including things such as facilitating the connection to the target product, extending the Office ribbon, and more. We typically refer to this type as the Connect class, and given the number of responsibilities it can potentially acquire, it can easily turn into a very large, bulky, and perhaps slightly Lovecraftian entity.

When such a thing becomes reality for your add-in, you may start to wonder if it is at all possible to break up these responsibilities into separate types; indeed, separation is possible, and this article will go into how. Understanding the solution, however, requires understanding the actual problem at hand.

Before we start, however, I would note that if you use VSTO to develop your add-in, this article most likely does not apply to you. I can’t be sure, as I stay very far away from the thing. If interfaces such as IDTExtensibility2 and IRibbonExtensibility appear alien to you, then I can assure you that reading any more after this point would only serve to consume your limited time.

The Problem

The Parties Involved

An add-in’s implementation of the IDTExtensibility2 interface serves as its connection to the Office product it targets. This is why we typically use Connect as the name for the implementing class. Without it, our add-in has no way of being loaded/unloaded by the host Office application.

If we want our add-in to improve upon Office's ribbon, then we must implement the IRibbonExtensibility automation interface. This is used by Office in order to load the custom ribbon XML markup from the add-in and execute callbacks via the use of IDispatch::GetIDsOfNames and IDispatch::Invoke. Because it is Office that makes use of these methods, it follows that Office must be able to locate the implementation, and the means by which it does so may be unclear to those coming from a strictly .NET/managed background.

In order to allow our add-in to extend the ribbon, the usual course of action for those of us developing in a managed environment is to simply add an IRibbonExtensibility implementation to our Connect class. Office will pick up on this just fine (although many of us at the time may not be very sure as to how it does so), resulting in our envisioned custom ribbon making it onto the live application instance.

Let’s not forget that Office has a few other (albeit less common) extensibility points we may be interested in, such as the ability to provide custom task panes through the use of the ICustomTaskPaneConsumer interface. The prime candidate for implementing such interfaces will yet again most likely be our trusty Connect class.

A Growing Manifestation

We like to keep our code clean and compartmentalized, with each module or class responsible for as few things as possible. This desire quickly becomes an impossible dream for many of us once we start dabbling in areas such as extension of the Office ribbon.

If we’re dealing with anything more than the simplest of add-ins, we’re going to start to see a very large number of callbacks being added to our Connect class. In the end, our Connect class will be very large, with potentially lots of responsibilities.

So it may come to be that you want to separate your add-in entry point code (IDTExtensibility2) from your ribbon extension code (IRibbonExtensibility) into two separate entities. If you know little about the mechanisms Office uses to tap into your IRibbonExtensibility implementation, this may seem nigh impossible at first.

If we do some blind digging around how Outlook seems to be finding and interacting with our add-in, we can see that the scant registration data pertaining to our add-in makes no mention of its ribbon extending capabilities; in most cases, the only thing related to our add-in that is registered with the system is our Connect class.

Office is a strictly unmanaged beast. Consequently, it certainly doesn’t make use of .NET Reflection APIs in order to figure out if your add-in’s entry point implements IRibbonExtensibility. So, how does Office know if your add-in extends its ribbon?

COM-mon Knowledge, Baby

Hopefully you wouldn’t be surprised if I told you that the add-in you’ve been developing is a COM add-in. Indeed, your add-in is an in-process COM server that implements the IDTExtensibility2 interface as described in Msaddndr.dll (which just rolls off the tongue, I know).

So, hopefully you don’t feel it to be a very large leap of faith to go a bit further here and extrapolate that Office is making use of the basic set of tools native to COM in order to do all of its digging (I’d think the mention of the IDispatch interface earlier might have been a bit of a giveaway).

Your Connect class (well, actually the IDTExtensibility2 implementation found in your unmanaged shim, you are using a shim right?) is actually registered as an in-process server for your add-in via an InProcServer32 registry key. Office will get an interface pointer to the IUnknown of the object that implements IDTExtensibility2, and then use that pointer to QueryInterface for any subsequent interfaces (such as IRibbonExtensibility, etc.).
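For reference, the registration involved typically looks something like the following sketch; note that the CLSID, ProgID, friendly name, and file path shown here are hypothetical placeholders, not values taken from this article:

```
; Hypothetical registration for an Outlook add-in shim. The shim DLL is the
; registered in-process server; nothing here mentions ribbon extensibility.

[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}\InprocServer32]
@="C:\\Program Files\\MyAddin\\MyAddinShim.dll"
"ThreadingModel"="Apartment"

[HKEY_CURRENT_USER\Software\Microsoft\Office\Outlook\Addins\MyAddin.Connect]
"FriendlyName"="My Add-in"
"LoadBehavior"=dword:00000003
```

As the sketch suggests, all Office has to go on is the class it can instantiate via the in-process server; everything else it learns by querying that object's interfaces.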

That’s all we need to know.

The Solution

If we want the responsibilities of extending Office’s ribbon delegated to another class, we need to return a pointer to that class when queried. Before we get into how we do that, let’s put what we’re working with on the table first.

We’ll be working with two managed classes: one which provides the point of connection to Office (Connect) and one which extends the Office ribbon (Ribbon).

public sealed class Connect : IDTExtensibility2
public sealed class Ribbon : IRibbonExtensibility

With these two classes, we can separate our connection logic from our ribbon logic. The tough part is getting Office to make use of our Ribbon class; as was stated previously, by default, the COM shim wizards require the Connect class to implement IRibbonExtensibility in addition to IDTExtensibility2 in order for us to be able to influence the Office ribbon.

So, naturally, the changes we’ll need to make will be to our unmanaged COM shim. The various file/class names I’ll be referencing here should be the ones generated by the Office COM shim wizard; if I’m making references to files that appear wholly foreign to you, let me know in the comments, as there is a chance that I’ve renamed them.

The first file we’ll be modifying is the header file for our outer aggregator (IOuterAggregator). This is the interface implemented by our ConnectProxy class and provided to a managed component, so that the managed component can supply the shim with references to the various managed instances the shim will either use in response to queries or forward those queries to.

By default, the outer aggregator is used to supply a reference to only a single managed component, that being our Connect class. We need to be able to supply an additional reference to our Ribbon class, so we modify its only method’s signature to accept an additional IUnknown.

__interface __declspec(uuid("7b70c487-b741-4973-b915-c812a91bdf63"))
IOuterAggregator : public IUnknown
{
	HRESULT __stdcall SetInnerPointers(IUnknown *pUnkRibbon, IUnknown *pUnkBlind);
};

Don’t forget that there is also a managed COM import declaration of this type in either your add-in or (if you didn’t combine the two) your external “managed aggregator”. This will also need to be updated.

internal interface IOuterAggregator
{
    void SetInnerPointers(IntPtr pUnkRibbon, IntPtr pUnkBlind);
}

The next file we’ll be modifying is the header file for the class that acts as the proxy between Office and our managed components (ConnectProxy). The first modification we need to make is the addition of a separate pointer that will hold (indirectly of course) a reference to our managed Ribbon instance. We can simply add a new IUnknown declaration to the private section of the header file. In the end, we should have something like the following:

    IDTExtensibility2 *_pConnect;
    CLRLoader *_pCLRLoader;
    IUnknown *_pUnkBlind;
    IUnknown *_pUnkRibbon;
};

OBJECT_ENTRY_AUTO(__uuidof(ConnectProxy), ConnectProxy)

Above, we can see the new IUnknown declaration for our ribbon. And before you yell at me about using Hungarian notation, know that the code generated by the wizard uses COM notation, and I continue to do so as well so as to remain consistent.

Next we need to make some changes to the COM map declared in this header file. Normally, there is no mention of IRibbonExtensibility in our COM map, even if the COM shim wizard has been configured to support it, and that’s because of the following line:

	COM_INTERFACE_ENTRY_AGGREGATE_BLIND(_pUnkBlind)

This macro essentially causes queries for IIDs that do not match previous COM map entries to be forwarded to the provided pointer. So any queries for IRibbonExtensibility would hit up _pUnkBlind, which essentially points to the same thing pointed to by _pConnect.

Since we want ribbon-related queries forwarded to our _pUnkRibbon instance, we’ll need to add it to the COM map. However, just adding an entry for IRibbonExtensibility is not enough; an entry for the IDispatch interface is also required. This is because the IDispatch interface provides the mechanism used when callbacks are attached to the various elements of a ribbon. Office sees that it needs to call a method named GetButtonLabel in order to get a button’s label, so it will query for IDispatch and use that to find a method named GetButtonLabel. Remember, we’re in unmanaged land; there’s no Reflection API to help us here.

Here’s what our COM map will look like when we’re done (note that the new aggregate entries must come before the blind entry, or ribbon-related queries would never reach them):

BEGIN_COM_MAP(ConnectProxy)
	COM_INTERFACE_ENTRY(IDTExtensibility2)
	COM_INTERFACE_ENTRY_AGGREGATE(__uuidof(IRibbonExtensibility), _pUnkRibbon)
	COM_INTERFACE_ENTRY_AGGREGATE(__uuidof(IDispatch), _pUnkRibbon)
	COM_INTERFACE_ENTRY_AGGREGATE_BLIND(_pUnkBlind)
END_COM_MAP()

The final change we need to make to this header file is related to the changes we made to the IOuterAggregator interface. Because ConnectProxy implements said interface, we need to propagate the changes we made to it to the proxy’s header file.

STDMETHOD(SetInnerPointers)(IUnknown* pUnkRibbon, IUnknown* pUnkBlind);

That’s it as far as the header file is concerned. Next, we need to carry those changes over to the actual class file. We added a new member to the class, so be sure to initialize the _pUnkRibbon pointer to null in the constructor’s initializer list, and to add the necessary cleanup logic for it in the FinalRelease method.

Other than those routine concerns, the main changes will be to the SetInnerPointers method. It needs to reflect the updated declaration we committed to the IOuterAggregator.h file, as well as store the additional ribbon pointer being provided to us.

HRESULT __stdcall ConnectProxy::SetInnerPointers(IUnknown* pUnkRibbon, IUnknown* pUnkBlind)
{
    if (pUnkRibbon == NULL || pUnkBlind == NULL)
        return E_POINTER;

    if (_pUnkRibbon != NULL || _pUnkBlind != NULL)
        return E_UNEXPECTED;

    _pUnkRibbon = pUnkRibbon;
    _pUnkBlind = pUnkBlind;

    return S_OK;
}

The last class we need to change is our managed implementation of the IInnerAggregator interface. This is where the pointer to our now separate ribbon type will get provided to the shim, so we need to add the code required to do this.

internal sealed class InnerAggregator : IInnerAggregator
{
    /// <inheritdoc/>
    public void CreateAggregatedInstance(IOuterAggregator outerObject)
    {
        IntPtr pOuter = IntPtr.Zero;
        IntPtr pBlindInner = IntPtr.Zero;
        IntPtr pRibbonInner = IntPtr.Zero;

        try
        {
            pOuter = Marshal.GetIUnknownForObject(outerObject);

            Connect connect = new Connect();
            Ribbon ribbon = new Ribbon();

            pBlindInner = Marshal.CreateAggregatedObject(pOuter, connect);
            pRibbonInner = Marshal.CreateAggregatedObject(pOuter, ribbon);

            outerObject.SetInnerPointers(pRibbonInner, pBlindInner);
        }
        finally
        {
            if (pOuter != IntPtr.Zero)
                Marshal.Release(pOuter);
            if (pBlindInner != IntPtr.Zero)
                Marshal.Release(pBlindInner);
            if (pRibbonInner != IntPtr.Zero)
                Marshal.Release(pRibbonInner);
        }
    }
}


And that does it. Any ribbon-related queries made by Office should be correctly redirected to your managed Ribbon class, and connection-related activities will continue to be covered by the managed Connect class. No additional registry entries are required; everything is handled by the code we added. Enjoy slightly neater, more compartmentalized Office add-in code.


In a previous article I wrote, I introduced a way to compare expressions on the basis of their makeup as opposed to simple reference equality checks. This functionality was provided in the form of an IEqualityComparer<T> implementation, which itself was supported by several components.

Like all equality comparers, our expression equality comparer must be able to perform two functions:

  1. Determine the equality between two given expression instances
  2. Generate a hash code for a given expression instance

The previous article covered the first of these jobs; in this article, we’ll be covering the hash code generation aspect of the comparer.

Revisiting Our Equality Comparer

Just to make sure we’re all on board in regards to how and where our hash code generator will be used, let’s take a second to revisit the equality comparer presented in the previous article.

/// <summary>
/// Provides methods to compare <see cref="Expression"/> objects for equality.
/// </summary>
public sealed class ExpressionEqualityComparer : IEqualityComparer<Expression>
{
    private static readonly ExpressionEqualityComparer _Instance = new ExpressionEqualityComparer();

    /// <summary>
    /// Gets the default <see cref="ExpressionEqualityComparer"/> instance.
    /// </summary>
    public static ExpressionEqualityComparer Instance
    {
        get { return _Instance; }
    }

    /// <inheritdoc/>
    public bool Equals(Expression x, Expression y)
    {
        return new ExpressionComparison(x, y).ExpressionsAreEqual;
    }

    /// <inheritdoc/>
    public int GetHashCode(Expression obj)
    {
        return new ExpressionHashCodeCalculator(obj).Output;
    }
}

As we can see above, the brunt of the equality comparison work is performed by the ExpressionComparison class, which is a type that we covered in the first article in this series.

If you look at the code for the ExpressionComparison type, you’ll see that it derives from the .NET-provided ExpressionVisitor class. ExpressionComparison was subclassed from ExpressionVisitor because the visitor infrastructure naturally mirrors the tree structure of the expressions being compared.

In order to take into account all the various types of expressions and their properties, we needed to override the majority of the virtual (Visit[x]) methods exposed by ExpressionVisitor. We did not have to override all of them, only the ones targeting expression types whose makeup was unique among all other types.

Just like we did with ExpressionComparison, our ExpressionHashCodeCalculator will also subclass ExpressionVisitor, and it will behave in much the same way, except that it will calculate a running total of the hash codes of all the various properties significant to the given type of expression.

Expression Hash Code Calculator

Now, let’s get into the meat of the matter. As usual, I need to state a disclaimer regarding the usage of the code before we go over it. Although I’ve personally tested all parts of the code, I would encourage you to do so yourself before simply plopping it into whatever you’re working on.

This code has received a fair amount of scrutiny and is used in some important parts of a larger project I’ve been authoring. In fact, the hash code generation aspect of my comparer rests at the heart of the reasons why I started out on this endeavor (it was my wish to be able to use expressions as keys in a dictionary).

We’ll first take a look at the code for the ExpressionHashCodeCalculator type, and then discuss its merits afterwards.


Update: Thanks to Denis in the comments for pointing out that there was a lack of support for constant collections; code has now been updated to support constant collections.

/// <summary>
/// Provides a visitor that calculates a hash code for an entire expression tree.
/// This class cannot be inherited.
/// </summary>
public sealed class ExpressionHashCodeCalculator : ExpressionVisitor
{
    /// <summary>
    /// Initializes a new instance of the <see cref="ExpressionHashCodeCalculator"/> class.
    /// </summary>
    /// <param name="expression">The expression tree to walk when calculating the hash code.</param>
    public ExpressionHashCodeCalculator(Expression expression)
    {
        Visit(expression);
    }

    /// <summary>
    /// Gets the calculated hash code for the expression tree.
    /// </summary>
    public int Output
    { get; private set; }

    /// <summary>
    /// Calculates the hash code for the common <see cref="Expression"/> properties offered by the provided
    /// node before dispatching it to more specialized visit methods for further calculations.
    /// </summary>
    /// <inheritdoc/>
    public override Expression Visit(Expression node)
    {
        if (null == node)
            return null;

        Output += node.GetHashCode(node.NodeType, node.Type);

        return base.Visit(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitBinary(BinaryExpression node)
    {
        Output += node.GetHashCode(node.Method, node.IsLifted, node.IsLiftedToNull);

        return base.VisitBinary(node);
    }

    /// <inheritdoc/>
    protected override CatchBlock VisitCatchBlock(CatchBlock node)
    {
        Output += node.GetHashCode(node.Test);

        return base.VisitCatchBlock(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitConstant(ConstantExpression node)
    {
        IEnumerable nodeSequence = node.Value as IEnumerable;

        if (null == nodeSequence)
            Output += node.GetHashCode(node.Value);
        else
        {
            foreach (object item in nodeSequence)
                Output += node.GetHashCode(item);
        }

        return base.VisitConstant(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitDebugInfo(DebugInfoExpression node)
    {
        Output += node.GetHashCode(node.Document,
                                   node.StartLine,
                                   node.StartColumn,
                                   node.EndLine,
                                   node.EndColumn);

        return base.VisitDebugInfo(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitDynamic(DynamicExpression node)
    {
        Output += node.GetHashCode(node.Binder, node.DelegateType);

        return base.VisitDynamic(node);
    }

    /// <inheritdoc/>
    protected override ElementInit VisitElementInit(ElementInit node)
    {
        Output += node.GetHashCode(node.AddMethod);

        return base.VisitElementInit(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitGoto(GotoExpression node)
    {
        Output += node.GetHashCode(node.Kind);

        return base.VisitGoto(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitIndex(IndexExpression node)
    {
        Output += node.GetHashCode(node.Indexer);

        return base.VisitIndex(node);
    }

    /// <inheritdoc/>
    protected override LabelTarget VisitLabelTarget(LabelTarget node)
    {
        Output += node.GetHashCode(node.Name, node.Type);

        return base.VisitLabelTarget(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitLambda<T>(Expression<T> node)
    {
        Output += node.GetHashCode(node.Name, node.ReturnType, node.TailCall);

        return base.VisitLambda(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitMember(MemberExpression node)
    {
        Output += node.GetHashCode(node.Member);

        return base.VisitMember(node);
    }

    /// <inheritdoc/>
    protected override MemberBinding VisitMemberBinding(MemberBinding node)
    {
        Output += node.GetHashCode(node.BindingType, node.Member);

        return base.VisitMemberBinding(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        Output += node.GetHashCode(node.Method);

        return base.VisitMethodCall(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitNew(NewExpression node)
    {
        Output += node.GetHashCode(node.Constructor);

        return base.VisitNew(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitParameter(ParameterExpression node)
    {
        Output += node.GetHashCode(node.IsByRef);

        return base.VisitParameter(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitSwitch(SwitchExpression node)
    {
        Output += node.GetHashCode(node.Comparison);

        return base.VisitSwitch(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitTypeBinary(TypeBinaryExpression node)
    {
        Output += node.GetHashCode(node.TypeOperand);

        return base.VisitTypeBinary(node);
    }

    /// <inheritdoc/>
    protected override Expression VisitUnary(UnaryExpression node)
    {
        Output += node.GetHashCode(node.IsLifted, node.IsLiftedToNull, node.Method);

        return base.VisitUnary(node);
    }
}

As you can see from the above code, the general method for calculating the hash code of a given expression is to visit all of its constituent parts and generate hash codes from the properties significant to each part.

The GetHashCode methods being invoked in the code are not native to the types they are being invoked on. Rather, they are extension methods, and are something I talked about in another article I wrote.

Each visit tacks the hash codes of the relevant properties onto the Output property, which holds the running total for our hash code. The calculation is complete once the tree walk initiated in the constructor returns.
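The running-total scheme can be illustrated with a minimal, hypothetical visitor; this is a deliberately stripped-down sketch (it sums only the NodeType and Type hash codes of each node), not the full calculator above:

```csharp
using System;
using System.Linq.Expressions;

// A minimal illustration of the running-total scheme: every node visited
// contributes the hash codes of its NodeType and Type to an accumulating
// Output property. The real calculator also hashes type-specific properties.
public sealed class MiniHashVisitor : ExpressionVisitor
{
    public int Output { get; private set; }

    public MiniHashVisitor(Expression expression)
    {
        Visit(expression);
    }

    public override Expression Visit(Expression node)
    {
        if (node == null)
            return null;

        unchecked
        {
            Output += node.NodeType.GetHashCode();
            Output += node.Type.GetHashCode();
        }

        return base.Visit(node);
    }
}

public static class Demo
{
    public static void Main()
    {
        Expression<Func<int, int>> first = x => x + 1;
        Expression<Func<int, int>> second = y => y + 1;
        Expression<Func<int, int>> third = z => z * 2;

        int firstHash = new MiniHashVisitor(first).Output;
        int secondHash = new MiniHashVisitor(second).Output;
        int thirdHash = new MiniHashVisitor(third).Output;

        // Structurally identical trees yield identical running totals...
        Console.WriteLine(firstHash == secondHash);  // True
        // ...while a differently shaped tree will differ here.
        Console.WriteLine(firstHash != thirdHash);   // True
    }
}
```

Note that the parameter names x and y do not affect the result; only the shape and types of the trees contribute, which is exactly the behavior we want from a structural comparer.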

There are many virtual Visit[x] methods offered by the ExpressionVisitor type; as I stated in the first article of this series, .NET Framework 4.0 added a large number of new expression types.

When creating our ExpressionComparison class, we overrode many of these methods, but only the ones that held some bearing on the actual shape of the expression as far as an equality check was concerned. The same applies to our hash code calculator: it visits many of the same parts as the equality comparer, with some differences; namely, some overrides found in ExpressionComparison are not found in ExpressionHashCodeCalculator, and vice versa.

The reason for not overriding a particular virtual method typically boils down to the expression type offering no hash-worthy properties beyond those already handled by another, more centrally invoked override.

For example, one virtual method not overridden is the VisitBlock method, which accepts a single BlockExpression parameter. If we look at the BlockExpression type, we’ll notice that it offers no additional properties beyond what’s offered by the base Expression type. Well…at least except for the Result property. But even the presence of this property is not cause enough for us to override the method, because the Result property (which is itself an Expression) is visited by the base VisitBlock implementation, and would therefore end up being visited by another block of our code anyway.

There are a few more methods not included, but I’ll leave them as an exercise for the reader.

If anyone finds any types of expressions that the above code does not account for, I’d appreciate your input. When constructing this code, however, I tried to be fairly exhaustive in my efforts.


This article is meant to serve as a reference for a particular set of functions that may be present in code snippets found in subsequent articles.

Every so often one may find themselves tasked with writing hash code generation algorithms for a particular type of object. This sort of requirement typically arises whenever we’re authoring either a value type or an implementation of an interface which requires such functionality (e.g. IEqualityComparer<T>).

While the way hash codes end up being calculated tends to differ between object types, hash code generation mechanisms can only be considered proper if the following requisites are met:

  1. If two objects are deemed equal, then the hash code generation mechanism should yield an identical value for each object.
  2. Given an instance of an object, its hash code should never change (thus using mutable objects as hash keys and actually mutating them is generally a bad idea).
  3. Although it is acceptable for the same hash code to be generated for object instances which are not equal, the way in which the hash code is calculated should be such that these kinds of collisions are as infrequent as possible.
  4. Calculation of the hash code should not be an expensive endeavor.
  5. The generation mechanism should never throw an exception.
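Requisite #2 is particularly easy to violate by accident. Here is a quick, self-contained sketch (the type and property names are hypothetical) of what goes wrong when a mutable object used as a hash key is actually mutated:

```csharp
using System;
using System.Collections.Generic;

// A mutable type whose hash code is derived from mutable state -- the
// combination that requisite #2 warns against.
public sealed class MutableKey
{
    public int Value { get; set; }

    public override int GetHashCode() => Value;

    public override bool Equals(object obj) =>
        obj is MutableKey other && other.Value == Value;
}

public static class Program
{
    public static void Main()
    {
        var key = new MutableKey { Value = 1 };
        var set = new HashSet<MutableKey> { key };

        Console.WriteLine(set.Contains(key));  // True

        // Mutating the key changes its hash code, so the set now searches
        // the wrong bucket; the stored object is effectively lost.
        key.Value = 2;
        Console.WriteLine(set.Contains(key));  // False
    }
}
```

The set still holds a reference to the object, but no lookup keyed on its new state (or its old one) will ever find it again.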

Naturally, it makes sense to abstract the steps involved in satisfying such requirements into an independent function which can be used by any kind of object. Unfortunately, for the most part, it simply isn’t possible to guarantee the satisfaction of points #1 and #2 with common code. We can, however, build something that contributes towards the success of #3.

To do this, we can create a set of functions that calculate a hash code based on the property values provided to them, with each function differing in the number of properties that they accept. A static helper class could serve as a home for these functions, but since I like to avoid static helper classes whenever possible, I thought it prudent to craft them as extension methods instead, such as the one shown below:

/// <summary>
/// Calculates the hash code for an object using two of the object's properties.
/// </summary>
/// <param name="value">The object we're creating the hash code for.</param>
/// <typeparam name="TFirstProperty">The type of the first property the hash is based on.</typeparam>
/// <typeparam name="TSecondProperty">The type of the second property the hash is based on.</typeparam>
/// <param name="firstProperty">The first property of the object to use in calculating the hash.</param>
/// <param name="secondProperty">The second property of the object to use in calculating the hash.</param>
/// <returns>
/// A hash code for <c>value</c> based on the values of the provided properties.
/// </returns>
/// ReSharper disable UnusedParameter.Global
public static int GetHashCode<TFirstProperty,TSecondProperty>([NotNull]this object value,
                                                              TFirstProperty firstProperty,
                                                              TSecondProperty secondProperty)
{
    unchecked
    {
        int hash = 17;

        if (!firstProperty.IsNull())
            hash = hash * 23 + firstProperty.GetHashCode();
        if (!secondProperty.IsNull())
            hash = hash * 23 + secondProperty.GetHashCode();

        return hash;
    }
}

Note: IsNull is another extension method that properly deals with checking if an instance is null when we don’t know whether we’re dealing with a value or reference type.

We’re using two prime numbers (17 and 23) as our seed and multiplier to aid in the effort of reducing collisions. It can be argued that they are not optimal; however, that would most likely end up being a weak argument, and such a discussion is not within the scope of this article. The values originate from other sources out there that address the issue of collisions. We have the code inside an unchecked block so that overflows do not result in exceptions.

Again, this could easily be placed into a helper class instead; regardless, I wanted to tap into the portability offered by extension methods. Also, given that all objects have a GetHashCode method, I don’t view the extension of the object type in this manner as harmful, even if we aren’t actually using the source object itself.
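To see the pattern in action, here is a hedged sketch of how a simple type might lean on such a method for its Equals/GetHashCode pair. The Person type is hypothetical, and the extension shown is a simplified, self-contained rendition of the two-property overload (plain null checks stand in for the IsNull extension):

```csharp
using System;

public static class HashCodeExtensions
{
    // Simplified stand-in for the article's two-property overload:
    // seed 17, multiplier 23, null-safe, overflow-safe via unchecked.
    public static int GetHashCode<TFirst, TSecond>(this object value,
                                                   TFirst firstProperty,
                                                   TSecond secondProperty)
    {
        unchecked
        {
            int hash = 17;

            if (firstProperty != null)
                hash = hash * 23 + firstProperty.GetHashCode();
            if (secondProperty != null)
                hash = hash * 23 + secondProperty.GetHashCode();

            return hash;
        }
    }
}

public sealed class Person
{
    public Person(string name, int age) { Name = name; Age = age; }

    public string Name { get; }
    public int Age { get; }

    public override bool Equals(object obj) =>
        obj is Person other && other.Name == Name && other.Age == Age;

    // The two-argument call resolves to the extension method above.
    public override int GetHashCode() => this.GetHashCode(Name, Age);
}

public static class Program
{
    public static void Main()
    {
        var first = new Person("Ada", 36);
        var second = new Person("Ada", 36);

        // Equal objects must yield equal hash codes (requisite #1).
        Console.WriteLine(first.Equals(second));                         // True
        Console.WriteLine(first.GetHashCode() == second.GetHashCode()); // True
    }
}
```

The same shape scales to the larger overloads; the only real design question is whether the portability of an extension method outweighs a conventional static helper.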

I have about six of these methods, with the number of parameters accepted ranging from one to six. All of this is neither earth shattering nor rocket science. As I stated at the beginning, I’m mainly sharing this with you, the reader, so I can refer to this if questions arise from their usage in articles subsequent to this one.


As I have stated elsewhere on this site, it is my intent to limit the scope of the articles I write to areas that fall within my field of expertise. Once again, however, I desire to break away from my usual routine to cover another legislative issue making the rounds in the current events sphere; namely, the bill recently passed by Congress to avert the “fiscal cliff” (H.R. 8).

Whenever a great deal of press is being generated by a piece of legislation, I always find it most informative and therapeutic to ignore most of the chatter and read the raw text of the bill myself. Doing so allows one to reach one's own conclusions regarding the item(s) at hand.

If the first paragraph has not made this clear to you already: know that I am neither a lawyer nor an economist.

Objective and Scope

My intention at the outset of analyzing the newly made law was to determine exactly how the “fiscal cliff” was averted. I had studied this issue before, when the Budget Control Act of 2011 was passed, in order to understand the inner workings of the sequestration mechanism, and I had gained an adequate understanding of it. Thus, as soon as H.R. 8 was passed on January 2nd, I wanted to find answers to the following questions:

  1. Were the automatic sequestration cuts avoided by reaching the budget goal within the parameters set forth in the Budget Control Act of 2011?
  2. Were the automatic sequestration cuts avoided simply by amending the nature of the budget goal’s enforcement?
  3. How does H.R. 8 impact the long term deficit reduction goal?


The specific piece of legislation here is titled the American Taxpayer Relief Act of 2012. It is a large document; however, only a small portion of it is relevant to the objective at hand. Specifically, we’re going to be taking a look at the majority of Section 901, which makes modifications to various aspects of the sequestration mechanism.

It is an amendment to the same section of the Balanced Budget and Emergency Deficit Control Act of 1985 that the Budget Control Act of 2011 amended in order to (in part) create the much talked about automatic deficit reductions. Of course, because it is an amendment, simply reading it on its own doesn’t tell you much, as you will immediately see.

Total Deficit Reduction Calculation Changes

The first paragraph we’ll be poring over is Section 901(a), which makes modifications to the enforcement mechanism of the automatic sequestration cuts.

Section 901(a) of the American Taxpayer Relief Act of 2012
(a) Adjustment- Section 251A(3) of the Balanced Budget and Emergency Deficit Control Act of 1985 is amended--
 (1) in subparagraph (C), by striking `and' after the semicolon;
 (2) in subparagraph (D), by striking the period and inserting` ; and'; and
 (3) by inserting at the end the following:
 `(E) for fiscal year 2013, reducing the amount calculated under subparagraphs (A) through (D) by $24,000,000,000.'.

Hmm, although incremental changelogs are swell, this isn’t very helpful without seeing the text that’s being amended. Remember, Section 251A didn’t even exist in the Balanced Budget and Emergency Deficit Control Act of 1985 until the Budget Control Act of 2011 was passed. Let’s take a look, then, at the effective text of Section 251A(3) as it existed before this most recent law:

Section 251A(3) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 302(a) of the Budget Control Act of 2011
(3) CALCULATION OF TOTAL DEFICIT REDUCTION- OMB shall calculate the amount of the deficit reduction required by this section for each of fiscal years 2013 through 2021 by--
 (A) starting with $1,200,000,000,000;
 (B) subtracting the amount of deficit reduction achieved by the enactment of a joint committee bill, as provided in section 401(b)(3)(B)(i)(II) of the Budget Control Act of 2011;
 (C) reducing the difference by 18 percent to account for debt service; and
 (D) dividing the result by 9.

So, because of the amendment put in place by the American Taxpayer Relief Act, this text now looks like:

Section 251A(3) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 901(a) of the American Taxpayer Relief Act of 2012
(3) CALCULATION OF TOTAL DEFICIT REDUCTION- OMB shall calculate the amount of the deficit reduction required by this section for each of fiscal years 2013 through 2021 by--
 (A) starting with $1,200,000,000,000;
 (B) subtracting the amount of deficit reduction achieved by the enactment of a joint committee bill, as provided in section 401(b)(3)(B)(i)(II) of the Budget Control Act of 2011;
 (C) reducing the difference by 18 percent to account for debt service;
 (D) dividing the result by 9; and
 (E) for fiscal year 2013, reducing the amount calculated under subparagraphs (A) through (D) by $24,000,000,000.

Alright. So, prior to this most recent law being passed, the total annual deficit reduction would have been equal to 82% of the difference between $1,200,000,000,000 and an unknown amount which we have to calculate, all divided by 9 (the number of fiscal years, 2013 through 2021, covered by this deficit reduction plan). However, with the American Taxpayer Relief Act of 2012, that value is being reduced by $24,000,000,000 for just this year.

In order to figure out the significance of the $24,000,000,000, we’ll need to fill in the rest of this little puzzle here by taking a look at Section 401(b)(3)(B)(i)(II).

See? Isn’t reading the law fun? It’s like a bunch of GOTO statements that control your life!

Section 401(b)(3)(B) of the Budget Control Act of 2011
 (i) IN GENERAL- Not later than November 23, 2011, the joint committee shall vote on--
 (I) a report that contains a detailed statement of the findings, conclusions, and recommendations of the joint committee and the estimate of the Congressional Budget Office required by paragraph (5)(D)(ii); and
 (II) proposed legislative language to carry out such recommendations as described in subclause (I), which shall include a statement of the deficit reduction achieved by the legislation over the period of fiscal years 2012 to 2021.

Ah yes, we can see the Joint Select Committee on Deficit Reduction being referred to in subclause (II). Well, as we all know, the committee failed to reach an agreement and was terminated January 31st, 2012. So, I guess that value comes out to be “0”.

Using this knowledge, we now know that the previously required annual deficit reduction was in the ballpark of $109,333,333,333. With the recent law being passed, the figure for fiscal year 2013 has dropped to around $85,333,333,333.
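
To spell out the arithmetic behind these figures, using the formula in Section 251A(3) with the joint committee’s contribution set to zero:

```latex
\begin{aligned}
\text{annual reduction} &= 0.82 \times (\$1{,}200{,}000{,}000{,}000 - \$0) \div 9 \approx \$109{,}333{,}333{,}333 \\
\text{FY 2013 reduction} &= \$109{,}333{,}333{,}333 - \$24{,}000{,}000{,}000 \approx \$85{,}333{,}333{,}333
\end{aligned}
```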

So we now know that the part of the law which essentially both triggered and determined the amount of the automatic cuts was changed to be “less severe”. While interesting, this does not, at least by itself, tell us how the “fiscal cliff” was averted. So we read on.

Postponement of the Sequestration Date

The next part of Section 901 is purely temporal in nature.

Section 901(b) and 901(c) of the American Taxpayer Relief Act of 2012
(b) After Session Sequester- Notwithstanding any other provision of law, the fiscal year 2013 spending reductions required by section 251(a)(1) of the Balanced Budget and Emergency Deficit Control Act of 1985 shall be evaluated and implemented on March 27, 2013.
(c) Postponement of Budget Control Act Sequester for Fiscal Year 2013- Section 251A of the Balanced Budget and Emergency Deficit Control Act of 1985 is amended--
 (1) in paragraph (4), by striking `January 2, 2013' and inserting `March 1, 2013'; and
 (2) in paragraph (7)(A), by striking `January 2, 2013' and inserting `March 1, 2013'.

Although nothing in the above paragraphs explicitly states that the point in time at which the automatic cuts would occur has been delayed, the amending of dates in the law would certainly seem to indicate so. The first paragraph mentions the date March 27th, 2013, and the second paragraph throws March 1st, 2013 at us.

The former is applied “globally”, whereas the latter replaces “January 2, 2013” in the following texts:

Section 251A(4) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 302(a) of the Budget Control Act of 2011
(4) ALLOCATION TO FUNCTIONS.—On January 2, 2013, for fiscal year 2013, and in its sequestration preview report for fiscal years 2014 through 2021 pursuant to section 254(c), OMB shall allocate half of the total reduction calculated pursuant to paragraph (3) for that year to discretionary appropriations and direct spending accounts within function 050 (defense function) and half to accounts in all other functions (nondefense functions).

Section 251A(7)(A) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by Section 302(a) of the Budget Control Act of 2011
 (A) FISCAL YEAR 2013.—On January 2, 2013, for fiscal year 2013, OMB shall calculate and the President shall order a sequestration, effective upon issuance and under the procedures set forth in section 253(f), to reduce each account within the security category or nonsecurity category by a dollar amount calculated by multiplying the baseline level of budgetary resources in that account at that time by a uniform percentage necessary to achieve—

So it is now quite apparent that the new law has changed the time at which the automatic cuts would occur for this year (and this year alone). While the significance of throwing March 27th, 2013 into the mix is unclear to me, it is very clear that the “fiscal cliff” has not been completely averted; rather, the point in time at which its true burden could be felt has merely been moved to March 1st, 2013.

Instead of happening now, the cuts, if they do happen, would occur on the aforementioned date. This would seem to further indicate that some additional legislation is required in order to avoid the automatic cuts; something which, in the political landscape today, is hardly guaranteed.

One Year Stagnation of Limits

Moving on, we see some adjustments being made to the discretionary appropriation limits for the current and ensuing fiscal years.

Section 901(d) of the American Taxpayer Relief Act of 2012
(d) Additional Adjustments-
 (1) SECTION 251- Paragraphs (2) and (3) of section 251(c) of the Balanced Budget and Emergency Deficit Control Act of 1985 are amended to read as follows:
 `(2) for fiscal year 2013--
 `(A) for the security category, as defined in section 250(c)(4)(B), $684,000,000,000 in budget authority; and
 `(B) for the nonsecurity category, as defined in section 250(c)(4)(A), $359,000,000,000 in budget authority;
 `(3) for fiscal year 2014--
 `(A) for the security category, $552,000,000,000 in budget authority; and
 `(B) for the nonsecurity category, $506,000,000,000 in budget authority;'.

You can find the dollar amounts being replaced by referring to the Budget Control Act of 2011, which was responsible for adding Section 251(c) to the 1985 bill.

Section 251(c) of the Balanced Budget and Emergency Deficit Control Act of 1985 as amended by the Budget Control Act of 2011
(c) DISCRETIONARY SPENDING LIMIT.—As used in this part,
 the term ‘discretionary spending limit’ means—
 (1) with respect to fiscal year 2012—
   (A) for the security category, $684,000,000,000 in new
budget authority; and
   (B) for the nonsecurity category, $359,000,000,000 in
new budget authority;
 (2) with respect to fiscal year 2013—
   (A) for the security category, $686,000,000,000 in new
budget authority; and
   (B) for the nonsecurity category, $361,000,000,000 in
new budget authority;

The most recent amendment effectively renders the discretionary spending limit in fiscal year 2013 identical to the limit in effect for 2012. I cannot offer any insight as to the “why” behind this; however, I can add that the spending limit, in the original text of the 2011 act, was meant to be increased each year through 2021.

It also defines the limits for the individual categories for fiscal year 2014, which previously were left as a sum total in the 2011 act. They appear to further reflect a shrinking military budget and growing spending on everything else.

Voodoo Magic

Finally, the last bit of the recently passed American Taxpayer Relief Act of 2012 that I’ll cover deals with additional adjustments to the limits, made in a very interesting way.

Section 901(e) of the American Taxpayer Relief Act of 2012
(e) 2013 Sequester- On March 1, 2013, the President shall order a sequestration for fiscal year 2013 pursuant to section 251A of the Balanced Budget and Emergency Deficit Control Act of 1985, as amended by this section, pursuant to which, only for the purposes of the calculation in sections 251A(5)(A), 251A(6)(A), and 251A(7)(A), section 251(c)(2) shall be applied as if it read as follows:
 `(2) For fiscal year 2013--
 `(A) for the security category, $544,000,000,000 in budget authority; and
 `(B) for the nonsecurity category, $499,000,000,000 in budget authority;'.

A bit of voodoo going on here. The numbers above reflect the amount of spending that would be safe from being automatically cut in both the “nonsecurity category” and “security category”. The “security category” refers to all discretionary appropriations in budget function 050 (National Defense), whereas the “nonsecurity category” is every other category.

Even though the limits effective for 2013 now reflect the amendments made to Section 251(c), this last amendment here instructs us to ignore those limits for several types of calculations. Instead of $684 billion for defense spending, $544 billion is now the limit. Regarding everything else, there is now a $499 billion limit instead of $361 billion. This seems to be a rather drastic reduction in the ratio of defense to all other spending.

What to Take Away

One would do well to draw their own conclusions from the above data. For myself, I felt that my questions were, for the most part, answered, and that I gained the following insights:

  1. The automatic cuts were not averted under the original parameters of the 2011 act.
  2. The automatic cuts were averted, for now, simply by delaying the date at which they would occur.
  3. The amount of deficit reduction required in order to avoid the cuts for fiscal year 2013 is now around $85,333,333,333, or around a 22% reduction from what it previously was.
  4. Given that the required amount of cuts has been reduced by 22% for fiscal year 2013, it does not seem that H.R. 8 helps the overall deficit picture.
  5. The discretionary spending limits were amended to reflect the limits of last year in an attempt, I believe, to make up “lost ground” in the effort of reducing the deficit long term, although it would be preferable to have an official answer behind the adjustments as opposed to mere assumptions.
  6. The just mentioned spending limit adjustments, however, do not seem to actually reflect the effective limits, which are put in place by the magic of Section 901(e). The limits mentioned there seem to be the ones used in all the important calculations; I’m unsure exactly where they are not used.
  7. The limits mentioned in Section 901(e) are somewhat concerning, both in the manner in which they’re introduced and in the large cut being made to defense spending alongside the large allowance being granted to all other matters.

So, we’ve covered many things, and that’s just in matters concerning the sequestration mechanism itself. I’m sure there are many other conclusions one could derive from the other portions of the law as well, but the strongest impression I’m left with is that incentivizing lawmakers to behave a certain way in order to avoid punishments meted out by the law, the very thing they malleate day to day, appears to be a pointless exercise.


I’ve been swimming in a sea of COM Interop lately. Some time ago, I wrote an article that had some important tidbits regarding the nature of Runtime Callable Wrappers, or RCWs.

I thought I’d bring to the surface one of the more important questions I answered in that article, specifically: What actions, when executed, incur an increment to an RCW’s reference count?

RCW Reference Counting Rules

When dealing with COM Interop, we find ourselves no longer in the safe confines provided by the .NET garbage collector. Sure, garbage collection will still occur eventually, however great care must be taken if our software is both making use of various COM types and expected to still operate stably.

If our code happens to be making use of (or being made use of by) something like a COM automation server, where perhaps our product is a minor act in a larger show, this becomes even more important.

The following is a listing of the actions that DO cause the reference count on RCWs to be incremented:

  1. Requesting a COM Object from Another COM Object.
    1. Whenever we seek treasures from COM-land, the reference counting gods take note and record it in their books. This applies to objects returned by both methods and properties.
    2. The following are several examples of the reference count being incremented due to the calling of methods and properties (we’ll be using Outlook’s object model for these examples):
      Explorer first = Application.ActiveExplorer();
      Explorer second = Application.ActiveExplorer();
      Inspectors inspectors = Application.Inspectors;
    3. If I put one of these Explorer instances through Marshal.ReleaseComObject, you should expect to get a return value of 1, indicating one reference yet remains. And, in regards to the Outlook Object Model, I know that this will be the case.
    4. However, this will not necessarily occur with any other type library. Some COM objects, depending on their own internal design, will appear to allocate new memory each time you access a property. When this is the case, the .NET runtime will think that it is dealing with a never-before-seen COM object, and will generate a new RCW for it.
      1. What is most likely happening under the hood here is that the COM object, when returning some additional COM data, is actually returning a proxy to that data. When a type happens to do this, you will end up with multiple RCWs (indirectly) pointing to a single location in memory. When an RCW is released, it’ll clean up its proxy, but not until all of those are gone will the actual COM instance living in memory go bye-bye.
      2. Unfortunately, this revelation has ill tidings for the use of Marshal.FinalReleaseComObject. You cannot be assured you are actually releasing a COM object simply by using this method unless you know that the Interop you’re dealing with behaves in a manner allowing such a thing. This should lend credence to the idea that vigilance must be maintained when dealing with COM Interop.
  2. Handling Events Originating from a COM Object
    1. If you have an event handling method subscribed to an event published by a COM object, then any COM type parameters will have their reference counts incremented when that handler is called.
    2. The following is an example of such an event handler:
      private void HandleBeforeFolderSwitch(object newFolder, ref bool cancel)
    3. The above event handler handles the Explorer.BeforeFolderSwitch event. Although the first parameter is (for some reason) typed as object, it is actually a Folder. Because your event handler should only ever be called from COM-land itself, you should treat these objects as newly created or recently reference-count-incremented RCWs.
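
The two actions above can be balanced with explicit releases. The following is a minimal sketch (assuming the Outlook interop assemblies); names like _application are hypothetical stand-ins for your add-in’s own members:

```csharp
using System.Runtime.InteropServices;
using Microsoft.Office.Interop.Outlook;

// Requesting a COM object increments its RCW's count; release it when done.
private void ProbeActiveExplorer()
{
    Explorer explorer = _application.ActiveExplorer();

    try
    {
        // Work with the Explorer instance here...
    }
    finally
    {
        // Decrement the count we incurred by requesting the object.
        Marshal.ReleaseComObject(explorer);
    }
}

// Parameters arriving from COM-land had their RCW counts incremented on our behalf.
private void HandleBeforeFolderSwitch(object newFolder, ref bool cancel)
{
    Folder folder = (Folder) newFolder;

    try
    {
        // Inspect the folder being switched to here...
    }
    finally
    {
        Marshal.ReleaseComObject(folder);
    }
}
```

The try/finally shape matters: an exception thrown mid-method would otherwise leave the count elevated until the RCW is finalized.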

The following is a listing of the actions that DO NOT cause the reference count on RCWs to be incremented (obviously not exhaustive):

  1. Casting a COM Instance to a Different Interface Type
    1. If an object implements several COM interfaces, and you cast it from one to another, your RCW will not increment its count.
    2. Indeed, you can be sure that when you do make the cast, that what you are getting back is the very same RCW, yet without any count having been incremented.
    3. Here’s an example of such a cast, this time using Redemption’s object model:
      RDOMail message = session.GetMessageFromID(entryId, storeId);
      RDOAppointmentItem appointment = (RDOAppointmentItem) message;
    4. If you look at both the RDOMail and RDOAppointmentItem objects, you will see that they point to the same place in memory.
    5. If you pass one of the two items to a Marshal.ReleaseComObject call, you will also see that a 0 will be returned, indicating that there was only ever a reference count of one.
  2. Moving the COM Instance Around in your Managed Code
    1. This is an important one to know for the paranoid out there — and probably the one that most find themselves thinking about.
    2. Passing an instance of a COM type to another managed method does not increment its reference count.
    3. Storing a reference to the COM instance in a class member, stuffing it into a collection, etc., does not increment its reference count.
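
Putting the non-incrementing actions together (using Redemption’s object model, per the example above; ScheduleReminder and _pendingItems are hypothetical stand-ins):

```csharp
// Only the initial request from COM-land incremented the count.
RDOMail message = session.GetMessageFromID(entryId, storeId);   // count: 1

// Casting hands back the very same RCW; no increment.
RDOAppointmentItem appointment = (RDOAppointmentItem) message;  // still 1

// Neither passing the instance to a managed method...
ScheduleReminder(appointment);                                  // still 1

// ...nor storing a reference in a collection adds to the count.
_pendingItems.Add(appointment);                                 // still 1

// A single release therefore suffices; only one count ever existed.
int remaining = Marshal.ReleaseComObject(message);
Debug.Assert(0 == remaining);
```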

When working with .NET, it is lovely when everything we’re interacting with is rooted in .NET code; however, sometimes we need functionalities that can only be found in one or more COM types.

Something that should cross our minds when dealing with COM types is what the implications may be for deployment efforts. Depending on the specific COM type, its type library may not be a stock component of a Windows installation. If that’s the case (and sometimes, regardless of the case), we should all be prepared for the possibility of the following occurring when instantiating one of the COM types:

COMException: Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))

What Not to Do

Don’t let this exception get thrown without handling it (and I mean actually handling it, not just logging its occurrence and re-throwing it or some such).

Nothing is more confusing to your end users than COM-related errors (they tend to evoke enough confusion among developers themselves).

If you attempt to handle it and fail, then you will probably have to bubble up the exception (albeit without the HRESULT part; surely we can think of something more readable). But…at least you tried!

What to Do

First and foremost, and slightly off-topic, you can protect your product from almost ever encountering one of these errors by following the proper procedures during its deployment. When installing .NET components that are dependent on COM types, your installer should register those COM types (unless, for reasons legal or otherwise, you aren’t installing the COM libraries your product depends on).

Your installer should not register those COM types by making use of the self-registration capabilities exported by the COM libraries themselves (we will only be tapping into that when handling the above exception at run-time). Rather, we should know beforehand what entries need to be added into the registry, and have our installer manually do that. This is ancient wisdom shared among Those Who Know, and I shall leave it up to the reader to figure out how one goes about doing this.

If your .NET program is making use of one or more COM types, you should prepare for the possibility of the above exception occurring and go about solving it in the most direct manner: by registering the type!

You should do this even if you are protecting yourself during installation as described above, and even if the type originates from a library that comes standard on most machines. I don’t have any details for specific cases on hand, but I’ve seen Windows Update (or some other unknown externality) knock the registration out of some important (though I suppose, not critically important) COM types among end users of one product I deal with.
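
The handling itself can be sketched as follows. SomeComBackedType is a hypothetical interop type, and RegisterComLibrary stands in for whatever registration routine you employ:

```csharp
using System.Runtime.InteropServices;

// HRESULT raised when a COM class's registry entries are missing.
const uint REGDB_E_CLASSNOTREG = 0x80040154;

SomeComBackedType instance;

try
{
    instance = new SomeComBackedType();
}
catch (COMException ex)
{
    // Only handle the "Class not registered" failure; rethrow anything else.
    if (REGDB_E_CLASSNOTREG != (uint) ex.ErrorCode)
        throw;

    RegisterComLibrary("NameOfCOMLibrary.dll");

    // With the class registered, instantiation should now succeed.
    instance = new SomeComBackedType();
}
```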

There are two ways to go about registering the COM type:

  1. By creating a regsvr32.exe process with arguments supplied pointing it to the COM DLL file
  2. By doing what regsvr32.exe does with our own code

Clearly #2 is the winner here; I’d have to be completely out of other options before ever doing anything that requires the creation of a separate process.

If we’re going to do #2 though, we need to know a bit on how COM registration works. If you’ve ever developed your own COM library, you’re probably quite familiar with the process. But for everyone else, we’ll do a brief overview.

COM Registration in a Nutshell

There are many ways that the registration of COM type libraries can occur; however, we will be focusing on a specific aspect of COM registration: self-registration.

Most COM modules exhibit the capability of registering themselves. This is what regsvr32.exe taps into when doing its thing.

A COM module that can self register exports a number of functions, one of them being DllRegisterServer. We need to import this function, and execute it. Therefore, we need to design a .NET type that can do the following:

  1. Load the COM module, given a file name or path, using LoadLibrary.
  2. Import the exported DllRegisterServer function using GetProcAddress.
  3. Execute the imported function.
  4. Handle the result returned from step #3.

The rest of this article details the creation of such a class.

P/Invoke Imports

To accomplish what we’re looking to do, there are a number of unmanaged functions we need to make use of. Being the good developers that we are, we will be adding all these imports to our NativeMethods class.

First off, we will need some way to load the COM libraries, thus we need to import the LoadLibrary function:

/// <summary>
/// Loads the specified module into the address space of the calling process.
/// </summary>
/// <param name="fileName">The name of the module.</param>
/// <returns>If successful, a handle pointing to the loaded module; otherwise, an invalid handle.</returns>
[DllImport("kernel32", CharSet=CharSet.Unicode, SetLastError=true)]
internal static extern LibraryHandle LoadLibrary(string fileName);


LoadLibrary in NativeMethods.cs

Conversely, we’ll need a way to unload these modules, thus we need to import the FreeLibrary function:

/// <summary>
/// Frees the loaded dynamic-link library module and, if necessary, decrements its reference count.
/// </summary>
/// <param name="hModule">A handle to the loaded library module.</param>
/// <returns>If successful, a nonzero value; otherwise, zero.</returns>
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
[DllImport("kernel32", SetLastError=true)]
[return: MarshalAs(UnmanagedType.Bool)]
internal static extern bool FreeLibrary(IntPtr hModule);

FreeLibrary in NativeMethods.cs

Next up, we need to be able to import the exported DllRegisterServer function, thus we need to import the GetProcAddress function:

/// <summary>
/// Retrieves the address of an exported function or variable from the specified dynamic-link library.
/// </summary>
/// <param name="hModule">A handle to the loaded library module.</param>
/// <param name="lpProcName">The function name, variable name, or the function's ordinal value.</param>
/// <returns>If successful, the address of the exported function is returned; otherwise, a null pointer.</returns>
[DllImport("kernel32", CharSet=CharSet.Ansi, ExactSpelling=true, BestFitMapping=false, ThrowOnUnmappableChar=true, SetLastError=true)]
internal static extern IntPtr GetProcAddress(LibraryHandle hModule, [MarshalAs(UnmanagedType.LPStr)] string lpProcName);

GetProcAddress in NativeMethods.cs

You may have noticed a type that does not exist in your environment, LibraryHandle, which brings us to the next part of this article…

Level-0 Type: LibraryHandle

For our objective, we need to define a new level-0 type. If you don’t understand the meaning of the term level-0 type, then I suggest getting acquainted with the following article.

The level-0 type in particular that we need to define is one that will safely keep ahold of our loaded module handles. Also, as we should all know by now, these module handles need to be cleaned up by executing the FreeLibrary function.

The following is the definition of our type (and note, I am not adhering to the standard Microsoft uses where they prefix all these sorts of types with the word Safe. Frankly, although it may make sense for Microsoft, I would find it ridiculous to use it myself):

/// <summary>
/// Provides a level-0 type for library module handles.
/// </summary>
internal sealed class LibraryHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    /// <summary>
    /// Initializes a new instance of the <see cref="LibraryHandle"/> class.
    /// </summary>
    private LibraryHandle()
        : base(true)
    { }

    /// <inheritdoc/>
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    protected override bool ReleaseHandle()
    {
        return NativeMethods.FreeLibrary(handle);
    }
}


Level-1 Type: COMServer

With everything in place, we can now implement our COM registration utility. Note that the following code makes use of some in-house types and extension methods I’ve made; I haven’t the heart to change them, and they should be easy enough to figure out a substitute for:

/// <summary>
/// Provides an in-process server that provides COM interface implementations to clients.
/// </summary>
/// <remarks>
/// <para>
/// This class, as it is currently implemented, mainly handles the registration responsibilities of a COM
/// server, and thusly, in a sense, functions as a level-1 type for COM library handles.
/// </para>
/// <para>
/// Although this class provides COM server functionalities, it is obviously not an actual COM server (i.e. this
/// class will never handle calls made to <c>QueryInterface</c>).
/// </para>
/// </remarks>
public sealed class COMServer : IDisposable
{
    /// <summary>
    /// Instructs an in-process server to create its registry entries for all classes supported in a server
    /// module.
    /// </summary>
    /// <returns>The result of the operation.</returns>
    internal delegate ResultHandle DllRegisterServer();

    private readonly LibraryHandle _libraryHandle;

    /// <summary>
    /// Initializes a new instance of the <see cref="COMServer"/> class.
    /// </summary>
    /// <param name="libraryHandle">A handle to the COM library to provision.</param>
    private COMServer(LibraryHandle libraryHandle)
    {
        _libraryHandle = libraryHandle;
    }

    /// <summary>
    /// Creates an in-process server that will provide the COM interface implementations found in the
    /// specified COM library to the client.
    /// </summary>
    /// <param name="fileName">The name or path of the COM library module to load.</param>
    /// <returns>
    /// A <see cref="COMServer"/> instance representing a provisioning COM server of the library
    /// specified by <c>fileName</c>.
    /// </returns>
    public static COMServer Create([NotNullOrEmpty]string fileName)
    {
        LibraryHandle libraryHandle = NativeMethods.LoadLibrary(fileName);

        if (libraryHandle.IsInvalid)
        {
            int lastError = Marshal.GetLastWin32Error();

            switch (lastError)
            {
                case (int)ErrorCode.FileNotFound:
                    throw new FileNotFoundException(Strings.COMFileNotFound, fileName);
                case (int)ErrorCode.InvalidName:
                    throw new ArgumentException(Strings.COMInvalidSyntaxFormat.InvariantFormat(fileName),
                                                "fileName");
                case (int)ErrorCode.ModuleNotFound:
                    throw new ArgumentException(Strings.COMModuleNotFoundFormat.InvariantFormat(fileName),
                                                "fileName");
                default:
                    throw Marshal.GetExceptionForHR(Marshal.GetHRForLastWin32Error());
            }
        }

        return new COMServer(libraryHandle);
    }

    /// <summary>
    /// Instructs the server to create registry entries for all the classes it has loaded.
    /// </summary>
    public void Register()
    {
        DllRegisterServer registerServer = ImportRegistrationServer();

        ResultHandle hResult = registerServer();

        ValidateRegistration(hResult);
    }

    /// <inheritdoc/>
    public void Dispose()
    {
        if (!_libraryHandle.IsInvalid)
            _libraryHandle.Dispose();
    }

    /// <summary>
    /// Validates that registration succeeded, otherwise an exception is thrown.
    /// </summary>
    /// <param name="hResult">The result of the registration operation.</param>
    private static void ValidateRegistration(ResultHandle hResult)
    {
        if (hResult.Successful())
            return;

        string message;

        switch (hResult)
        {
            case ResultHandle.TypeLibraryRegistrationFailure:
                message = Strings.COMTypeLibRegistration;
                break;
            case ResultHandle.ObjectClassRegistrationFailure:
                message = Strings.COMObjectClassRegistration;
                break;
            default:
                message = Strings.COMInternalErrorFormat.InvariantFormat(hResult);
                break;
        }

        throw new Win32Exception(message, hResult.GetException());
    }

    /// <summary>
    /// Imports the exported registration procedure from the loaded COM library.
    /// </summary>
    /// <returns>
    /// The <see cref="DllRegisterServer"/> registration procedure exported by the loaded COM library.
    /// </returns>
    private DllRegisterServer ImportRegistrationServer()
    {
        IntPtr registrationAddress = NativeMethods.GetProcAddress(_libraryHandle, "DllRegisterServer");

        if (IntPtr.Zero == registrationAddress)
        {
            int lastError = Marshal.GetLastWin32Error();

            switch (lastError)
            {
                case (int)ErrorCode.ProcedureNotFound:
                    throw new InvalidOperationException(Strings.COMNoRegistration);
                default:
                    throw Marshal.GetExceptionForHR(Marshal.GetHRForLastWin32Error());
            }
        }

        return (DllRegisterServer) Marshal.GetDelegateForFunctionPointer(registrationAddress,
                                                                         typeof(DllRegisterServer));
    }
}

To make use of our new registration utility, we would do something like the following:

using (COMServer server = COMServer.Create("NameOfCOMLibrary.dll"))
{
    server.Register();
}

Use of COMServer



DataTemplates are essential tools provided by WPF that allow us to describe the visual structure of any type of object.

When declaring a DataTemplate, one typically is doing so with the intention of targeting a specific type (or valid subtype) of object. This is done by setting the DataType property of the DataTemplate to the Type of object we’re targeting.

Because the data templating system matches templates not only against instances of the specified type, but against derived types in the specified type’s hierarchy, DataTemplates are an easy way to visually shape large swathes of your own types of interest…

…As Long as the DataType Isn’t an Interface

WPF’s data templating system does not support the explicit targeting of interface types. If you mean to target a “base type” from which many other types descend, then you will have to make do with a standard object type; an abstract class is the closest thing to an interface that we’re allowed to target with DataTemplates.

This may be received by some as saddening news, as many legitimate object hierarchies exist out there whose members only share a specific interface in common. This limitation of WPF would then force an individual working with such an object hierarchy to create a base type (either to implement or wholly replace the interface) solely to get WPF to play nice with their model.

No one likes to be made to do something seemingly arbitrary and without true purpose; fortunately for us, there is a way we can add interface support to DataTemplates. Before we get into that, however, let’s think about why WPF’s data templating system lacks support for interfaces.

The lack of interface support was the result of a call made by the early developers of the framework, and I believe it was the right one. When designing a framework like the greater .NET Framework or WPF, one must design with the intent of producing a framework that behaves in an expected and stable manner. Adding support for a typical object’s type hierarchy is a straightforward objective, as the type hierarchy will consist of a very well-defined order of types that can easily be traversed.

There are too many potential caveats when one starts talking about adding support for interfaces, mainly due to the multiple inheritance angle of interfaces; you can easily get into situations where an object implements two interfaces, each targeted by a separate DataTemplate, with no natural ordering to tell the framework which template should win.

Luckily for us, we aren’t designing a framework intended for mass consumption, so we can implement support with full awareness of what not to do in conjunction with its use.

Adding Interface Support with a DataTemplateSelector

By creating a DataTemplateSelector and using it, we can effectively define data templates which target interfaces as opposed to only standard objects.

Most of the time when we create a DataTemplateSelector, we design it so it can be equipped with a finite number of different DataTemplates to be doled out at run time based on some business-specific logic. This selector is a bit different, as it is meant to offer an alternative way of selecting templates in general; thus, it must use all resources found in the container’s element tree, as well as in the current application’s resources, as its source of DataTemplates.

This sort of activity should certainly strike you as one that carries potentially negative consequences for performance, as combing through the entire set of loaded resources can indeed be an intensive task. To alleviate these concerns, the wisest course of action is to lean, as much as possible, on WPF’s own faculties for searching through all relevant resource dictionaries.

There are a number of internal methods that allow for searching for a specific resource across all loaded resource dictionaries; being internal, however, they are off limits to us. But we don’t need to worry about that because, like many things in life, simplicity is key, and we can make use of a very simple and familiar method to achieve our objective: FindResource (or rather, its exception-free sibling, TryFindResource). Many people are probably used to using this method to find a resource by its literal string name; instead of a literal name, we’ll be searching using a specific type of ResourceKey: the DataTemplateKey.

Below is the code for the template selector:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Controls;

/// <summary>
/// Provides a data template selector which honors data templates targeting interfaces implemented by the
/// data context.
/// </summary>
public sealed class InterfaceTemplateSelector : DataTemplateSelector
{
    /// <inheritdoc/>
    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        FrameworkElement containerElement = container as FrameworkElement;

        if (null == item || null == containerElement)
            return base.SelectTemplate(item, container);

        Type itemType = item.GetType();

        // The item's concrete type is checked first, followed by each interface it implements.
        IEnumerable<Type> dataTypes
            = Enumerable.Repeat(itemType, 1).Concat(itemType.GetInterfaces());

        // TryFindResource searches the container's element tree as well as the
        // application's resources, returning null when a key has no match.
        DataTemplate template
            = dataTypes.Select(t => new DataTemplateKey(t))
                       .Select(key => containerElement.TryFindResource(key))
                       .OfType<DataTemplate>()
                       .FirstOrDefault();

        return template ?? base.SelectTemplate(item, container);
    }
}

This will return the first DataTemplate encountered which targets one of the interfaces implemented by the item, with greater precedence given to templates that specifically target the concrete type of the item. If no templates are found, then the base selection logic will fire; the base selection logic expands the search to include the rest of the object types in the item’s type hierarchy. Thus, the order of precedence for the returned templates, based on their targeted types, is:

  1. Templates targeting the item’s specific type
  2. Templates targeting an interface implemented by the item
  3. Templates targeting other object types found in the item’s type hierarchy.

#2 and #3 may seem a bit unnatural, and that’s because they are. A superior solution would give greater precedence to object types found in the item’s type hierarchy if they too implement the same interfaces. If they do not, and the implementation of the interface is done somewhere in the hierarchy lower than the position of a given ancestor, then the template targeting that interface would take precedence.

Such efforts are largely unnecessary, however; common sense dictates that this selector isn’t meant to take over data template selection in its entirety. It should be used in scenarios where the interface is king.
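To close out, here is a sketch of how the selector might be wired up; local:IInputField, the Label property, and the Items binding are all assumed names:

```xml
<Window.Resources>
    <lib:InterfaceTemplateSelector x:Key="InterfaceSelector"/>

    <!-- Keyed on an interface; never matched implicitly by WPF,
         but our selector will find it via its DataTemplateKey. -->
    <DataTemplate DataType="{x:Type local:IInputField}">
        <TextBlock Text="{Binding Label}"/>
    </DataTemplate>
</Window.Resources>

<ItemsControl ItemsSource="{Binding Items}"
              ItemTemplateSelector="{StaticResource InterfaceSelector}"/>
```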


Microsoft has recently released a preview version of their new Office 2013 product which adds support for their new touch screen platforms (while maintaining support for PCs).

It looks like anyone can grab a copy and try it out for themselves; if you’re interested, you can head on over to the official site. If you have an MSDN account, you can also find a download there.

I decided to grab it myself to check out a few things, mainly:

  1. What the new interface looks like, and…
  2. If there were any breaking changes in their add-in hosting model and mechanisms.

Although I tend to only write (hopefully) deep and useful software development articles, let’s make an exception to that pattern today with a little exposé on the new Outlook.

Inbox and Calendar: The New Look (Warning: High Contrast)…

As soon as Outlook loaded, I felt somewhat like I was being assaulted with “white”. Man, there is a lot of white in the new interface.

Any and all shades of color have been nuked and obliterated from the UI. Instead, we have some very solid colors, with blue being a prominent part of the new palette.

Outlook 2013's Ribbon

Outlook 2013's Ribbon

Yes, very white.

I actually like user interfaces with white backgrounds, however, the interfaces I make which use that color tend to be a bit simpler than the one Outlook uses. My immediate reaction to it is not a favorable one.

The inbox panel is also very white:

Outlook 2013's Inbox Panel

Outlook 2013's Inbox Panel

Sorry for the censorship. Regardless, I’d opt to have a bit more delineation drawn between the individual items; more so here than in other places, even.

The Navigation Pane has been updated as well; it is now on the very bottom, and it is now composed of words rather than pictures:

Outlook 2013's Navigation Bar

Outlook 2013's Navigation Bar

I do believe I like the calendar very much, however. Here’s a screenshot of everything with the calendar showing:

Outlook 2013's Calendar

Outlook 2013's Calendar

Very short summary: I think I like the look, but I can’t help but feel that the change to the interface initially evokes an overall feeling of shock. A bit too white in areas, too; the blue bar at the bottom is kind of bugging me as well.

Add-ins and Whatnot

All of my add-ins targeting previous versions of Outlook loaded successfully in Outlook 2013. It looks like they’ve preserved the plugin model they had going before, although I’m sure some unannounced changes will surface eventually, as they always do with new releases of Outlook.

In the future, I will be writing articles covering any new development technologies I run across in Outlook 2013.


WPF affords developers the opportunity to create layouts that closely match how a particular user interface was envisioned to look and behave. The tricky part is, as always, knowing how to use the tools we’re given.

One common layout-related issue people run into is the layout of items displayed within an ItemsControl, specifically the space between the items. More often than not, items will not be displayed in a manner regarded as acceptable by the developer; some tweaking and customization is required to get what we want.

Sometimes we have very particular requirements for how we want our final layouts to appear and behave. In this article, we’re going to look at an example featuring an ItemsControl and its children, and how we might space the items out so that our goals are satisfied.

I. The Scenario

In our application, we have the need to display a collection of customizable input fields at the bottom of our form. These input fields are defined by administrators managing the product, thus we are unaware of exactly how many of these input fields are going to end up appearing on our form.

Each input field consists of a self-describing label as well as an actual input element, such as a text box or a combo box. Although each field features two elements (label and input), it is collectively powered by a single piece of data. The following is an example of one of our input fields:

Example of a Displayed InputItem

Example of a Displayed InputItem

Appearances are important, so we have a number of requirements on how collections of these fields are laid out:

  1. Fields are laid out horizontally, but must wrap if they cannot fit on the screen
  2. Each field is spaced apart from the next and previous field in the collection by an amount dependent on the width of the window or root container. Fields should neither be too close nor too far apart, regardless of window size.
  3. Wrapped fields are aligned perfectly underneath their counterparts located above and/or below them (resulting in a grid-like layout).

If the window is large enough, the input fields should be laid out in only a single row, with proper spacing between each one, like so:

A Single Row of Input Fields.

A Single Row of Input Fields.

Notice how there is ample spacing between each item. Let’s see what we get when we constrain the width to the width shown above and add a few more fields:

Multiple Rows of Input Fields

Multiple Rows of Input Fields

See how everything is lined up in a grid-like fashion.

Now, we have no idea how many fields the customer will be adding, and we also don’t know how large their screens will be. So, our layout needs to be fluid in its shape in order to accommodate all these different possibilities. If we decrease the width of our window, we’ll have the following:

Multiple Rows of Input Items with Less Width Available

Multiple Rows of Input Items with Less Width Available

You’ll see how the number of columns shifted down to two. If your screen is wide enough, then you would get something like:

Multiple Rows of Input Items with Lots of Width Available

Multiple Rows of Input Items with Lots of Width Available

II. Pick Yer Poison (err, Panel)

Now that we have a good picture of what we want our layout to look like, we can proceed to choosing the particular panel to employ in our ItemsControl‘s ItemsPanelTemplate.

Typically, when one wishes to achieve a “grid-like” layout, one would do well to make use of a Grid or UniformGrid. Certainly a UniformGrid might seem appropriate here, as it is the typical choice when the goal is to lay out items within an ItemsControl in a grid-like, evenly distributed fashion.

Unfortunately, that will not work for us here. We’re using WPF because it is dynamic dammit, and UniformGrid is a highly restrictive layout panel in that it requires the specification of the number of columns within its declaration.

Nay, a UniformGrid will not do! We need to adjust the number of columns based on the available width; in other words, we need to wrap appropriately. Therefore, the natural choice for our panel is the WrapPanel.
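Plugging a WrapPanel into the ItemsControl is simple enough; a minimal sketch (the InputFields binding is an assumed name) looks like this:

```xml
<ItemsControl ItemsSource="{Binding InputFields}">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <!-- Items now flow horizontally and wrap when they run out of width. -->
            <WrapPanel/>
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
</ItemsControl>
```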

While the WrapPanel achieves our requirement of adjustment based on available width, it does not space out items, and it does not lay them out in a fashion that could be described as anything close to “grid-like”. If we simply plop a WrapPanel into our ItemsPanelTemplate, we’re going to get something like the following:

Ugly Input Field Layout

Ugly Input Field Layout

Dear Lord, that is hideous. But things are never pretty by accident (except organic life forms), and we are well on our way in getting the layout we want by making the WrapPanel our weapon of choice.

To start on the process of beautifying this pathetic creature, let’s talk briefly about the UI design behind each of the input fields.

III. Input Fields Be Stylin’

I’m going to leave the exact design of these simple input fields as an exercise for the reader, for the most part.

In brief, the data powering each input field is responsible for indicating the exact type of input control which should be rendered, be it via a Boolean value or what have you.

A single Style is used which targets controls of the…Control variety. The name we’ll be using for this particular style is InputControlStyle. Based on our little indicator, it is wired to select the appropriate template. The various templates consist of a text block to display the descriptive text of the field, as well as a declaration of the particular type of input control which distinguishes each template from the others.

So, while the actual design of the input field is arbitrary as far as we’re concerned, part of the solution for making our layout nice and pretty lies in their design. As was just witnessed in the previous section, using a WrapPanel gives us wrapping, but no grid alignment.

Luckily for us, children need not be wholly dependent on a common ancestral container in order to be collectively aligned in a grid-like manner. Instead, we can share size information between each of these fields so that we end up with just that.

We can share size information by taking advantage of the SharedSizeGroup feature made available by the DefinitionBase class.

Now, you’ve probably never heard of the DefinitionBase class before, and that’s just swell, but you’ve certainly toyed with its derivatives (e.g. ColumnDefinition, RowDefinition). These derivatives, however, only apply to Grid controls; thus, we are going to need Grid controls somewhere in our setup. Going back to the template for each of the different kinds of input fields, we mentioned that each template will consist of two elements. Some sort of layout container will be needed to band the two elements together, however, and that’s where our Grid declarations will go.

In order to maintain our typical level of high standards and cleanliness, we’ll want to create a Style for these Grid controls so that all templates which provide a different type of input element can join the party. Assuming our magic Grid style is named InputGridStyle, an example template (one featuring the use of a text box) is as follows:

<ControlTemplate x:Key="TextBoxInputTemplate">
    <Grid Style="{DynamicResource InputGridStyle}">
        <!-- The binding paths shown here are illustrative. -->
        <TextBlock Style="{DynamicResource InputTextBlockStyle}"
                   Text="{Binding Description}"/>
        <TextBox Grid.Column="1"
                 Style="{DynamicResource InputTextBoxStyle}"
                 Text="{Binding Value}"/>
    </Grid>
</ControlTemplate>

Template for Text Box Input Fields

The styles being applied to the text block and box are arbitrary and should obviously satisfy any aesthetic and data binding objectives.

Any other template need only adhere to using a Grid control with the above style applied in order to appear correctly in our ItemsControl.

Now that you have an unbelievable understanding of how these input templates are organized and work, we’ll move on to the real magic show: the common Grid style in use by each of these templates.

As was mentioned previously, we need to tap into the SharedSizeGroup feature in order to achieve our objectives. The SharedSizeGroup property accepts a string describing the name of the group that the particular Grid should enter into. All Grid controls entered into the same group will then share size information with each other as well as That Which Contains Them (a bit Lovecraftian, I know).

But wait, didn’t we say that the SharedSizeGroup property belongs to DefinitionBase-derived classes, like ColumnDefinition and RowDefinition? Some of you may see this alone as being a problem in our quest to create a style defining the shared size group membership information.

For those of you not in the fold: styling ColumnDefinition/RowDefinition is problematic because, well…you can’t style the Grid control’s ColumnDefinitions and RowDefinitions properties. That’s because neither of those properties is actually a dependency property. What’s more, the Grid control is not actually a control, in that it does not derive from Control; rather, it derives from Panel, which itself derives directly from FrameworkElement. This means that making use of a ControlTemplate is not a possibility either.

Well (maybe unfortunately for you), in my solution to our layout problem, I do actually style the grid’s column definitions, but I do so by making use of an attached property that lets me do so. Creating such an attached property is outside the scope of this article, but you should be able to find plenty of examples online that will guide you in that task. If all else fails, just drop the style altogether and simply re-declare duplicate ColumnDefinitions in all of your templates, and pray no one ever makes a spelling error in the name.

Here is the style of our Grid common to all the templates (the “lib” namespace is a fictitiously named one that represents a custom UI framework where I keep such things as the attached properties we’re going to be using):

<Style x:Key="InputGridStyle" TargetType="{x:Type Grid}">
    <Setter Property="Margin" Value="0,0,0,5"/>
    <Setter Property="lib:GridProperties.ColumnDefinitions">
        <Setter.Value>
            <!-- lib:ColumnDefinitions is the custom XAML-friendly collection
                 type described in the note below. -->
            <lib:ColumnDefinitions>
                <ColumnDefinition SharedSizeGroup="InputTextGroup"/>
                <ColumnDefinition SharedSizeGroup="InputValueGroup"/>
            </lib:ColumnDefinitions>
        </Setter.Value>
    </Setter>
</Style>

Style Used by Input Field Grids

Note: Although WPF ships with a ColumnDefinitionCollection type (which is where ColumnDefinition instances get stored), and even though it is public, it has no default constructor; thus, we cannot make use of it in a XAML declaration. You’ll need to make one yourself; all it needs to do is extend an existing collection class, targeting the ColumnDefinition type for its items (ObservableCollection<ColumnDefinition> or whatnot).
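A minimal sketch of such a collection follows; the ColumnDefinitions class name (and its residence in the fictitious “lib” framework’s namespace) is an assumption on my part:

```csharp
using System.Collections.ObjectModel;
using System.Windows.Controls;

namespace Lib
{
    /// <summary>
    /// A XAML-friendly collection of ColumnDefinition instances, standing in for
    /// WPF's own ColumnDefinitionCollection, which lacks a default constructor.
    /// </summary>
    public sealed class ColumnDefinitions : ObservableCollection<ColumnDefinition>
    { }
}
```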

That takes care of our input item styles.

IV. Turn on Size Sharing

Now that our items are set up to share size information, we need to enable size sharing on the containing WrapPanel itself. We can do this simply by setting the Grid.IsSharedSizeScope attached property to true in the WrapPanel declaration.

Taking everything we’ve done, we should have the following declaration for our ItemsControl thus far:

<ItemsControl ItemsSource="{Binding YourData, UpdateSourceTrigger=PropertyChanged}">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <WrapPanel Grid.IsSharedSizeScope="True"/>
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <ContentControl Style="{DynamicResource InputControlStyle}" />
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>

Our ItemsControl Declaration Thus Far

OK! Great, let’s see what our ItemsControl looks like now (for our example, we only have enough width available for there to be two columns):

Better Looking Input Field Rows

Better Looking Input Field Rows

Looking much better! But…hmm…they are still a bit scrunched together. Remember that the x-dimension of a WrapPanel is constrained to its content, much like the StackPanel. There is no feature made available by the WrapPanel class which will space things out nicely for us; instead, we’re going to have to space our items out ourselves by adding margins to them.

So, alright…things look a bit scrunched between where one control ends and the next begins. Let’s then add a margin to the right of each item by amending our ItemTemplate with a Margin attribute like so:

<!-- The exact margin size is a matter of taste; 40 pixels is used here. -->
<ContentControl Style="{DynamicResource InputControlStyle}"
                Margin="0,0,40,0"/>

Adding a Margin to Our ItemTemplate Declaration

Alright, not too hard. Let’s look at it now:

Two Columns of Input Fields Looking Mighty Fine

Two Columns of Input Fields Looking Mighty Fine

Alright. Sweetness! If we give our form some more width, it’ll fill up three columns, as was shown in some of the earlier screenshots in this article; if we remove even more width, we’ll skinny on down to a single column:

One Column of Some Mighty Fine Input Fields

One Column of Some Mighty Fine Input Fields

Looks like our work here is done.

Good journeys, all.

© 2012-2013 Matt Weber. All Rights Reserved. Terms of Use.