On the flip side of the cloud-coin, we have the Azure .NET Services platform (hell, there’s more than two sides to Azure, but those two are the ones I feel are most noteworthy as of now).

The concept and purpose of the Azure .NET Services platform is rather easy to grasp: it lets you host Windows Communication Foundation-based services on the cloud. Achieving this requires the use of the multiple components that make up the platform, such as the Access Control and Service Bus services. These provide a secure intermediary interconnect through which you can expose endpoints for client and server, letting the developer spend more time on the service itself rather than on the additional logic required to deal with various network topologies.

Much of the Azure .NET Services platform is based on the WCF core libraries, but these assemblies alone are not enough to get services hosted and working on the cloud. There are several Azure .NET Services-specific assemblies that you must become familiar with and use in order to get your service out there. In this article, I'm going to go over the use of the Azure .NET Services assemblies, as well as take a general look at how the services work as a whole.

Much like my article on the SQL Data Services end of the cloud, this marks the beginning of the series through which I’ll be developing a library that will be used to get the most out of this service Microsoft is offering.


If you’re developing with the TFS API, you may find yourself in a situation where you need to retrieve a list of all users’ names who belong to a certain Team Project.

This is demonstrated in an application I’m about to post here (not quite done yet), but until that happens, this post will illustrate how to go about doing this.


As one might be able to tell from reading previous posts of mine, I'm strongly in favor of using WiX to author MSIs. Perhaps it's my background in UNIX that leads me to feel this way (where doing things by hand is commonplace), but I'd sooner believe it has more to do with my well-placed mistrust of any tool that generates code or binaries solely through a GUI.

It wouldn't be such a big deal if correct operation of the installation media weren't so critically important. Your application may crash once or twice in its lifetime, and while that's something we'd never want to see, it isn't the end of the world. An initial installation that fails, however, is serious: among other things, it gives the user a terrible first impression of the program.

In this very short article, I'm going to illustrate a fatal mistake an inexperienced developer can easily make when authoring an MSI with WiX, one that can result in the install's files being scattered across multiple hard drives. It is noteworthy because fixing it is not required to publish an MSI, and the problem can easily go unnoticed during testing.


Due to the loss of the old server, the binaries, pictures, and source are not available at the moment. Links have been stripped until they are back online.

Microsoft's cloud computing framework does not begin and end with Windows Azure; rather, Windows Azure is only one of several components. Whereas Windows Azure exists at the lowest level of the platform, the SQL, .NET, Live, and other services sit on top of it. The cloud services provided by these technologies are more powerful within their specific domains than services created with Windows Azure alone.

By learning how to use the Azure SQL Services, I've finally come to see how this and other cloud computing technology can be of great use. It really is a bit of a thrill to be able to host data without having to worry in the slightest about setting up and maintaining a server.

We will start off our analysis of the various Azure Services by examining the SQL Data Services. I'm going to go into how they work and how you can get them to work for you. I am also introducing a library (a work in progress) that provides commonly used operations for creating and manipulating a SQL cloud service. Finally, I will demonstrate all of this through an example application I've crafted that utilizes the Azure SQL Services.

Introduction

SQL Data Services (SDS for short) are essentially data storage and query processing utility services. You may remember that Windows Azure has access to storage services itself: the Blob, Queue, and Table services. These are not the same as the SQL Data Services; an important difference being that SDS can work with relational data, while Windows Azure’s storage services do not.

The basic storage model of SDS is termed ACE (Authorities, Containers, and Entities). This model supports CRUD (create, read, update, delete) operations; more advanced storage models add relational features such as TOP and JOIN. The SOAP and REST protocols are used to communicate with SDS endpoints.

I'm going to go over how to access SDS using the ACE model, with example code illustrating the process. You will soon see that a lot of code is required to accomplish even minor things; I will demonstrate how you can avoid this burden by using my Cloud.SQLServices library.

Before you can start accessing SDS via SOAP, you need to have received an invitation code to Microsoft’s beta. So, unlike Windows Azure, where you could create services and test them locally, you need access to the cloud if you want to play with SDS.

And, before we get into how to "do stuff," we will take a closer look at the SDS data model, as understanding it is essential for completing even the most basic tasks.

The Data Model

If you are coming from a SQL background, well, good for you, but you are going to find some immediate barriers to your data storage and querying needs. While the potential for relational SQL queries is indeed present here, you first need to learn how the data is actually stored out there on the cloud. As mentioned before, SDS uses the ACE model; thus, we will go over each component that makes up the model.

Authority

The authority exists at the top of the whole containment hierarchy. When I say containment hierarchy, I’m referring to the data structures responsible for organizing and containing your data. The authority that you create is represented by a DNS name. For example: the SDS cloud service address is https://database.windows.net. If you create an authority with an ID of “myauthority”, then the DNS name that points to that authority ends up being https://myauthority.database.windows.net.

The naming of the authority has several restrictions that you must keep in mind: it cannot contain any uppercase characters, it must be unique, and I'd be hard-pressed to believe it can contain special characters; numbers (and, evidently, hyphens, given names like "proto-omni") are fine, however.

Container

The container exists below the authority, and an authority can hold more than one container; an authority can therefore be thought of as a collection of containers. The container is what actually stores your data, or entities. A container does not define its data's schema, which is nice: you can simply create a container and worry about the schema when you create the entities themselves.

When performing a search or update, the container is the largest possible scope. You cannot run a search or update query across multiple containers; however, it appears that once Azure is out of beta, you may be able to do just that.

Entity

An entity is what is stored, along with other entities, inside a container. Entities represent meaningful data within the SDS containment hierarchy through any number of user-defined properties and property values. Properties can be added to, updated on, or deleted from a retrieved entity even after it has been committed to the cloud.

Entities can be blob or non-blob entities; this is a more advanced topic that will be covered in a future article.

All of the above components of the ACE model can be created via the SOAP or REST protocols. Through these protocols, we can perform basic CRUD operations on them. We’ll look at how this is done next.

Defining your Containment Hierarchy and Basic CRUD Operations

As I've mentioned before, you can use the SOAP or REST protocols to define and manipulate your containment hierarchy. Microsoft does not provide any sort of "client" library for the REST protocol; however, they do provide a SOAP service client accessible via a client proxy. My SQLServices library provides you with both REST and SOAP clients. The library also takes care of setting up the client proxy for you, so you can avoid having to explicitly reference any services.

With that out of the way, let’s go over the creation of the various containment hierarchy components. Following that, I will introduce an example application which demonstrates how we can use the Azure SQL Services, and how it can be done with my cloud SQLServices library.

The REST protocol will mainly be used in demonstrating the following concepts, because it gives you a better idea of what is actually going on: Microsoft requires you to hand-craft your own XML when using REST. When we get to the sample program, I'll demonstrate these same concepts using my library.

Create an Authority

Creating an authority requires either creating a web request with a REST XML fragment payload, or utilizing the Create method in Microsoft’s SitkaSoapServiceClient class.

The XML fragment will appear as such:


<s:Authority xmlns:s='http://schemas.microsoft.com/sitka/2008/03/'>
    <s:Id>proto-omni</s:Id>
</s:Authority>

The Id metadata property is required; it names the authority we wish to create. After you have created this XML fragment, you need to send an HTTP(S) request to the cloud service using the POST method. The cloud service will then return a response indicating whether the request was successful.

One important thing to note is that the URI used for this request should be the actual cloud service URI. The URIs used when creating other components of the hierarchy will differ from this one, reflecting their rank in the containment hierarchy.

I'm assuming that the reader has some basic competency with C# and programming in general; thus, I'm not going to dwell on how you get the above XML to the cloud service.
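That said, here is a minimal sketch, using the stock HttpWebRequest class, of what POSTing that fragment might look like. The /v1/ path on the base service URI and the credential handling are assumptions made for illustration; the beta's actual authentication details may differ.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

public class AuthorityCreator
{
    // Builds the authority-creation fragment shown above.
    public static string BuildAuthorityXml(string id)
    {
        return "<s:Authority xmlns:s='http://schemas.microsoft.com/sitka/2008/03/'>"
             + "<s:Id>" + id + "</s:Id>"
             + "</s:Authority>";
    }

    // POSTs an XML payload to the given URI and returns the response body.
    // The user name and password are placeholders for your beta credentials.
    public static string PostXml(string uri, string xml, string user, string password)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Credentials = new NetworkCredential(user, password);

        byte[] payload = Encoding.UTF8.GetBytes(xml);
        request.ContentLength = payload.Length;

        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(payload, 0, payload.Length);
        }

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }

    public static void Main()
    {
        // The base service URI is the target when creating an authority;
        // lower-ranked components are POSTed to their parent's URI instead.
        string response = PostXml("https://database.windows.net/v1/",
                                  BuildAuthorityXml("proto-omni"),
                                  "myUserName", "myPassword");
        Console.WriteLine(response);
    }
}
```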

Create a Container

Creating a container again requires creating the appropriate REST XML payload and then sending it to the cloud via an HTTP/S request using the POST method. An important difference to note here is that the URI this time will be the authority’s URI, not the base cloud service’s URI. This should make sense, as the container exists underneath the authority.

The XML fragment for creating a container is as follows:


<s:Container xmlns:s='http://schemas.microsoft.com/sitka/2008/03/'>
    <s:Id>CloudGuestbook</s:Id>
</s:Container>

After this is generated, it will have to be POSTed to the authority’s URI.

You can do all this with Microsoft’s SitkaSoapServiceClient class by creating the appropriate Container object and then using the Create method with a Scope object initialized and AuthorityId properly set.

Create an Entity

Creating an entity involves generating the appropriate XML payload and then POSTing it to the container's URI. A sample XML fragment is provided below:

<Entry xmlns:s='http://schemas.microsoft.com/sitka/2008/03/'
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xmlns:x='http://www.w3.org/2001/XMLSchema' >
    <s:Id>Entry1</s:Id>
    <date xsi:type='x:dateTime'>11/29/2008 11:04:44</date>
    <name xsi:type='x:string'>Matt Weber</name>
    <comments xsi:type='x:string'>This is my cloud guestbook entry.</comments>
</Entry>

The above fragment describes an entity of type “Entry” with various user-defined properties, such as date and name. This can be inserted into your cloud storage by a POST to the container URI.
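If you would rather not build these fragments by string concatenation, LINQ to XML (System.Xml.Linq, available since .NET 3.5) can generate them; note that XElement serializes the DateTime value in the standard XML dateTime format rather than the locale-specific form shown above. The class and method names below are mine, for illustration only:

```csharp
using System;
using System.Xml.Linq;

public class EntityXmlBuilder
{
    static readonly XNamespace S = "http://schemas.microsoft.com/sitka/2008/03/";
    static readonly XNamespace Xsi = "http://www.w3.org/2001/XMLSchema-instance";
    static readonly XNamespace X = "http://www.w3.org/2001/XMLSchema";

    // Builds an entity fragment like the one above from the given values.
    public static XElement BuildEntry(string id, DateTime date,
                                      string name, string comments)
    {
        return new XElement("Entry",
            new XAttribute(XNamespace.Xmlns + "s", S),
            new XAttribute(XNamespace.Xmlns + "xsi", Xsi),
            new XAttribute(XNamespace.Xmlns + "x", X),
            new XElement(S + "Id", id),
            new XElement("date", new XAttribute(Xsi + "type", "x:dateTime"), date),
            new XElement("name", new XAttribute(Xsi + "type", "x:string"), name),
            new XElement("comments", new XAttribute(Xsi + "type", "x:string"), comments));
    }

    public static void Main()
    {
        XElement entry = BuildEntry("Entry1",
            new DateTime(2008, 11, 29, 11, 4, 44), "Matt Weber",
            "This is my cloud guestbook entry.");
        Console.WriteLine(entry);
    }
}
```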

This can be done through the SitkaSoapServiceClient by creating the appropriate Entity with the user-defined properties added to the Properties dictionary.

I will demonstrate how to do all the above with my library in a bit.

Querying Entities

Now that you have some data on the cloud, you can run queries against it by making an HTTP(S) request to your container's URI using the GET method. The specific URI is your container URI plus a "q" request parameter whose value is your query. Something similar to:

https://proto-omni.database.windows.net/v1/Entries?q='from e in entities select e'

The above query will return all of the entities stored in the container, which you then can manipulate and do whatever with.
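As a hedged sketch, assuming the same placeholder credential scheme as before, issuing such a query from C# might look like the following; the container URI is the CloudGuestbook one used elsewhere in this article.

```csharp
using System;
using System.IO;
using System.Net;

public class EntityQuery
{
    // Appends the "q" request parameter to a container URI, URL-encoding
    // the query text and wrapping it in quotes as the service expects.
    public static string BuildQueryUri(string containerUri, string query)
    {
        return containerUri + "?q='" + Uri.EscapeDataString(query) + "'";
    }

    public static void Main()
    {
        string uri = BuildQueryUri(
            "https://proto-omni.database.windows.net/v1/CloudGuestbook",
            "from e in entities select e");

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";
        request.Credentials = new NetworkCredential("myUserName", "myPassword");

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The response body is an XML document listing the matching entities.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```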

The Cloud Guestbook and the Omniscientist.Cloud.SQLServices Library

Now that all the introductory material has been dealt with, I’m going to show you an example of an application making use of the cloud services. Keeping in line with the idea that all example programs should be laughable, I present to you: The Cloud Guestbook.

What it does is very simple: it fetches all "guestbook entries" stored in the cloud and allows you to sign your name to the guestbook, which in essence inserts a new entity with properties for the date, your name, and your comments.

The application was made using the Omniscientist.Cloud.SQLServices library, so I thought it would be best that I walk you through how the application works. This way we can see how the library makes using Azure SQL Services easy.

The first thing that happens when the guestbook loads is a query, against the authority and container I set aside for the CloudGuestbook, for all guestbook entries. The classes provided by Microsoft only work with the SOAP protocol, and only entities are returned when they are queried for.

My library allows you to pass in an external type when creating and querying entities, and it will automatically handle the entity conversion for you, regardless of whether you are using the REST or SOAP protocol. I think it is pretty neat; you can look through the code to see how that particular feature works.

The following code is executed in order to get all entities in the container using the REST protocol:

public static IEnumerable<GuestbookEntry> GetEntries()
{
    RestClient<GuestbookEntry> restClient = new RestClient<GuestbookEntry>
    {
        Authority = new Authority("proto-omni"),
        Container = new Container("CloudGuestbook")
    };

    // A null search expression returns every entity in the container.
    return restClient.SearchEntities(null);
}

The above code uses the RestClient class, available from the Omniscientist.Cloud.SQLServices library. The GuestbookEntry type used here is another class containing the properties we want the resulting entity to reflect in its body: a DateTime property for the date, a string property for the name, and a string property for the comments. The way my library constructs entities from generic type information greatly eases the process of working with SQL Services.
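The GuestbookEntry class itself isn't listed in this article; a plausible sketch, assuming the library simply maps public properties onto entity properties by name, would be:

```csharp
using System;

// Hypothetical shape of the GuestbookEntry type described above; the
// property names mirror the date, name, and comments entity properties.
public class GuestbookEntry
{
    public DateTime Date { get; set; }
    public string Name { get; set; }
    public string Comments { get; set; }

    // A parameterless constructor so entities can be materialized from queries.
    public GuestbookEntry() { }

    public GuestbookEntry(DateTime date, string name, string comments)
    {
        Date = date;
        Name = name;
        Comments = comments;
    }
}
```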

After the above code is executed, the returned entities are then bound to the guestbook’s list view.

So, the interesting thing here is that all of this data is coming to you from the cloud; specifically, from: https://proto-omni.data.database.windows.net/v1/CloudGuestbook

The other area of interest here is when you add an entry to the guestbook. This is done by clicking on the “Sign” button, filling in your name and comment, and then hitting OK.

The program then inserts the new entity into the cloud, and then refreshes the list view with the new entry. For the purposes of demonstration, this aspect of the program is done using the SOAP protocol. The following code is from the section of code that handles the signing:

SoapClient<GuestbookEntry> soapClient = new SoapClient<GuestbookEntry>
{
    Authority = new Authority("proto-omni"),
    Container = new Container("CloudGuestbook")
};

// Surface any SOAP faults on the window's error label.
soapClient.SoapErrored += (sender1, evArgs) =>
{
    lblError.Content = evArgs.Message;
    lblError.Visibility = Visibility.Visible;
};

GuestbookEntry newEntry = new GuestbookEntry(DateTime.Now, sign.Named, sign.Comments);

soapClient.InsertEntity(
    Entity.FromTypeDefinition(newEntry, "entry" + (lvEntries.Items.Count + 1)));

((ObjectDataProvider)gridGuestbook.FindResource("guestbookEntries")).Refresh();

Once again, the above code makes use of the Omniscientist.Cloud.SQLServices library, this time using the SoapClient class, which communicates over the SOAP protocol instead of REST.

Please feel free to download the program and add an entry.

You can download the CloudGuestbook binary here.

You can download the Omniscientist.Cloud.SQLServices library binary here.

You can find the source code for the Cloud.SQLServices library, as well as the CloudGuestbook in my code repository located here.

A specific source link for the Cloud.SQLServices library is here.

And here is a specific source link for the CloudGuestbook source.

I will continue to work on the SQLServices library, as additional advanced features still need to be implemented. Regardless, I believe it provides a pretty strong base to work from.

I also intend to provide documentation for the library, as the above examples cover only a little of what it can do. This will take the form of Sandcastle-generated web pages. Additional coverage will also be included in all subsequent articles I do on SQL Services with Azure.

This concludes our introductory look at the SQL Services component of the Azure Services Platform. My next article dealing with SQL Services will deal with more advanced topics such as blobs, additional relational operations, and concurrency.

© 2012-2013 Matt Weber. All Rights Reserved. Terms of Use.