Reading a file in Windows 8: C++ vs C#

I left my last blog very indecisive: would I use C++, would I use .NET, or would it be HTML/JS?

Again, I'm thinking C++ is really for faster and better performance, and while it might even be the hands-down winner on ARM architecture, I don't expect to see any performance differences in the app I'm going to write.

I'm actually going to write the same application three times, and I'll review my findings as I go along.

I'll present the C++ and the C# apps here; the HTML/JS version will follow in the next blog post.

First up was the C++. To be honest I did find this painful to write; the syntax is pretty convoluted. At least the markup for C++ is XAML, just like Silverlight, so that was a no-brainer.

<Grid x:Name="LayoutRoot" Background="#FF0C0C0C">
    <Button Content="Open" HorizontalAlignment="Left" 
         Height="4" Margin="84,45,0,0" VerticalAlignment="Top"
         Width="194" Click="Button_Click"/>
    <TextBlock HorizontalAlignment="Left" Height="381" 
        Margin="282,45,0,0" Text="TextBox" VerticalAlignment="Top" 
        Width="1065" x:Name="tb1"/>
</Grid>

I’ll even use the same markup for the C# application.

Now to the code


#include "pch.h"
#include "MainPage.xaml.h"

using namespace Windows::UI::Xaml;
using namespace Windows::UI::Xaml::Controls;
using namespace Windows::UI::Xaml::Data;
using namespace Windows::Storage;
using namespace Windows::Storage::Pickers;
using namespace Windows::Storage::Streams;
using namespace Windows::Foundation;
using namespace CppApplication17;

void CppApplication17::MainPage::Button_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    auto openPicker = ref new FileOpenPicker();
    openPicker->SuggestedStartLocation = PickerLocationId::Desktop;
    openPicker->FileTypeFilter->Append("*"); // the picker needs at least one file type filter
    auto pickOp = openPicker->PickSingleFileAsync();

    // the lambdas can't touch the member directly, so take a local copy to capture
    TextBlock^ content = tb1;

    pickOp->Completed = ref new AsyncOperationCompletedHandler<StorageFile^>(
        [content](IAsyncOperation<StorageFile^>^ operation)
    {
        StorageFile^ file = operation->GetResults();
        if (file)
        {
            //content->Text = file->FileName;
            auto openOp = file->OpenForReadAsync();
            openOp->Completed = ref new AsyncOperationCompletedHandler<IInputStream^>(
                [content, file](IAsyncOperation<IInputStream^>^ readOperation)
            {
                auto stream = readOperation->GetResults();
                auto reader = ref new DataReader(stream);
                auto loadOp = reader->LoadAsync(static_cast<unsigned int>(file->Size));
                loadOp->Completed = ref new AsyncOperationCompletedHandler<unsigned int>(
                    [content, reader](IAsyncOperation<unsigned int>^ bytesRead)
                {
                    auto contentString = reader->ReadString(bytesRead->GetResults());
                    content->Text = contentString;
                });
            });
        }
    });
}



using System;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.UI.Xaml;

namespace CSharpApp12
{
    partial class MainPage
    {
        public MainPage()
        {
            InitializeComponent();
        }

        async private void Button_Click(object sender, RoutedEventArgs e)
        {
            var openPicker = new FileOpenPicker();
            openPicker.SuggestedStartLocation = PickerLocationId.Desktop;
            openPicker.FileTypeFilter.Add("*"); // the picker needs at least one file type filter
            var file = await openPicker.PickSingleFileAsync();
            if (file != null)
            {
                uint size = (uint)file.Size;
                var inputStream = await file.OpenForReadAsync();
                var dataReader = new DataReader(inputStream);
                tb1.Text = dataReader.ReadString(await dataReader.LoadAsync(size));
            }
        }
    }
}



Now I’m not going to explain every trivial detail, but here’s where I felt C# won out.

  • C++11 lambda syntax is a bit clumsy; I don’t like having to pass down my closure variables or having to make a local copy first.
  • C++ IntelliSense is vastly inferior, to the point of being just painful. Let’s be honest, tooling cannot be underestimated when it comes to productivity. (This is why, when I write Java, I find that only since I started using IntelliJ has my speed really ramped up; it’s the right tool for my background.)
  • I’m fast at typing, but using . is a lot faster than -> for pointers.
  • The async/await construct is just magical! Now, to those of you who I’m sure will complain that I’m comparing apples with oranges, you have a bit of a moot point: in C++ I could have used the Parallel Patterns Library to make it a little neater, but nowhere near as clean as the C#.

In my next post I’ll rewrite the same application in HTML + JS. I predict that the syntax won’t be difficult, but productivity is where I feel it may fall down… let’s see. It promises to be interesting.

It’s COM Jim, but not as we know it!


Those of you that started out in Windows C++ like me are likely familiar with the old COM, COM+ and DCOM icons (the punch, the ghost, the ninja).
If you stayed in unmanaged land then you’re probably still very familiar with ATL, HRESULTs, etc.
If, on the other hand, you progressed into the managed realm like me, then those icons above probably sum up your recollections.

For me, I once considered myself pretty hot in C++ (shamefully I still do, though I’m sure I’d have to spend a week hands-on to really tick that box): COM collections on STL (ICollectionOnSTLImpl) were a walk in the park, multiple inheritance was a given, and finding you hadn’t released a COM reference was the highlight of your day. But fast forward a few years and you really scratch your head as to why life had to be so difficult.

Well, I’ll answer that question: performance is far and above one of the biggest factors. With Windows 8 fast approaching you may be starting to panic a little, I guess even more so if you started your coding life in a managed kingdom. But fear not, and let me dispel some common misconceptions that are solved by the C++ Component Extensions (C++/CX for short):

  • COM means HRESULTs – No, C++/CX yields exceptions from the underlying failure HRESULTs.
  • COM means no return values – No, C++/CX allows return values.
  • COM means reference counting – Kinda, but you don’t have to worry about AddRef and Release; you use the “ref new” keyword and C++/CX does the reference counting for you (not garbage collection!).
  • COM means CoCreateInstance etc. – Again, C++/CX’s ref new takes care of this for you.
  • COM means interfaces – C++/CX takes care of IUnknown/IDispatch; in fact IDispatch has been superseded.
  • COM means no inheritance – C++/CX takes care of this for you.

So will I develop my apps in C++, C#, or JS+HTML? (Come on, don’t expect me to add VB.NET; that battle was lost a long time ago.)

Well, here are my feelings:

  • C++: maybe, depending on how much perf I need from my machines (sacrificing time to market), or if I want to use an existing library, the Parallel Patterns Library, C++ AMP, etc.
  • C#: yes, I like this language and it’s a RAD language (albeit I won’t have access to the full Framework).
  • JS+HTML: I’m not sold on this yet. Maybe; if I want to produce for the web then I’d choose JS+HTML+ASP, not Silverlight. Would I ever have enough of a code base to reuse on WinRT?… the jury is out…

Visual Studio you rock.


I wasn’t sure what I’d installed, but tonight I needed to create a few regular expressions, and as I started typing this appeared in VS2010:



Pretty cool if I say so myself.

A quick look at my extension manager and I see


Visual Studio you rock!

I’ve used quite a few IDEs lately:

  • Netbeans
  • Eclipse
  • IntelliJ (pretty good)
  • XCode 4.0

One thing is for certain: only IntelliJ comes close (but then the ReSharper developers are pretty familiar with Visual Studio).

Disclaimer: I’ve been using Visual Studio since the mid-’90s, so I’m truly biased.

Converting EPM operations to Tasks using the TPL


Previous post


The Event Programming Model (EPM from here on in) was introduced in .NET 2.0. Its purpose was to serve as a simpler pattern for asynchronous operations than the Asynchronous Programming Model (APM / IAsyncResult; see my previous post on APM) where possible, mostly in UX code. Methods that use this pattern typically end in Async and have a completion event.

The best-known implementation of the EPM is the BackgroundWorker component. It has a distinct advantage in that it uses a synchronization context to fire the event on the thread from which it was called; the APM, on the other hand, offers no such guarantee.
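To make the shape of the pattern concrete, here’s a minimal console sketch of my own (names and values are mine, not from the form code below): DoWork runs on a thread-pool thread, and RunWorkerCompleted is raised back on the captured synchronization context.

```csharp
using System;
using System.ComponentModel;
using System.Threading;

public class Program
{
    public static void Main()
    {
        var done = new ManualResetEvent(false);
        var worker = new BackgroundWorker();

        // DoWork runs on a thread-pool thread
        worker.DoWork += (s, e) => e.Result = 21 * 2;

        // RunWorkerCompleted is raised on the captured synchronization context:
        // the GUI thread in WinForms/WPF (so it's safe to touch controls here),
        // a pool thread in a plain console app like this one
        worker.RunWorkerCompleted += (s, e) =>
        {
            Console.WriteLine("Result: " + e.Result);
            done.Set();
        };

        worker.RunWorkerAsync();
        done.WaitOne();
    }
}
```

In a WinForms app the completed handler could assign straight to a control’s Text property, which is exactly the behaviour the snippets below rely on.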

Let’s see this in action (.net 4.0)


What you can see in the snippet above is a simple Windows Forms application (been a while, my old friend). Let me paint you a picture: it’s early February 2012 and I’m stuck here at Brussels international airport, in the middle of a snow blizzard, wondering if I’m going to have a flight home. The plane that will take me there is arriving in from Dublin, so I’m looking at the live departures to see if it’s departed (already 15 mins late, darn it). Anyway, back to the post at hand: I’m downloading the page HTML with the call to DownloadStringAsync(), and you can see in the completion event handler that I’m not doing any Invoking (Dispatching, to those of you that never had the pleasure of Windows Forms).

Now this is what it looks like after the event gets fired.


Hey, and it looks like it’s running MS tech (notice that ViewState, in case the .aspx didn’t give it away), nice! If you come from a web background this may not seem that odd to you, but if you started out in desktop application development like me, there was one golden rule you never forgot: always talk to the GUI on one thread and one thread only.

If the event handler hadn’t been on the GUI thread above, we would have received a cross-thread exception like this:



Sadly the TPL doesn’t handle the EPM as easily as the APM, specifically with respect to the synchronization context, but let’s see how we approach it. You may have to do this if you’re pre-.NET 4.5, as the Task-based DownloadStringTaskAsync doesn’t exist there!


With the code above we hit the cross-thread exception problem. We could do a Control.BeginInvoke (or Dispatcher.BeginInvoke in WPF), but let’s imagine we were writing a library and we wanted it to be framework agnostic; how would we do this?

Actually it’s pretty simple, we just supply a context like this:
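A sketch of the approach (the method and variable names are my own): wrap the EPM call in a TaskCompletionSource, and let GUI callers pass a scheduler built from their synchronization context so the continuation lands back on the UI thread.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

public static class WebClientExtensions
{
    // Bridge the EPM (xxxAsync + Completed event) to a Task via TaskCompletionSource
    public static Task<string> DownloadStringTask(this WebClient client, Uri address)
    {
        var tcs = new TaskCompletionSource<string>();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error != null) tcs.TrySetException(e.Error);
            else if (e.Cancelled) tcs.TrySetCanceled();
            else tcs.TrySetResult(e.Result);
        };
        client.DownloadStringAsync(address);
        return tcs.Task;
    }
}

// Usage from GUI code: the scheduler routes the continuation back to the UI
// thread, so no Control.BeginInvoke / Dispatcher.BeginInvoke is needed:
//
//   var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();
//   new WebClient().DownloadStringTask(new Uri("http://example.com/"))
//                  .ContinueWith(t => textBox1.Text = t.Result, uiScheduler);
```

The library itself stays framework agnostic: it only hands back a Task, and whoever owns a synchronization context decides where the continuation runs.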


p.s. I got home at 4am.

Strategy pattern


So what is the strategy pattern? It’s one of the simplest object-oriented design patterns, and I find that it helps clean up day-to-day object-oriented design. Its purpose is to:

  • Encapsulate a family of related algorithms such that they are callable through a common interface.
  • Allow independent evolution: algorithms can vary and evolve separately from the classes using them.
  • Allow a class to serve a single purpose.
  • Separate the calculation from the delivery of its results (separation of concerns).

How do we know when we should consider the strategy pattern?

  • Look for switch statements with a possible common interface.
  • Adding a new calculation to a class could break existing calculations (breaking the Open-Closed principle, i.e. a class should be open for extension, but closed for modification).

UML – Strategy model



  • Strategies may not use class members from context
  • Tests may now be written for individual concrete strategies
  • Strategies may be mocked when testing the Context class
  • Adding a new Strategy does not modify the Context class

How to implement:

  • Class based
  • Functional programming approach with anonymous methods (delegates and Funcs as opposed to new classes); I like this when the calculations are trivial
  • Property injection
  • Method strategy (passed to a method and not to the context class constructor)

Show me the code:

Context class


Here we see the strategy getting passed to the context in the constructor; this class should be closed to modification. Trip is just an empty class for my demo and it’s not actually used in the calculation sample.

Strategy interface

Sample Strategy
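A minimal sketch of the three pieces described above; the names and the calculation are illustrative (and, as noted, Trip stays an empty demo class that the sample calculation doesn’t use).

```csharp
using System;

public class Trip { } // empty demo class, not used by the sample calculation

// Strategy interface: the common contract all calculations share
public interface ICostStrategy
{
    decimal Calculate(Trip trip);
}

// Sample strategy: one concrete algorithm behind the interface
public class FlatRateStrategy : ICostStrategy
{
    public decimal Calculate(Trip trip) { return 9.99m; }
}

// Context class: closed to modification, new strategies just plug in
public class TripCostContext
{
    private readonly ICostStrategy _strategy;

    public TripCostContext(ICostStrategy strategy)
    {
        _strategy = strategy;
    }

    public decimal CalculateCost(Trip trip)
    {
        return _strategy.Calculate(trip);
    }
}
```

For the functional variant mentioned in the implementation list above, the context could simply take a Func&lt;Trip, decimal&gt; instead of an interface.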



So, what is the strategy pattern again? It’s something you possibly do on a day-to-day basis without even realising it.

e.g. If you write ASP.NET MVC code, you quite likely pass interfaces to your controllers for dependency injection and testability ---> strategy pattern.


Converting APM operations to Tasks using the TPL


Those of you who have already used the .NET 4.5 developer preview will know that Tasks are becoming more common in the API, especially with the advent of the async/await keywords.

But many of you (including me) can’t really advocate .NET 4.5 in the enterprise, so what are our options should we like to use the Task Parallel Library?

As you may be aware, APM (Asynchronous Programming Model) was the original .NET mechanism for handling async operations; it will be familiar to you as the IAsyncResult pattern.

So let’s take a common operation of reading from a stream. In .NET 4.5 we already have Stream.ReadAsync, but again, what if we don’t have .NET 4.5 at our disposal?

The Task Parallel Library helps bridge the gap with Task.Factory.FromAsync; here I place it in an extension method for ease of use.
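A sketch of such an extension method (the method name is my own), using Task&lt;int&gt;.Factory.FromAsync to wrap the BeginRead/EndRead pair:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class StreamExtensions
{
    // Bridges the APM BeginRead/EndRead pair into a Task<int>
    public static Task<int> ReadAsyncTask(this Stream stream, byte[] buffer, int offset, int count)
    {
        return Task<int>.Factory.FromAsync(stream.BeginRead, stream.EndRead,
                                           buffer, offset, count, null);
    }
}

// Usage:
//   var buffer = new byte[1024];
//   stream.ReadAsyncTask(buffer, 0, buffer.Length)
//         .ContinueWith(t => Console.WriteLine("Read {0} bytes", t.Result));
```

FromAsync takes care of registering the callback and calling EndRead for you, including propagating any exception into the returned Task.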


An invisible Azure Message


When creating an Azure queue, you specify a lock duration: once a message is read from the queue, it’s marked as invisible to other readers for a period of time, e.g. one minute.


Choosing the invisibility time is a trade-off between expected processing time and application recovery time.

When a message is dequeued, the application specifies the amount of time for which the message is invisible to workers dequeueing messages from the same queue. This time should be large enough to complete the operation specified by the queue message.

If the timeout is too large, the time it takes to finish processing the message is affected when there are failures. For example, if the invisibility time is set at 30 minutes and the application crashes after 10 minutes, the message will not have a chance of being picked up again for another 20 minutes.

If the invisibility time is too small, the message may become visible while someone is still processing it. Thus, multiple workers could end up processing the same message, and one may not be able to delete the message from the queue (see the next section).

The application could address this as follows:

1. If the amount of time to process a message is predictable, set the invisibility timeout large enough so that a message can be completed within that time.

2. Sometimes the processing time for different types of messages may vary significantly. In that case, one can use separate queues for different types of messages, where messages in each queue take a similar amount of time to be processed. An appropriate invisibility timeout value can then be set for each queue.

3. Furthermore, ensure that the operations performed on the messages are idempotent and resumable. The following can be done to improve efficiency:

a. The processing should be stopped before the invisibility time is reached, to avoid redundant work.

b. The work for a message can be done in small chunks, where a small invisibility time may be sufficient. This way, the next time the work is picked up from the queue after it becomes visible again, it can be resumed from where it left off.

4. Finally, if the message invisibility time is too short and too many dequeued messages are becoming visible before they can be deleted, applications may want to dynamically change the invisibility time that is set for new messages put onto the queues. This can be detected by counting, at the worker roles, how many message deletes are failing because messages have become visible again, and then, based on a threshold, communicating that back to the front-end web roles so they can increase the invisibility time for new messages put into the queue.

Manage the invisibility on the fly

The “Update Message” REST API is used to extend the lease period (aka visibility timeout) and/or update the message content. A worker that is processing a message can now determine the extra processing time it needs based on the content of the message. The lease period, specified in seconds, must be >= 0 and is relative to the current time; 0 makes the message immediately visible in the queue as a candidate for processing. The maximum value for the lease period is 7 days. Note that when updating the visibility timeout it can go beyond the expiry time (or time to live) that was defined when the message was added to the queue, but the expiry time will take precedence and the message will be deleted from the queue at that time.
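As a sketch of the call shape (the account, queue, message id, pop receipt and values are all illustrative), Update Message is a PUT against the message, carrying the pop receipt obtained when the message was dequeued and the new visibility timeout in seconds:

```http
PUT https://myaccount.queue.core.windows.net/myqueue/messages/messageid?popreceipt=<receipt-from-get>&visibilitytimeout=60 HTTP/1.1
x-ms-version: 2011-08-18
```

An optional XML body carrying a new MessageText lets the worker record progress in the message itself, which pairs nicely with the chunked-processing approach in point 3b above.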

Azure Service Bus


When communicating between roles in an Azure application we’ve a few options; to name a few:

• Http
• Tcp
• Queues

While Http and Tcp are tried and trusted, they do come with some limitations that queues help overcome.

In the last few months Microsoft has released the pub/sub Service Bus to the world. This is similar to a basic queue: in a basic queue each message is consumed by an individual consumer, but with subscription topics multiple clients can consume the same message, and each subscription logically maintains its own queue of messages.





The diagram above shows a typical communication between worker roles and web roles on the Azure platform.

As previously stated, this decoupling has several advantages over direct messaging.

Load Leveling

In the system the load can vary over time, while the amount of effort in processing the mid-tier business logic remains somewhat constant. With the queue in place, it’s only necessary to have enough servers to handle the average load, irrespective of peak load. This can save money in terms of the infrastructure required to handle peak load.

Temporal Decoupling

With queues decoupling the messaging, effectively making it async, publishers and subscribers need not be online at the same time; the service bus reliably stores the messages in the queue until the subscriber pulls them off and processes them. This allows different roles to be taken offline for maintenance, etc.

Load Balancing

As load increases, more worker roles can be added to service the queue (e.g. an online toy shop around the Christmas period). The system ensures that only one worker role will process each message, and given that the worker roles are pulling the messages off the queue, they don’t have to be running on the same infrastructure (Azure favours multiple low-powered roles over fewer high-powered roles).


Migrate SqlServer DB to Azure Sql


Here’s one way to migrate your SQL Server database to the Azure platform.

1) Get the SQL Azure Migration Wizard


2) Start the wizard and select the SQL Database Migrate option


3) Select your source database


4) Choose the objects you wish to migrate (all in my case)



5) See the results and review the SQL script if necessary.


6) Now we need SQL Azure in the cloud for the next part; log into your account (get a 3-month free trial if you don’t have one).

Select your Azure server and create a new database.


7) You’ll be prompted to select where you want your server located if you don’t already have one.




8) Add some firewall rules to your database; you’ll need to do this to allow access for MS Services and Visual Studio.



9) Now that you have a database in the cloud, continue with the migration wizard by selecting this database.






10) That’s pretty much it. Hope these screenshots help someone out.

Azure Tools


This evening I decided I’d install the new Azure tools after watching the latest videos that have appeared.

I right-click on my MVC3 app and choose: Add Windows Azure Deployment Project



Then I hit F5 to run the project and I get an error:

Microsoft Visual Studio: Unable to find file DFUI.exe



In the 1.5 SDK there used to be a registry key that pointed to the emulator; with 1.6 this no longer exists and Visual Studio is looking for dfui.exe in a different location (use Process Monitor from Sysinternals to tell you where).


Once you find where Visual Studio is looking for it, it’s a matter of copying the files in
C:\Program Files\Windows Azure Emulator\emulator\ to that location.

Try running your app now and it should work.