I’ve moved!


Effective immediately, this blog has moved to http://blog.rikp.co.uk. All original posts and comments have moved too, so nothing is lost. And I’ll maintain this old one for a while.

Most old links to blog.richard.parker.name/ should redirect to the new article at blog.rikp.co.uk/ without issue, but if you find one that doesn’t work, please let me know.

If you subscribed to this blog, your subscription will no longer be valid – sorry. I couldn’t see any way to migrate subscribers. There’s a subscription facility you can use on the new site, though.

Cheers everyone!


Reading Traffic Info V1.2 for Windows Phone 8 now out!


My regular readers will know that a few weeks ago, I released an app called “Reading Traffic Info” to the Windows Phone 8 store (http://bit.ly/readingtrafficapp). In a nutshell, the app helps those who live in or around, or commute in or through Reading Borough, by connecting them to a near real-time feed of all of the borough’s traffic cameras.

I want to thank everyone who has downloaded the app so far. I’m surprised to have hit 466 downloads in just a few weeks for such a niche app: proof that there must be demand for apps providing access to this information!

What’s new in version 1.2?

Firstly, I’ve added a ton of new cameras. Here’s the full list:

A329 (M) TVP
A33 Bennet Road
A33 Little Sea
A33 Relief Road
A33 Rose Kiln Lane
A4 Langley Hill
Castle Hill
Gosbrook Road
Grovelands Road
Henley Road (Lower)
Kings Road
M4 Jn 11
M4 Jn 11 Westbound
Queens Road
Whitley Street
Winnersh Crossroads

In addition, the app will now automatically check with my server every day to see if new cameras are available and include them in the camera list without the need to update the entire application. This works for corrections/alterations to existing camera metadata, too (for example, to correct latitude/longitude pairs or orientation/naming data).
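That update check essentially boils down to fetching the server’s camera list and merging it over the local copy. Here’s a minimal sketch of the merge step (the CameraInfo type and its fields are my assumptions for illustration, not the app’s actual code):

```csharp
using System.Collections.Generic;

public class CameraInfo
{
    public string Id;
    public string Name;
    public double Latitude;
    public double Longitude;
}

public static class CameraCatalogue
{
    // Merge the server's list into the local one: new cameras are added,
    // and existing entries have their metadata corrected in place.
    public static void Merge(Dictionary<string, CameraInfo> local,
                             IEnumerable<CameraInfo> fromServer)
    {
        foreach (var camera in fromServer)
        {
            local[camera.Id] = camera; // add, or overwrite with corrected metadata
        }
    }
}
```

Because corrections simply overwrite by ID, fixing a latitude/longitude pair server-side propagates to every phone on its next daily check.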

Version 1.2 also has an updated user interface which sports a bit more colour, and the spacing on the camera listing has been increased further still to make selecting them a little easier:

[Screenshots: the updated interface and camera listing]

I also added icons to denote favourite cameras, as well as a new button to report camera inaccuracies directly within the app (it’ll open up your email client with a pre-populated body detailing the camera you’re looking at, with a space for you to tell me what the problem is).

Remote Camera connection quality tolerance

Version 1.2 includes additional tolerances for the quality and reliability of the camera feeds operated by the council. If a camera is offline, or the council’s camera server goes offline (as it did about a week ago), the app will now indicate that there has been a problem connecting to the camera. Additionally, I fixed a bug where the app would try to refresh an image from a camera every 5 seconds, regardless of whether the previous request had succeeded. To help mitigate image loading delays on slower mobile network connections, the refresh interval has been increased from 5 seconds to 10 seconds. While the latest image is loading, if a previous one was available it will remain on-screen.

I also fixed a problem with the camera images ‘flickering’ between refreshes.

Finally, additional tolerances were added to the app to detect the state of your phone’s internet connection and warn you when it is unavailable.
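Putting those tolerances together, the refresh logic might be sketched like this (the delegate parameters stand in for the phone’s real connectivity check and HTTP call; the names are hypothetical, purely to illustrate the behaviour described above):

```csharp
using System;

public class CameraViewer
{
    private readonly Func<bool> _networkAvailable;        // phone connectivity check
    private readonly Func<string, byte[]> _tryFetchImage; // returns null on failure

    public byte[] CurrentImage { get; private set; } // last good frame stays on-screen
    public string Status { get; private set; } = "OK";

    public CameraViewer(Func<bool> networkAvailable, Func<string, byte[]> tryFetchImage)
    {
        _networkAvailable = networkAvailable;
        _tryFetchImage = tryFetchImage;
    }

    // Called by a timer every 10 seconds (raised from 5s to suit slower connections).
    public void Refresh(string cameraId)
    {
        if (!_networkAvailable())
        {
            Status = "No internet connection";
            return; // don't hit the network; keep the previous image
        }

        var image = _tryFetchImage(cameraId);
        if (image == null)
        {
            Status = "Problem connecting to camera";
            return; // previous frame remains visible, avoiding flicker
        }

        CurrentImage = image;
        Status = "OK";
    }
}
```

The key design choice is that a failed fetch never clears the screen: the last good frame stays up while the status line explains what went wrong.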

Continued thanks to everyone

Again, thanks to all those who have supported me by downloading and using the app, submitting feedback or helping me with the design itself.

What’s next?

There are lots of features on the horizon and I’m very much planning to continue developing the app in my spare time. Already on the cards is support for providing up-to-date car park status within the app (so you can decide which car park to head to in order to avoid the jams!) and also road works status.

If you’d like to make suggestions, see what’s planned or vote on new features, head over to my UserVoice community at http://rikp.uservoice.com.


Autonomous Immersive Quadcopter – Build Log


It’s been a while since my last post back in December (all about Windows Azure, you can read about that here) and a lot has been going on in the interim. I’ve mainly been focused (as always) on work, but in my down time, I’ve been working on somethin’ a little special.

I’ve always been fascinated by things that fly: aeroplanes, helicopters, birds, bees, even hot air balloons and solar wings. Flight gives us an opportunity to view things from a different perspective; it opens a new world to explore.

The trouble is, as a human, I was born without the (native) ability to fly. And that’s always made me a little, well, sad.

A couple of years ago, I started toying with a model aeroplane, and my goal at that point was to turn it into a UAV, like so many of the projects I’d seen online. I ended up dismissing the idea for a couple of reasons: planes are pretty limited manoeuvrability-wise, and unless you can fly yours incredibly high and incredibly fast, you’re limited in the types of cool things you can do. Plus, the open-source autopilots currently available are mostly built using non-Microsoft technologies, and, being a “Microsoft guy”, I wanted to do something about that (let’s say it’s just for selfish purposes: I’m much more productive using Microsoft technologies than I am with something like C on the Arduino platform, and I have very limited time for this project).

So I’ve been working on building a custom quadcopter since January, and I’m very pleased with the results so far. It flies, and in this video you’ll see the first test flight. Toward the end, just before the ‘aerobatics’, I remotely disable the automatic flight stabilisation, which causes even more aerobatics. Anyway, the quadcopter was great fun to build and a huge learning curve for me: I really enjoyed the challenge of having to figure out the flight dynamics, propeller equations and lift calculations, and of course the design and construction of the frame, electrical and radio systems.

But it’s not awesome enough yet, not anywhere near it! In fact, check out some of the plans:

  1. I’m currently building a three-axis motorised gimbal that will fit underneath the main airframe. It is going to be connected to an Oculus Rift virtual reality stereoscopic headset, which will relay movements of the wearer’s head to the servos on the gimbal; thus enabling you to ‘sit’ and experience flight from within the virtual cockpit. My colleague, Rob G, is currently building the most awesome piece of software to power the Oculus’ dual stereoscopic displays, while I finish designing and building the mount and video transmission system.
  2. Cloud Powered AutoPilot and Flight Command. That’s right: using Windows Azure, I will provide command and control functionality using Service Bus, and sophisticated sensor logging through Azure Mobile Services. Flight data and video will be recorded and shared in real time with authenticated users. Why? There’s nothing cooler than Windows Azure, except maybe something that flies around actually in the clouds, powered by the Cloud!

I don’t know where this project will end up taking me, but so far it’s taken me on some very interesting journeys. I’ve had to learn much more about:

  • Circuit design
  • Fluid dynamics
  • Thrust and vector calculations
  • Power system design
  • Radio-control systems (on various frequencies: 5.8GHz, 2.4GHz, 433MHz and 968MHz) and the joys of trying to tame RF energy using antennae
  • Soldering

… The list goes on!

Current Activity

I’m already in the process of building a sensor network on the quadcopter. It comprises:

  • 5 x Ultrasonic range finders (four mounted on each of the motor arms, one downward-facing)
  • 1 x Barometric pressure sensor (for altitude and airspeed sensing, using pitot tubes)
  • 1 x 66-satellite GPS tracking module
  • 1 x Triple-axis accelerometer
  • 1 x Triple-axis gyroscope

The current plan is to use a Netduino to interface directly with the sensors, and transform all of the sensor data into a proprietary messaging standard, which will be emitted via the I2C interface using a message bus architecture. In this way, the Netduino is able to create ‘virtual’ sensors, too, such as:

  • Altitude (based on the downward-facing ultrasonic sensor, falling back to the barometric pressure sensor whenever the quad moves out of ultrasonic range)
  • Bearing
  • Velocity (allowing selection of air/ground speed)
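As an illustration of the ‘virtual sensor’ idea, an altitude sensor could be as simple as preferring the ultrasonic reading while it’s valid and falling back to the barometric figure otherwise. This is a sketch, not the actual Netduino code, and the 4 m range limit is an assumption (typical of hobby-grade ultrasonic modules):

```csharp
public static class VirtualAltitude
{
    const double UltrasonicMaxRange = 4.0; // metres; assumed sensor limit

    // ultrasonicMetres is double.NaN when no echo was received.
    public static double Read(double ultrasonicMetres, double barometricMetres)
    {
        bool inRange = !double.IsNaN(ultrasonicMetres)
                       && ultrasonicMetres <= UltrasonicMaxRange;
        // Prefer the precise ultrasonic reading near the ground; fall back
        // to barometric altitude once the quad climbs out of range.
        return inRange ? ultrasonicMetres : barometricMetres;
    }
}
```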

The Netduino is an amazing device; however, it doesn’t have sufficient capacity or processing power on board to also interface with the radio-control receiver (which receives pitch, roll, yaw, throttle and other inputs from my handheld transmitter). For that job, I’m going to use a Raspberry Pi (running Mono!). The RPi apparently features the ability to turn GPIO pins into PWM-capable pins (either generating or interpreting), which is exactly what I need. The Netduino will output the sensor data to the RPi, which will run the ‘autopilot’ system (more on the planned autopilot modes in a later post).

It’ll be the job of the Raspberry Pi to interpret sensor data, listen to commands it has received from the handheld transmitter on the ground, and decide what action to take, based on the requested input and the sensor data, and the currently-selected autopilot mode. Sounds simple, but it’s anything but!
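For a flavour of what that decision-making involves, here’s a deliberately simplified sketch of one step of the control loop: blending the pilot’s roll input with a proportional self-levelling correction from the IMU. The gain, the 45° normalisation and the names are all illustrative assumptions, not the actual autopilot code:

```csharp
using System;

public static class Autopilot
{
    const double LevelGain = 0.5; // proportional gain; an assumption for the example

    // pilotRoll: -1..1 from the transmitter; rollAngleDegrees: from the IMU.
    public static double RollCommand(double pilotRoll, double rollAngleDegrees, bool selfLevel)
    {
        if (!selfLevel)
            return pilotRoll; // stabilisation disabled: pass stick input straight through

        // Push back toward level, proportionally to how far we've tilted.
        var correction = -rollAngleDegrees / 45.0 * LevelGain;
        var output = pilotRoll + correction;

        // Clamp to the valid command range before it reaches the motors.
        return Math.Max(-1.0, Math.Min(1.0, output));
    }
}
```

A real autopilot would of course run full PID loops on all axes and arbitrate between modes, but the shape is the same: requested input, plus sensor-derived correction, per cycle.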

Even if you’re not interested in the technical details, you can follow this project on hoverboard.io. Thanks for reading.

Managing Risk on Windows Azure


The Windows Azure cloud platform is a solid, highly available and scalable environment, but like any system (on-premise, or in the cloud) there are risks which threaten the desired operation of your application.

With Windows Azure comes the opportunity to vastly lower your infrastructure costs, fluidly manage your system architecture and switch on and pay for extra capacity only when you need it. A successful implementation of your application on Windows Azure will present your developers with a scalable set of fault-tolerant globally distributed services that will enhance productivity, increase reliability and empower your business to react quicker and more cheaply to changing needs than ever before.

Over the past 18 months in my role at Microsoft, I’ve met countless customers who are considering, or are in the process of, a move to Windows Azure. In almost every case, folks feel that they can just take their existing software and put it in the cloud: and you can. In many cases, there’s nothing stopping you doing that. Unfortunately, though, there are a few myths I have to dispel for you:

  • Myth #1: The cloud is invincible.
  • Myth #2: I just write the code – it’s fire and forget.
  • Myth #3: Any decent cloud platform worth its salt will deal with failure for me.

We’ll explore why these are myths later in the article, but for now, I’d like you to ask yourself why it is you are moving (or have moved) to Windows Azure: was it the cost savings? Many people start their move to Azure based on a conversation focused on cost. In my personal experience, eight in every ten customers who move to Windows Azure and make only the minimum required code changes to have it operate in Azure (where changes are needed) are missing a trick, and within six to twelve months, engage in additional development work to take advantage of other services on offer. I think of it as somewhat like buying an expensive sports car, and only ever using the first two gears and driving at 30 MPH. And who does that?

To sweeten the deal, I’m going to share my number one tip with you for getting the most out of your venture to the Windows Azure platform. In fact, it’s such a golden tip, that it doesn’t matter where you use it, it will yield value. Cloud or on-premise, Windows Azure or elsewhere. Embed it within your development practices and watch your team build the most reliable software you’ve ever seen.

Myth #1: The cloud is invincible.

Allow me to present the notion that the most robust systems are those that are hardened against risk, in much the same way that sensitive equipment is shielded from harmful interference. It may be that your application – under normal operating conditions – will never suffer from interference or data corruption. After all, you employ smart developers and pair program.

But what happens when a hard drive fails? Or a network call to a remote database fails?

We tend to think of these things as ‘exceptions’; most of the error conditions we refer to in this article will surface as exceptions in your code. But on Windows Azure, as on any other cloud platform, you have to keep in mind that economies of scale come into effect: your code and your database sit on a few drives among hundreds of thousands. Networks are (securely) shared with others, and it is impossible to guarantee that every packet that travels across the wire will arrive in sequence, or even at all.

Myth #2: I just write the code. It’s fire and forget.

It helps to think of Windows Azure as a collection of discrete services (building blocks) that cooperate to provide, at the base level, high scale, high performance and highly-available geo-redundant network connectivity: virtual machines and persistent storage in various forms glued together with an extremely efficient, intelligent and almost invincible routing layer that abstracts developers away from the complexities of all of this, and exposes everything through a set of familiar, managed APIs.

As developers, when we target Windows Azure, we’re targeting a rich set of capabilities that were probably not part of your application’s original design specifications. Triple-redundant and geo-redundant storage, anyone? An elastic load balancer? While it’s true that many of the more basic building blocks of Windows Azure (including storage and load balancing, even the virtual machine capabilities) are available to you, often without any requirement to modify your application, it is useful to understand that there is a rich ecosystem of additional services which you can and should leverage, not only to offer enhanced functionality within your application, but also to help the cloud platform keep your app running: to toughen it against risk.

Myth #3: Any decent cloud platform worth its salt will deal with failure for me.

It is incredibly rewarding, professionally and personally, to watch an application you designed support its users successfully and do its job right every day, for years. Let me just say that I think it can be even more rewarding (not just professionally or personally, but to your business stakeholders) to watch it do that no matter what is happening on the underlying platform. Imagine a platform on which your application can completely mask from your end-users many failures that would be catastrophic in an on-premise datacenter, by intelligently deploying pre-programmed mitigations for forecast risks. Wouldn’t it be great if you had a guide to help you figure out what those are?

Azure mitigates many of the base-level risks for us (a disk failure, a virtual machine failure, a switch hardware failure, etc.), essentially for free, just by our adopting Windows Azure. But there are other things we need to think about, too. For example, many of our risks will be mitigated within the boundaries of a single datacenter. But what if it were to suffer a catastrophic failure? An act of God? Or what if we simply wanted to bypass it because routing connectivity to it was slow?

These are things the cloud platform should not provide for us unless we tell it how. Each mitigation will have some kind of implication, whether it is financial or in terms of the functionality you are able to provide during the failure condition. Yet, many customers I have worked with have a somewhat romantic notion that “the cloud should just do it all for me!”

It’s all about risk.

In my experience of working with all of these customers, what these questions fundamentally boil down to is a conversation about risk: specifically, understanding what the risks are, which are mitigated for you, and which you have to think about and deal with yourself. Once we think in these terms – that we as developers have to share this responsibility in order to cash in on the ‘promise’ of an invincible cloud platform – that’s when the platform magic actually happens.

So, here’s the first golden tip for you:

Hardware fails. Networks fail. Memory fails. Your code needs to be hardened against as many of these risks as possible, and you need a mitigation strategy for all of them. Windows Azure makes it very easy for you to detect many of these risks (even the ‘catastrophic’ ones) and prevent them from bringing your business to its knees.

So, design for failure.

Identifying the risks and understanding & categorising the effects

“Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed). Potential losses themselves may also be called risks”.1

This definition hints at the necessity both to understand that there is the potential for risk in any situation, and to recognise that the outcome of any given situation may be influenced (otherwise, it is a certainty) so as to lessen or prevent the effect from being noticeable. In this section, we will identify what the risks are and what the effect of each risk manifesting itself is.

Integral to your deployment on Windows Azure should be an understanding of:

  • What the risks are;
  • What steps can be taken to mitigate the effect of the risk surfacing;
  • What category the effect falls within.

For example, when you buy a car, you know that there is a risk that it might get damaged, either by you (racing around again!), or by another road user. Assuming you’re a law abiding citizen, you’ll buy insurance to mitigate against the risk of damage to your car, or somebody else’s. But within your policy document will be a list of expectations around what happens when your car is damaged: you’ll be told how long your car will be unavailable, whether you’ll have the use of a rental car, etc.

It is the same for deployments on Windows Azure, except this time we’re not talking about the effects on your car, but rather the effect on your business caused by a risk actually becoming a reality (or ‘surfacing’).

I’ve often found that the effects of the risk (the effect the risk has on your app once it has manifested) can generally be categorised according to the following scale (in order of descending severity):

  • Catastrophic: there is nothing that can be done to mitigate the effect to normal operation;
  • Fault: with careful planning and development work, a suitable mitigation can be automatically implemented to prevent the effect from surfacing;
  • Avoidable: the effect can be avoided with a trivial amount of effort.

In this discussion, I’m assuming that the primary risk we’re attempting to mitigate is downtime caused by loss of connectivity to the data centre. In my example deployment, we’re talking about a simple web application with two web roles, two worker roles and a dependency on a database on SQL Azure. If we dig further, our full risk register may look similar to the following:

Risk | Manifested effect | Severity
Instance taken offline for patching/maintenance, where only one instance of that role is deployed | Your app goes offline. | Catastrophic
Instance taken offline for patching/maintenance, where two or more instances of the role are deployed | Potential for increased load on remaining instances, but otherwise no disruption to service. | Avoidable
Instance (in a multi-instance deployment) goes offline due to failure of the instance itself | As above. | Avoidable
Connectivity failure to a dependent resource in the data centre | The resource is unavailable for the duration of the disruption to connectivity. | Fault
Failure of the dependent resource | Potential for data loss. The resource is unavailable until it is recovered, either manually or automatically. | Fault
Total loss of inbound and/or outbound connectivity to the data centre | Your app goes offline. | Catastrophic
Catastrophic loss of the data centre | Your app goes offline. | Catastrophic

Only once both your technical team and your business leaders are aware of the risks, their manifested effects and what can technically be achieved to mitigate them can a discussion take place about the extent to which you wish to implement these measures. Try to avoid the tendency to shoot for 100% availability across 100% of your dependent resources, and remember that different parts of an app can often tolerate failures differently! Understand that risks have a field of impact, too. For example, a catastrophic data centre failure would affect the whole of your app, whereas the failure of a database would impact only those sections which require connectivity to it.

Crucial to this is having an open and honest conversation with the business, and with your customers, about what level of risk is acceptable to them. This will determine how much effort goes into your risk avoidance strategy.

On Windows Azure, one significant advantage is that the cost of maintaining a highly available, highly scalable solution that is both maintained and secure is generally orders of magnitude cheaper than the equivalent private, on-premise setup. The last thing you’d want to do is erode that saving by planning and deploying avoidance techniques that are completely over the top: so be reasonable in your understanding of acceptable risk.

This exercise may seem academic and fairly obvious, but for that very reason it is often overlooked. Without it, though, it is difficult to fully appreciate what steps are necessary, and to properly brief your UX designers on the failure scenarios that could naturally occur and that you may well need to surface in your app to keep your users informed.

We’ve covered risk, now let’s turn our attention to what we need to do should the worst happen: a risk has manifested and the effect has begun.


It’s a common misconception that disaster recovery and risk mitigation are the same things.

‘Disaster recovery’ refers to the things you do (either automatically or as part of a manual activity) to restore your normal scenario: for example, something exceptional has occurred, you have suffered a catastrophic event, and you need to get back to ‘business as usual’ as fast as possible while minimising loss. Risk mitigation, on the other hand, is about the things you can do before a condition occurs that triggers your failure scenario.

So that you can do this effectively, you need to first understand what risk has surfaced, what your recovery options are for that particular risk, and therefore what your recovery strategy and objectives actually are.

Let’s put this into context:

Your app went offline due to a failure of a database connection. The effect was that users of your app could no longer publish new content. There are potentially two recovery options available to you here: you could write new content to a separate store temporarily and automatically update the failed database when it becomes available, or you could simply wait until the failed database is available again. Your strategy for recovery from this particular risk is therefore directly dependent on what your business expects you to be able to achieve in this scenario.
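The first option – write elsewhere, then re-synchronise – might be sketched like so. The delegates stand in for the real database calls; in practice the alternative store could be an Azure queue or table, and this is an illustration rather than a production pattern:

```csharp
using System;
using System.Collections.Generic;

public class ResilientPublisher
{
    private readonly Func<bool> _databaseAvailable;
    private readonly Action<string> _saveToDatabase;
    private readonly Queue<string> _pending = new Queue<string>(); // the temporary store

    public ResilientPublisher(Func<bool> databaseAvailable, Action<string> saveToDatabase)
    {
        _databaseAvailable = databaseAvailable;
        _saveToDatabase = saveToDatabase;
    }

    public int PendingCount => _pending.Count;

    public void Publish(string content)
    {
        if (_databaseAvailable())
            _saveToDatabase(content);
        else
            _pending.Enqueue(content); // defer: the user still sees a successful publish
    }

    // Call when the database comes back, to re-synchronise deferred writes.
    public void Resync()
    {
        while (_databaseAvailable() && _pending.Count > 0)
            _saveToDatabase(_pending.Dequeue());
    }
}
```

Note the trade-off this encodes: during the outage users can still publish, but content isn’t durably stored in the primary database until Resync runs – exactly the kind of behaviour the business needs to sign off on.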

Putting it all together…

I’ve introduced the notion that risks are no less likely to occur on the Windows Azure platform than on-premise, and we know that Azure is capable of recovering from most of these risks without any input from you. What we’re trying to look at here is what steps you can take as developers to stop any non-catastrophic effects from impacting your app, causing a ‘failure scenario’. If you embrace the concept of expecting failure, it becomes quite easy to see what you must do in order to maintain normal operation during a failure situation. In general, remember you can:

  • Use alternative persistent storage should a database become unavailable, and re-synch when available;
  • Continue retrying a failed connection until it succeeds or ‘defer failure’ until after a certain number of retries;
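The retry/‘defer failure’ idea can be expressed as a small helper. This is a sketch only; for real work on Azure you’d reach for an established retry library such as the Transient Fault Handling Application Block rather than rolling your own, and the attempt count and delay here are illustrative:

```csharp
using System;
using System.Threading;

public static class Retry
{
    // Run the operation, swallowing failures until maxAttempts is reached;
    // only then is the exception allowed to surface ('deferred failure').
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3, int delayMs = 500)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(delayMs); // back off before the next try
            }
        }
    }
}
```

Usage is a one-liner: `var rows = Retry.Execute(() => RunQuery());` – transient connection blips are absorbed, while a persistent failure still throws so your detection logic can kick in.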

When designing for high availability, it is a good idea to keep these questions in mind:

  • Prevention: what can you do to stop the risks you’ve anticipated from occurring?
  • Detection: how will you detect that your app is no longer in its ‘normal state’?
  • Recovery: what can your app do to either temporarily mask the failure condition and maintain the appearance that everything is OK, or what steps must take place to get things back to normal operation?

Do not rely on the availability guarantee: it isn’t enough (a 100% up-time guarantee wouldn’t be, either) and remember, availability is only one part of the equation. If we go back to the car insurance metaphor, you don’t just buy car insurance to mitigate against the risk of injury or damage to yourself or to others: you also drive safely and obey traffic rules. So it’s actually more about adopting a philosophy and taking a series of actions that is important.

In summary, Windows Azure is and will remain a highly available, stable and reliable cloud platform and it will continue to be enhanced and improved over time. As developers though, we have to appreciate that failures of course can, and do, occur. Every object is subject to entropy, and hard disks, network cables and switches are no exception. Understanding that there are parts of the availability equation that you can – and should – take responsibility for is essential to a healthy cloud deployment and arguably, even if your app is deployed on-premise, you might want to consider adopting ‘cloud risk principles’, too!

My point ultimately is that risks aren’t the problem: not knowing what they are, what the cloud platform is responsible for mitigating, and how you can efficiently deploy platform services to assist you is.

Microsoft’s Premier Support for Developers team is able to provide your developers with specific, technical and process guidance to help you mitigate risks to your business as you move to Windows Azure and short cut your time-to-market.

2011 in review


The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog.

I didn’t quite get around to blogging as much as I’d have liked during 2011, but 2012 is a new year and I’ll have a lot more things to blog about, including my new job and the discoveries we make. So, go on – subscribe; you know you want to!

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 22,000 times in 2011. If it were a concert at Sydney Opera House, it would take about 8 sold-out performances for that many people to see it.

Click here to see the complete report.

Email Templates in C#


For what must have been the zillionth time last weekend, I found myself writing code again for Y.A.T.E.S. (or, “Yet Another Template Email Sender”). I don’t know why I didn’t get around to adding some snippets into my library sooner, but I thought I’d share the following as I finally decided to write something which is a good starting point for future expansion.

Basically, it fetches any file from disk (you specify), reads the contents into a variable and then parses it for a list of tokens you supply, substituting the tokens with your own values. It works equally well with HTML and plain-text emails and supports multiple CC and BCC addresses.
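For the curious, the core find-and-replace step amounts to something like this. It’s a simplified sketch, not the actual TemplateHelper source (that’s on CodePlex):

```csharp
using System.Collections.Generic;

public static class TemplateSketch
{
    // Substitute each token found in the template body with its replacement
    // value. Works identically for HTML and plain-text templates, since it's
    // just string substitution.
    public static string ApplyTokens(string template, IDictionary<string, string> tokens)
    {
        foreach (var pair in tokens)
            template = template.Replace(pair.Key, pair.Value);
        return template;
    }
}
```

The `##TOKEN##` delimiters simply make accidental collisions with real template text unlikely; any convention would do.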

EDIT: July 2011 – The project has now been updated to support fetching of template files from remote locations (by URL).


I’ve tried to keep it as simple and short as possible:

// Fetch template body from disk
var template = TemplateHelper.GetEmailTemplate(@"D:\Path\File.htm");

// Add any tokens you want to find/replace within your template file
var tokens = new Dictionary<string, string> {{"##FIRSTNAME##", "Richard"}, {"##LASTNAME##", "Parker"}};

// Specify addresses (CC and BCC are optional)
var to = new MailAddress("some_email_address@some_domain.com");
var fr = new MailAddress("some_email_sender@some_domain.com");

// Optionally, specify a List<MailAddress> for both CC and BCC fields, or pass null.
var cc = new List<MailAddress>() {new MailAddress("bar@foo.com"), new MailAddress("foo@bar.com")};
var bcc = new List<MailAddress>() {new MailAddress("fizz@buzz.com")};

// Send the mail
TemplateHelper.Send(to, fr, cc, bcc, "##FIRSTNAME##, thanks for registering!", tokens, template, true);

It’s free to use as you wish, and it comes with all the usual disclaimers etc.

To get the source code, head on over to the CodePlex project at tokenmail.codeplex.com.


Upgrading the Arduinometer: Introducing the Netduinometer!


Back in early 2010, I announced the beginning of my open-source “Arduinometer” project, and released the code and schematics to build your own. This year, I’ll be upgrading the Arduinometer (running on the Arduino platform) to the Netduino, which runs the Microsoft .NET Micro Framework and is also open-source.

The Netduino Board

The Netduino Plus offers on-board LAN as standard and a much richer toolset (the Visual Studio 2010 environment is far superior to the Arduino environment), so I am planning to include the following additional features:

  • Support for up to 16 metered devices
  • Compatibility with pulse-output, photoreflective and magnetic counters
  • Web-based administrative interface
  • EEML output (for Pachube and other services)
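As background on the pulse-output counters in that list: most domestic pulse-output meters emit a fixed number of pulses per kWh (often 1000), so instantaneous power can be estimated from the interval between consecutive pulses. A sketch of that calculation, with the pulse rate as an assumption:

```csharp
using System;

public static class PulseMeter
{
    const double PulsesPerKwh = 1000.0; // assumed; check your meter's rating plate

    // Estimate instantaneous power (in watts) from the measured interval
    // between two consecutive pulses.
    public static double EstimateWatts(double secondsBetweenPulses)
    {
        if (secondsBetweenPulses <= 0)
            throw new ArgumentOutOfRangeException(nameof(secondsBetweenPulses));

        double kwhPerPulse = 1.0 / PulsesPerKwh;      // energy represented by one pulse
        double hours = secondsBetweenPulses / 3600.0; // interval in hours
        return kwhPerPulse / hours * 1000.0;          // kW -> W
    }
}
```

At 1000 pulses/kWh, a pulse every 3.6 seconds corresponds to a steady 1 kW load – a handy sanity check when calibrating a sensor head.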

This project is still very much in the planning phase so I am keen to hear suggestions and your feedback before I get too stuck in. So, if there’s anything you think would be a particularly good idea, please get in touch. If you’d like to get involved a little further and sink your teeth into writing some code, drop me an email or leave a comment on this post and we’ll see about setting you up with access to the repository.

Happy metering!