Managing Risk on Windows Azure

The Windows Azure cloud platform is a solid, highly available and scalable environment but, like any system (on-premise or in the cloud), there are risks that threaten the desired operation of your application.

With Windows Azure comes the opportunity to vastly lower your infrastructure costs, fluidly manage your system architecture and switch on and pay for extra capacity only when you need it. A successful implementation of your application on Windows Azure will present your developers with a scalable set of fault-tolerant globally distributed services that will enhance productivity, increase reliability and empower your business to react quicker and more cheaply to changing needs than ever before.

Over the past 18 months in my role at Microsoft, I’ve met countless customers who are considering, or in the process of, a move to Windows Azure, and in 99% of cases, folks feel that they can just take their existing software and put it in the cloud: and you can. In many cases, there’s nothing stopping you from doing that. Unfortunately, though, there are a few myths I have to dispel for you:

  • Myth #1: The cloud is invincible.
  • Myth #2: I just write the code – it’s fire and forget.
  • Myth #3: Any decent cloud platform worth its salt will deal with failure for me.

We’ll explore why these are myths later in the article, but for now, I’d like you to ask yourself why it is you are moving (or have moved) to Windows Azure: was it the cost savings? Many people start their move to Azure based on a conversation focused on cost. In my personal experience, eight in every ten customers who move to Windows Azure and make only the minimum required code changes to have it operate in Azure (where changes are needed) are missing a trick, and within six to twelve months, engage in additional development work to take advantage of other services on offer. I think of it as somewhat like buying an expensive sports car, and only ever using the first two gears and driving at 30 MPH. And who does that?

To sweeten the deal, I’m going to share my number one tip with you for getting the most out of your venture to the Windows Azure platform. In fact, it’s such a golden tip, that it doesn’t matter where you use it, it will yield value. Cloud or on-premise, Windows Azure or elsewhere. Embed it within your development practices and watch your team build the most reliable software you’ve ever seen.

Myth #1: The cloud is invincible.

Allow me to present the notion that the most robust systems are those that are hardened against risk, in much the same way sensitive equipment is often shielded from harmful interference. It may be that your application – under normal operating conditions – will never suffer from interference or data corruption. After all, you employ smart developers and pair program.

But what happens when a hard drive fails? Or a network call to a remote database fails?

We tend to think of these things as ‘exceptions’; most of the error conditions we will refer to in this article will surface as an exception in your code. But on Windows Azure, as on any other cloud platform, you have to keep in mind that economies of scale come into effect: your code and your database sit on a few drives among hundreds of thousands. Networks are (securely) shared with others, and it is impossible to guarantee that every packet that travels across the wire will arrive in sequence, or even at all.

Myth #2: I just write the code. It’s fire and forget.

It helps to think of Windows Azure as a collection of discrete services (building blocks) that cooperate to provide, at the base level, high-scale, high-performance, highly available and geo-redundant network connectivity: virtual machines and persistent storage in various forms, glued together with an extremely efficient, intelligent and almost invincible routing layer that abstracts developers away from the complexities of all of this and exposes everything through a set of familiar, managed APIs.

As developers, when we target Windows Azure, we’re targeting a rich set of capabilities that were probably not part of your application’s original design specifications. Triple-redundant and geo-redundant storage, anyone? An elastic load balancer? While it’s true that many of the more basic building blocks of Windows Azure (including storage and load balancing, even the virtual machine capabilities) are available to you, often without any requirement to modify your application, it is useful to understand that there is a rich ecosystem of additional services which you can and should leverage, not only to offer enhanced functionality within your application, but also to help the cloud platform keep your app running: to toughen it against risk.

Myth #3: Any decent cloud platform worth its salt will deal with failure for me.

It is incredibly rewarding, professionally and personally, to watch an application you designed support its users successfully and do its job right every day, for years. Let me just say that I think it can be even more rewarding (not just professionally or personally, but to your business stakeholders) to watch it do that no matter what is happening on the underlying platform. Imagine a platform on which your application can mask failures – failures that would often be catastrophic in an on-premise data centre – completely from your end-users, by intelligently deploying pre-programmed mitigations against forecast risks. Wouldn’t it be great if you had a guide to help you figure out what those are?

Windows Azure mitigates many of the base-level risks for us (a disk failure, a virtual machine failure, a switch hardware failure, etc.) essentially for free. But there are other things we need to think about, too. For example, many of our risks will be mitigated within the boundaries of a single data centre. But what if that data centre were to suffer a catastrophic failure? An act of God? Or what if we simply wanted to bypass it because routing connectivity to it was slow?

These are things the cloud platform cannot do for us unless we tell it how. Each mitigation has some kind of implication, whether financial or in terms of the functionality you are able to provide during the failure condition. Yet many customers I have worked with have a somewhat romantic notion that “the cloud should just do it all for me!”

It’s all about risk.

In my experience of working with all of these customers, what these questions boil down to, fundamentally, is a conversation about risk: specifically, understanding what the risks are, which are mitigated for you, and which you have to think about and deal with yourself. Once we accept that we as developers have to share this responsibility in order to cash in on the ‘promise’ of an invincible cloud platform, that’s when the platform magic actually happens.

So, here’s the first golden tip for you:

Hardware fails. Networks fail. Memory fails. Your code needs to be hardened against as many of these risks as possible, and you need a mitigation strategy for all of them. Windows Azure makes it very easy for you to detect many of these risks (even the ‘catastrophic’ ones) and prevent them from bringing your business to its knees.

So, design for failure.

Identifying the risks and understanding & categorising the effects

“Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed). Potential losses themselves may also be called risks.” [1]

This definition hints at the necessity both to understand that there is the potential for risk in any situation, and to recognise that the outcome of any given situation may be influenced in some way (otherwise, it is a certainty), so as to lessen the effect or prevent it from being noticeable. In this section, we will identify what the risks are and what the effect of each risk manifesting itself is.

Integral to your deployment on Windows Azure should be an understanding of:

  • What the risks are;
  • What steps can be taken to mitigate the effect of the risk surfacing;
  • What category the effect falls within.

For example, when you buy a car, you know that there is a risk that it might get damaged, either by you (racing around again!) or by another road user. Assuming you’re a law-abiding citizen, you’ll buy insurance to mitigate the risk of damage to your car, or somebody else’s. But within your policy document will be a list of expectations around what happens when your car is damaged: you’ll be told how long your car will be unavailable, whether you’ll have the use of a rental car, and so on.

It is the same for deployments on Windows Azure, except this time we’re not talking about the effects on your car, but rather the effect on your business caused by a risk actually becoming a reality (or ‘surfacing’).

I’ve often found that the effects of the risk (the effect the risk has on your app once it has manifested) can generally be categorised according to the following scale (in order of descending severity):

  • Catastrophic: there is nothing that can be done to mitigate the effect to normal operation;
  • Fault: with careful planning and development work, a suitable mitigation can be automatically implemented to prevent the effect from surfacing;
  • Avoidable: the effect can be avoided with a trivial amount of effort.

In this discussion, I’m assuming that the primary risk we’re attempting to mitigate is downtime caused by loss of connectivity to the data centre. In my example deployment, we’re talking about a simple web application with two web roles, two worker roles and a dependency on a SQL Azure database. If we dig further, our full risk register may look similar to the following:

 
Risk | Manifested effect | Severity
---- | ----------------- | --------
Instance taken offline for patching/maintenance, where only one instance of that role is deployed | Your app goes offline. | Catastrophic
Instance taken offline for patching/maintenance, where two or more instances of the role are deployed | Potential for increased load on remaining instances; but otherwise no disruption to service. | Avoidable
Instance (in a multi-instance deployment) goes offline due to failure of the instance itself | As above. | Avoidable
Connectivity failure to a dependent resource in the data centre | The resource is unavailable for the duration of the disruption to connectivity. | Fault
Failure of the dependent resource | Potential for data loss. The resource is unavailable until it is recovered, either manually or automatically. | Fault
Total loss of inbound and/or outbound connectivity to the data centre | Your app goes offline. | Catastrophic
Catastrophic loss of the data centre | Your app goes offline. | Catastrophic

Only once both your technical team and your business leaders are aware of the risks, their manifested effects and what can technically be achieved to mitigate them, can a discussion take place about the extent to which you wish to implement these measures. Try to avoid shooting for 100% availability across 100% of your dependent resources, and remember that different parts of an app can often tolerate different failures differently! Understand that risks have a field of impact, too. For example, a catastrophic data centre failure would affect the whole of your app, whereas the failure of a database would impact only those sections which require connectivity to it.

Crucial to this is having an open and honest conversation with the business, and with your customers, about what level of risk is acceptable to them; that, in turn, will determine how much effort goes into your risk avoidance strategy.

On Windows Azure, one significant advantage is that the cost of maintaining a highly available, highly scalable solution that is both well maintained and secure is generally orders of magnitude lower than the equivalent private, on-premise setup. The last thing you’d want to do is erode that saving by planning and deploying avoidance techniques that are completely over the top: so be reasonable in your assessment of acceptable risk.

This exercise may seem academic and fairly obvious, but it is often overlooked for exactly that reason. Without it, though, it is difficult to fully appreciate which steps are necessary, and to properly brief your UX designers on the types of scenario that could naturally occur and that you may well need to surface in your app to keep your users informed.

We’ve covered risk, now let’s turn our attention to what we need to do should the worst happen: a risk has manifested and the effect has begun.

Recovery

It’s a common misconception that disaster recovery and risk mitigation are the same things.

‘Disaster recovery’ refers to the things you do (either automatically or as part of a manual activity) to restore normal operation; something exceptional occurred, you suffered a catastrophic event, and you need to get back to ‘business as usual’ as fast as possible while minimising loss. Risk mitigation, on the other hand, is about the things you can do before a condition occurs that triggers your failure scenario.

To do this effectively, you need first to understand which risk has surfaced, what your recovery options are for that particular risk, and therefore what your recovery strategy and objectives actually are.

Let’s put this into context:

Your app went offline due to the failure of a database connection. The effect was that users of your app could no longer publish new content. There are potentially two recovery options available to you here: you could write new content to a separate store temporarily and automatically update the failed database when it becomes available, or you could simply wait until the failed database is available again. Your strategy for recovery from this particular risk is therefore directly dependent on what your business expects you to be able to achieve in this scenario.
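
As a rough sketch of the first option – Content, SaveToSqlAzure and SaveToFallbackStore are hypothetical placeholders of my own, not platform APIs – the write path might look something like this:

// Requires: using System.Data.SqlClient;
public void PublishContent(Content item)
{
    try
    {
        SaveToSqlAzure(item); // hypothetical helper: write to the primary store
    }
    catch (SqlException)
    {
        // Primary database unavailable: park the write somewhere durable
        // (e.g. a Windows Azure storage queue or table). A background task
        // replays the deferred items once the database is reachable again.
        SaveToFallbackStore(item); // hypothetical helper
    }
}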

Putting it all together…

I’ve introduced the notion that risks are no less likely to occur on the Windows Azure platform than on-premise, and we know that Azure is capable of recovering from most of these risks without any input from you. What we’re trying to look at here is what steps you can take as developers to stop any non-catastrophic effects from impacting your app, causing a ‘failure scenario’. If you embrace the concept of expecting failure, it becomes quite easy to see what you must do in order to maintain normal operation during a failure situation. In general, remember you can:

  • Use alternative persistent storage should a database become unavailable, and re-sync when it returns;
  • Continue retrying a failed connection until it succeeds, or ‘defer failure’ until after a certain number of retries (see the sketch below).
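
To make that second point concrete, here’s a minimal retry sketch. This is my own illustration, not a Windows Azure API; production code would also distinguish transient errors from permanent ones rather than catching everything:

// Requires: using System; using System.Diagnostics; using System.Threading;
static T ExecuteWithRetry<T>(Func<T> operation, int maxAttempts, int delayMs)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return operation(); // success: hand the result straight back
        }
        catch (Exception ex)
        {
            if (attempt >= maxAttempts)
                throw; // 'defer failure': give up and let the caller decide

            Trace.TraceWarning("Attempt " + attempt + " failed: " + ex.Message);
            Thread.Sleep(delayMs * attempt); // simple linear back-off
        }
    }
}

You’d call it as ExecuteWithRetry(() => QueryDatabase(), 5, 1000), where QueryDatabase is whatever fragile call you’re protecting.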

When designing for high availability, it is a good idea to keep these questions in mind:

  • Prevention: what can you do to stop the risks you’ve anticipated from occurring?
  • Detection: how will you detect that your app is no longer in its ‘normal state’?
  • Recovery: what can your app do to either temporarily mask the failure condition and maintain the appearance that everything is OK, or what steps must take place to get things back to normal operation?

Do not rely on the availability guarantee: it isn’t enough (a 100% up-time guarantee wouldn’t be, either), and remember, availability is only one part of the equation. If we go back to the car insurance metaphor, you don’t just buy car insurance to mitigate the risk of injury or damage to yourself or to others: you also drive safely and obey traffic rules. It’s really the adoption of a philosophy, and the series of actions you take, that matters.

In summary, Windows Azure is and will remain a highly available, stable and reliable cloud platform and it will continue to be enhanced and improved over time. As developers though, we have to appreciate that failures of course can, and do, occur. Every object is subject to entropy, and hard disks, network cables and switches are no exception. Understanding that there are parts of the availability equation that you can – and should – take responsibility for is essential to a healthy cloud deployment and arguably, even if your app is deployed on-premise, you might want to consider adopting ‘cloud risk principles’, too!

My point, ultimately, is that risks aren’t a problem: not knowing what they are, what the cloud platform is responsible for mitigating, and how you can efficiently deploy platform services to assist you, is.

Microsoft’s Premier Support for Developers team can provide your developers with specific technical and process guidance to help you mitigate risks to your business as you move to Windows Azure, and to shorten your time-to-market.

Remote Debugging a Windows 8 RT app on Surface with BT Infinity & HomeHub 3.0

I ran across an interesting problem today, and I thought I’d blog about it as it may save you some time if you encounter a similar issue in the future.

Scenario: you’ve deployed the Visual Studio 2012 Update 1 Remote Debugging Tools to your Surface RT device, and you’re running Visual Studio 2012 Update 1 on your desktop PC (x64, in my case). When you attempt to remote debug on the Surface, Visual Studio 2012 reports that it cannot connect to MSVSMON.exe on the remote device.

Background: for testing purposes, I disabled the firewalls on both the Surface and the desktop PC, and I tried configuring MSVSMON.exe to work with and without authentication on port 4016. Visual Studio 2012 Update 1 could never discover the Surface, either, unless I ran MSVSMON.exe as a service on the Surface. As a service, my developer machine could discover the Surface, but even then it still couldn’t connect.

For reference, the developer machine was connected via Ethernet, and the Surface (obviously) via WiFi to the same router.

Pinging the Surface from the desktop failed, although it did resolve the IP address. Pinging the desktop from the Surface always worked, returning an IPv4 address.

After trying many things for several hours, I tried changing my router because I believed what I was seeing was symptomatic of a networking issue. This immediately cured the problem.

It would seem, at least in my case, that my BT HomeHub 3.0 was preventing connections to MSVSMON.exe between the wired LAN and WiFi. I don’t know why – I can only assume there is perhaps a firmware issue on the HomeHub 3.0.

I can’t verify it with another HomeHub as I don’t have access to a replacement router; however, swapping it out for a brand new Netgear DGND3700 did the trick nicely. If you have a HomeHub 3.0 and are on BT Infinity, please let me know if you can reproduce this issue.

Brand vs. Identity: who are we?

I did something apparently controversial today. I sent a mail out to our team asking if I could spend some time with each of them over the next month to learn how they sell our service to our customers; how they describe it to others and how they evangelise it. And that was my mistake. I framed it wrong.

You see, our service is many things to many people. It has to be – it’s part of the value we bring to the table. For example, consultants are traditionally either technical or non-technical. It’s rare to find someone who is a good blend of both, at the right times. Rarer still is to find a whole team of them together and arm them with some of the best technologies and the right kind of ethos.

I was actually a little shocked at some of the responses I got back. Most were positive, but some were clearly designed to probe deeper: why do I want to know this? Is it aligned to other activities? How does it relate to those – does it replace them? The tone is defensive, and it saddens me (though perhaps I have misunderstood) that people may not see the value in defining something more clearly – if not to the outside world, then at least to ourselves.

We all do fantastic work, and I want to make sure that each of us shares the best bits of what we do, and that we capture those. I suspect we’ll see many common themes, but I hope we’ll also be pleasantly surprised at some of the ‘secret ingenuity’ that I’m sure goes on.

Our identity

Whenever we are engaged in conversations, internally or externally, about what it is we do, we invariably start off by listening. We try to understand what is important to our audience and figure out whether we have something to offer that meets those needs. Sounds sensible, right? And I think it absolutely is.

What I’m interested in, though, is how those folks who are not on my team – folks who we task with making initial introductions – how do those guys view us? What do they think we do? How much do they understand? Are they good evangelists for our service?

And that, right there, is our identity. Not what we want it to be – although, if we are lucky, it might be! Ultimately, I think another team member summed it up beautifully:

Your brand is what you want. Your identity is determined by others.

And this ties in neatly with my belief that there is actually a stark difference between ‘brand’ and ‘identity’ and the two are not to be confused with ‘brand identity’ which is an altogether separate beast invented by marketing people.

So we need to work on our identity – and if it is given to us by others, I’d like to influence them in the most positive way possible to align that identity with our brand.

And that’s why I believe a conversation, with everyone on the team, is absolutely critical to defining what that is. Every one of us will have a perspective and a view on what it is we do and what the magic of our service actually is. And although I’m driving this activity, I’m certainly not arrogant enough to believe I can answer such a complex question myself.

Diversity

Part of the problem is our diversity and flexibility to work across the key stakeholders both vertically and horizontally.

But there’s strength in our diversity: we cover a range of technologies, and we have a broad range of expertise, each of us bringing our own unique experiences from our careers, hobbies and interests into the mix. There’s also strength in the team – nobody knows everything, but between us we very likely have some of the best subject matter experts in the field, and importantly, we network with other areas of the business to find the answers when we don’t have them.

So the question I am asking is: what are our brand and identity, and more importantly, how do we communicate the value of something which is so many things to so many people? It might be that the output of this exercise is actually the discussion itself – the conversation, the thoughts and the emotion that go into it will, I think, be valuable in and of itself. But it might also be true that the results can be fed back into all these existing activities to enhance and complement them – double win!

First impressions

We obviously do a good job of this today. Our business is growing. Our team is strong and getting stronger. But I wonder how much of this is ‘by default’ and how much ‘by design’? We rely a lot on a member of our team to ‘get in through the door’ and then start making headway to scope out requirements. I think that’s absolutely fine, and of course it is a requirement when discussions start to progress. But is it enough to expect someone to do a decent job of promoting what we do, and of recognising potential, simply by telling them that they should ‘call us in at the first sign’? No, I don’t think it is.

And here’s why I think that.

At that moment – at that very first introduction to the notion of working with us – a potential customer is forming an opinion. They’re forming a first impression. And most of that first impression is going to come from someone who doesn’t, in all probability, understand our diverse and complex service well enough to communicate its benefits clearly and succinctly.

So we lose momentum at that very point. And I want everyone to know that we work in a great team; we have some fantastic people, and I know, having worked on this team, that we deliver brilliant and lasting change for the customers we work with. I am excited about that. It drives me to get out of bed every day, because I love my job.

And I just don’t think that other resources ‘get that’ – and maybe they can’t. Maybe they never will. And if that’s the case, I still don’t think there’s any harm in doing what I suggested, which is taking a good look at how we ourselves describe our service. What is our key message?

I know our team has done a lot of great work defining cornerstones and pillars and guidelines and processes and whatnot. There’s even, I’m sure, some brilliant marketing collateral. But there’s no substitute for watching someone enthusiastic describe something to you, because you catch a bit of that. And the best introductions happen when a customer sees that enthusiasm, understands what might be doable and is inspired to come and talk to you about it.

Because in my mind, that’s a win – and that’s where the real value conversation starts.

Where next?

‘Sales’ and ‘pre-sales’ are one obvious application for a better, cleaner and neater description of what it is we do. Of course they are – clear communication is key. But they are not the only reason for having such a discussion, and I shudder at the thought that it becomes labelled as a sales-related activity.

It’s actually a business development activity, I think. And that’s where I went wrong in my email – or at least, one of the places 🙂

Some of my team have – rightly – focused on the output, but unfortunately did so immediately, asking: what are the deliverables? Where do they apply? What is the delivery mechanism? These are great questions – but for me, at this point, the value of this exercise is in the conversation and the understanding. This is arguably such a fundamental task that to focus on the output would skew the discussion; the act of observation altering the outcome.

If we can’t talk to each other – or explain to a colleague in passing what it is we do – then perhaps we have a little more work to do!

Unattended installation of SQL Server 2008 R2 Express on an Azure role

In certain circumstances, you might find yourself needing to install SQL Server Express on one of your Windows Azure worker roles. Exercise caution here though, folks: this is not a supported design pattern (remember, a restart of your role instance will cause all data to be lost).

It was however exactly what I needed for my scenario and I thought I’d share it in case it serves a purpose for you.

There are a couple of approaches you can take, of course, one of which is ‘startup tasks’ specified in the service definition file. However, these offered me limited configuration options, because I needed to customise some of the command line arguments being passed to the installer based on values from the Role Environment itself.

The trickiest part was figuring out the correct command line parameters for SQL Server 2008 R2 Express – which, to be honest, wasn’t that fiddly at all. Here are the parameters you’ll need:

/Q /ACTION=Install /FEATURES=SQLEngine,Tools /INSTANCENAME=YourInstanceName /HIDECONSOLE /NPENABLED=1 /TCPENABLED=1 /SQLSVCACCOUNT=".\YourServiceAccount" /SQLSVCPASSWORD="YourServicePassword" /SQLSYSADMINACCOUNTS=".\YourAdminAccount" /IACCEPTSQLSERVERLICENSETERMS /INSTALLSQLDATADIR="FullyQualifiedPathToFolder"

In the parameters above, we’re specifying a silent install with the /Q parameter, installing the SQL database engine and management tools (basic) with the /FEATURES parameter, setting the instance name, enabling named pipes and TCP, setting the service accounts and specifying the SQL data directory.

The next part, then, is to actually build this as a command line and execute it in the cloud environment. How do we do this? Simples: we use System.Diagnostics to create a new Process() object and pass in a ProcessStartInfo object as a parameter:

var taskInfo = new ProcessStartInfo
{
    FileName = file,
    Arguments = args,
    Verb = "runas",
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    CreateNoWindow = false
};

// Create the process (we start it further below)
_process = new Process() { StartInfo = taskInfo, EnableRaisingEvents = true };

For good measure, we’ll also redirect the standard and error output streams from the process so that we can capture those out to our log files:

// Log standard output, and send error output to the error trace
DataReceivedEventHandler outputHandler = (s, e) => Trace.TraceInformation(e.Data);
DataReceivedEventHandler errorHandler = (s, e) => Trace.TraceError(e.Data);

// Attach handlers
_process.ErrorDataReceived += errorHandler;
_process.OutputDataReceived += outputHandler;

Then, we’ll execute our task and ask the role to wait for it to complete before continuing with startup:

// Start the process and begin async reads of both output streams
_process.Start();
_process.BeginErrorReadLine();
_process.BeginOutputReadLine();

// Wait for the task to complete before continuing...
_process.WaitForExit();

Stick all of that into a method that you can re-use, and don’t forget to add parameters called file and args (both strings) that contain the path to the SQL Server Express installation executable and the command line arguments you want to pass in.

How to build your command line argument

If you’re wondering why I didn’t hardcode my command line options, it’s because up in Azure, the standard builds for web and worker roles don’t come preloaded with any administrative accounts – you have to specify those at design time. I actually ‘borrow’ the username of the Remote Desktop user (which is provisioned as an administrator for you when you enable Remote Desktop).

I ended up with this quick-and-dirty snippet:

string file = Path.Combine(UnpackPath, "SQLEXPRWT_x64_ENU.exe");
string args = string.Format(
    "/Q /ACTION=Install /FEATURES=SQLEngine,Tools /INSTANCENAME={2} /HIDECONSOLE " +
    "/NPENABLED=1 /TCPENABLED=1 /SQLSVCACCOUNT=\".\\{0}\" /SQLSVCPASSWORD=\"{1}\" " +
    "/SQLSYSADMINACCOUNTS=\".\\{0}\" /IACCEPTSQLSERVERLICENSETERMS /INSTALLSQLDATADIR=\"{3}\"",
    username, password, instanceName, dataDir);

So, ultimately, you’ll want to wrap all of this up into your role’s OnStart() method. Include a check to see whether SQL Express is already installed, too; a rough sketch of one way to do that follows.
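
Assuming the standard SQL Server instance-list registry key (an assumption worth verifying against your target OS image), that check might look something like this:

// Requires: using Microsoft.Win32;
// Returns true if an instance with the given name is already registered.
// Assumption: this key is where SQL Server 2008 R2 records its instances.
private static bool IsSqlInstanceInstalled(string instanceName)
{
    using (var key = Registry.LocalMachine.OpenSubKey(
        @"SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"))
    {
        return key != null && key.GetValue(instanceName) != null;
    }
}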

And, if you’re stuck trying to debug what’s going on with your otherwise silent installation, SQL Server Setup Logs are your friend. You’ll find them by connecting to your role via Remote Desktop and opening the following path:

%programfiles%\Microsoft SQL Server\100\Setup Bootstrap\Log\

Enjoy!

2011 in review

The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog.

I didn’t quite get around to blogging as much as I’d have liked during 2011, but 2012 is a new year and I’ll have a lot more things to blog about, including my new job and the discoveries we make. So, go on – subscribe; you know you want to!

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 22,000 times in 2011. If it were a concert at Sydney Opera House, it would take about 8 sold-out performances for that many people to see it.

Click here to see the complete report.

DDD Southwest 3 – Review of my presentation

Slides everywhere, but not a coherent flow in sight! 🙂

Way back on 11th June 2011, I was lucky enough to be invited to present my session – “Getting Started in .NET” – at the DDD Southwest 3 conference. I remember thinking, “gosh, I’d really love to speak at one of these events but I missed the deadline for submitting sessions”. So, I pinged an email over to Guy Smith-Ferrier and asked him if they needed any help, thinking maybe they’d want room monitors or other volunteers to ferry folks around. As it turned out, Guy actually still had two slots to fill on the ‘Getting Started’ track. And this is how my presentation was born…

Nervous? Me?

It was to be the first training session I’d ever given on a topic such as this, so I was both very excited and a little nervous (geeks can be so nit-picky!).

Fortunately though, the bunch of folks that attended my session (some 30-odd I think) were all very friendly and eager to listen – I couldn’t have asked for a better group!

In the top 3? No way!

In fact, I think they were so nice they voted me into the Top 3 “Speakers by Knowledge of Subject” and “Speakers by Presentation Skills” – accolades that I will soon be transferring onto a tattoo on my forehead, such is the level of my humility (and astonishment!) at appearing here with these two other fantastic speakers. Maybe it had something to do with the fact I was lobbing ‘Telerik Ninjas’ – stress toys – at anyone who asked a question (as a reward, folks – not as punishment)…

By Knowledge of Subject

  1. Steve Sanderson and Getting Started in ASP.NET MVC – 8.88 out of 10
  2. Richard Campbell and Why Web Performance Matters – 8.85 out of 10
  3. Richard Parker (that’s me!) and Getting Started in The .NET Framework – 8.56 out of 10

By Presentation Skills

  1. Richard Campbell – 8.73 out of 10
  2. Richard Parker – 8.33 out of 10
  3. Steve Sanderson – 8.30 out of 10

Looking for the slides?

If you attended and are looking for a copy of the presentation, you can download it below. Well, it’s actually a PDF – handier if you want to stick it on your Kindle, for example.

Getting started with .NET (PDF, 2.4MB)

Find out when your next DDD event is

If you’ve never been to a DDD event, then stop whatever it is you’re doing right now (well, after you’ve finished reading this post, of course) and go figure out when the next one is. They’re all over the place now – even Australia! It won’t cost you a penny as the events are all supported by sponsorship, so you’ve really got no excuse not to go. The speakers are excellent (yes, even at the events I don’t speak at) and you’ll get a chance to mingle with some very friendly and amazing folks.

I’ve attended these events in the past as a delegate and have always had an absolutely brilliant time. This time around, I was fortunate enough to attend as a speaker; an experience I enjoyed thoroughly and would love to repeat (if they, and you, Dear Reader, will have me again)…

The .NET community, put simply, rocks. You guys are awesome!

Five for 5: Five cool things you can do with the HTML 5 video element

The ability to embed video directly into an HTML5 page is pretty awesome. As a first-class citizen in an HTML5-enabled browser, video is no longer an ‘outsider’, so you can actually do some relatively cool things without any particular effort. Below, I examine five common things that you’d previously have had to solve with a reasonable degree of complexity.

#1 Specifying multiple file formats

Let’s take a sample player:

<video width="320" height="240" controls="controls">
<source src="video.ogg" type="video/ogg" />
Sorry, your browser does not support the video tag.
</video>

This works great, but only if the client can play Ogg video. If you have multiple formats of the same media, you can offer them up to the HTML5 video player as well and it will pick the first one which the client can play. It’s a little bit of magic, really:

<video width="320" height="240" controls="controls">
<source src="video.ogg" type="video/ogg" media="screen" />
<source src="video.mp4" type="video/mp4" media="screen" />
<source src="video.webm" type="video/webm" media="screen" />
Sorry, your browser does not support the video tag.
</video>

Pretty neat, don’t you think? Now, rather than giving up when the .ogg video cannot be played, the player will check whether the .mp4 or .webm video can be played, too.

#2 Specifying differently encoded media for different devices

So we’ve got our multiple-format player. But can we extend this further so that we can choose different video based on the capabilities of the particular client making the request? The answer is Yes.

Let’s assume we have ‘movie-hi.ogg’ and ‘movie-lo.ogg’ – both are essentially the same video but one is optimised for small screen and the other is optimised for a full HD experience. We can have the HTML5 video player figure out which one to load using source media attributes:

<video width="320" height="240" controls="controls">
<source src="movie-hi.ogg" type="video/ogg" media="screen and (min-width:720px)" />
<source src="movie-lo.ogg" type="video/ogg" media="screen and (min-width:320px)" />
Sorry, your browser does not support the video tag.
</video>

Let’s take a closer look at the media attribute. The screen keyword tells the HTML5 video player we’re targeting computer screens (rather than, say, braille, which would target braille feedback devices). Next, we’ve got the boolean operator and, which connects the statement on the left with the statement on the right, so that if both conditions are true the statement is satisfied and that will be the source element used. (You could replace and with not, or with a comma – which signifies the ‘or’ operator.) Finally, we have the min-width selector, which could just as easily be max-width or min-height, for example. There’s a big list of all the currently supported filter values over at w3schools. Note that sources are evaluated in order, which is why the hi version comes first in the example above.
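
For instance – an illustrative snippet of my own, not one of the samples above – a comma-separated media query matches if either side matches:

<video width="320" height="240" controls="controls">
<source src="movie-lo.ogg" type="video/ogg" media="screen and (max-width:719px), screen and (max-height:480px)" />
Sorry, your browser does not support the video tag.
</video>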

#3 Making pages with video load faster

The HTML5 video tag will attempt to pre-load the contents of the video element on page load by default. You can change this behaviour so that the video is loaded on demand, by adding the preload attribute with the value none to the video element:

<video width="320" height="240" controls="controls" preload="none">
<source src="movie-lo.ogg" type="video/ogg" media="screen and (min-width:320px)" />
Sorry, your browser does not support the video tag.
</video>

#4 Using an image as a placeholder for video while it is loading

Using a single frame from somewhere within the video can be a nice way to give a user a hint of what the video contains. It’s also a great way to fill the video container with something relevant while the video is still loading in the background. Fortunately, this is nice and easy to do. Just add the poster attribute to the video element:

<video width="320" height="240" controls="controls" poster="image.jpg">
<source src="movie-lo.ogg" type="video/ogg" media="screen and (min-width:320px)" />
Your browser does not support the video tag.
</video>

#5 Execute custom JavaScript code triggered by video player events

As a first-class citizen in HTML5, the video element has a full-featured JavaScript event model that you can hook into in order to execute your own custom code. Here is a selection of my favourites:

  • oncanplay – fires whenever the media can start to play
  • onended – fires when the media has reached the end
  • onprogress – fires whenever the browser requests media from the server

Using these events, it is possible to hook into the video element to create a very rich, interactive UI that extends beyond the video frame itself. This very extensible model gives you a lot of freedom to create! For example:
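
As a quick illustration – a minimal sketch, with an element id and handlers made up for the example – you could wire a couple of these events up like so:

<video id="myVideo" width="320" height="240" controls="controls">
<source src="video.mp4" type="video/mp4" />
Sorry, your browser does not support the video tag.
</video>

<script>
var video = document.getElementById('myVideo');

// Fires once enough of the media has loaded for playback to begin.
video.oncanplay = function () {
    console.log('Video is ready to play');
};

// Fires when playback reaches the end of the media.
video.onended = function () {
    console.log('Thanks for watching!');
};
</script>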