Debugging Azure Web Roles with Custom HTTP Headers and Self-Registering HttpModules


For the past year and a half, I’ve been helping customers develop web applications targeting Windows Azure PaaS. Customers typically ask lots of similar questions, usually because they’re faced with similar challenges (there really isn’t such a thing as a bad question). I’ve recently had to answer this particular question a few times in succession, so I figured that makes it a good candidate for a blog post! As always, I’d love to get your feedback, and if you find this tip useful I’ll try to share some more common scenarios soon.

The scenario I want to focus on today is nice and quick. It’s a reasonably common one: you’ve deployed a web application (let’s say, a Web API project) to Azure PaaS and have more than a handful of instances serving up requests.

Sometimes it’s tricky to determine which role instance served up your request

When you’re developing and testing, you quite often need to locate the particular node which issued an HTTP response to the client.

When the total number of instances serving your application is low, cycling through one or two instances of a web role (and connecting to them via RDP) isn’t a particular problem. But as you add instances, you typically don’t know which server was responsible for servicing a given request, so you have more servers to check or ‘hunt through’. This can make it more difficult to jump quickly to the root of the problem for further diagnosis.

Why not add a custom HTTP header?

In a nutshell, one way to make debugging HTTP calls to an API easier is to have the server inject a custom HTTP header into the response containing the role instance ID. A switch in cloud configuration (*.cscfg) can be added which lets you turn this feature on or off, so you’re not always emitting it. The helper itself (as you’ll see below) is very lightweight, and you can easily modify it to inject additional headers or detail into the response. Also, emitting the role instance index (i.e. 0, 1, 2, 3 …) rather than the fully qualified server name is preferable for security reasons: it doesn’t give too much away to assist a would-be attacker.

How’s it done?

It’s rather simple and quick, really. You can borrow the code below to help you out, but do remember to check that it meets your requirements and test it thoroughly before chucking it into production! We start by creating an HTTP module in the usual way:

using System;
using System.Web;
using Microsoft.WindowsAzure.ServiceRuntime;

public class CustomHeaderModule : IHttpModule
{
    public static void Initialize()
    {
        HttpApplication.RegisterModule(typeof(CustomHeaderModule));
    }

    public void Init(HttpApplication context)
    {
        ConfigureModule(context);
    }

    private void ConfigureModule(HttpApplication context)
    {
        // Check we're running within the RoleEnvironment and that our
        // configuration setting ("EnableCustomHttpDebugHeaders") is set to
        // true. This is our "switch", effectively. Note that the setting
        // value comes back as a string, so we parse it rather than using
        // it directly in the condition.
        bool enabled;
        if (RoleEnvironment.IsAvailable
            && bool.TryParse(RoleEnvironment.GetConfigurationSettingValue("EnableCustomHttpDebugHeaders"), out enabled)
            && enabled)
        {
            context.BeginRequest += ContextOnBeginRequest;
        }
    }

    private void ContextOnBeginRequest(object sender, EventArgs eventArgs)
    {
        var application = (HttpApplication)sender;
        var response = application.Context.Response;

        // Inject custom header(s) into the response: find this instance's
        // position in its role's Instances collection and emit the index
        // rather than the server name.
        var roleName = RoleEnvironment.CurrentRoleInstance.Role.Name;
        var index = RoleEnvironment.Roles[roleName].Instances.IndexOf(RoleEnvironment.CurrentRoleInstance);
        response.AppendHeader("X-Diag-Instance", index.ToString());
    }

    public void Dispose()
    {
    }
}

What we’ve got here is essentially a very simple module which injects the custom header, “X-Diag-Instance”, into the server’s response. The value of the header is the index of the current instance within the Instances collection property of its Role.
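For reference, the “EnableCustomHttpDebugHeaders” switch is just a standard cloud configuration setting. A minimal sketch of declaring and setting it (adapt the role and file layout to your own service) looks like this:

<!-- In ServiceDefinition.csdef, inside the web role's element -->
<ConfigurationSettings>
  <Setting name="EnableCustomHttpDebugHeaders" />
</ConfigurationSettings>

<!-- In ServiceConfiguration.cscfg, inside the matching Role element -->
<ConfigurationSettings>
  <Setting name="EnableCustomHttpDebugHeaders" value="true" />
</ConfigurationSettings>

Set the value to false when you no longer want the header emitted.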

Deploying the module

Then, we want to add a little magic to have the module self-register at runtime (sure, you can put this in config if you really want to; a sketch of that follows below). This is great, because you can put the module into a shared library and simply have it register itself into the pipeline automatically. Of course, you could also substitute the config switch for a check on whether the solution is built in debug or release mode (customise it to fit your needs).
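If you do prefer the config route, a minimal sketch of the classic registration for the IIS 7+ integrated pipeline would be something like the following (the MyApplication namespace matches the snippet further below):

<system.webServer>
  <modules>
    <add name="CustomHeaderModule" type="MyApplication.CustomHeaderModule" />
  </modules>
</system.webServer>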

To do the self-registration, we rely on a little-known but extremely useful ASP.NET 4 extensibility feature: the PreApplicationStartMethod attribute. Decorating the assembly with this attribute tells ASP.NET to call your start method before the application starts, which gives you a chance to register the module:

using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(MyApplication.PreApplicationStartCode), "Start")]

namespace MyApplication
{
    public class PreApplicationStartCode
    {
        public static void Start()
        {
            // Runs before the application starts, so the module joins the
            // pipeline without any web.config entry.
            DynamicModuleUtility.RegisterModule(typeof(CustomHeaderModule));
        }
    }

    public class CustomHeaderModule : IHttpModule
    {
        // ....
    }
}
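Once this is deployed with the switch enabled, a quick way to confirm which instance served you is a tiny console client along these lines (the endpoint URL is purely a hypothetical placeholder):

using System;
using System.Net;

class HeaderCheck
{
    static void Main()
    {
        // Hypothetical cloud service endpoint - substitute your own URL.
        var request = (HttpWebRequest)WebRequest.Create("http://myapp.cloudapp.net/api/values");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // Prints the zero-based index of the role instance that handled the request.
            Console.WriteLine("Served by instance: " + response.Headers["X-Diag-Instance"]);
        }
    }
}

Alternatively, the network tab of your browser’s developer tools will show the header against each response.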

This approach also works well for any custom headers you want to inject into the response, and a great use case for this would be to emit custom data you want to collect as part of a web performance or load test.

I hope you find this little tip and the code snippet useful, and thanks to @robgarfoot for the pointer to the super useful self-registration extensibility feature!

How to speed up your ASP.NET web application


If your web site is slow, it’s annoying to your customers. It’s annoying because nobody likes to wait: we wait all day in the physical world, in queues at the shops, at the restaurant, and even on the telephone. We’re always looking for ‘faster’, because in our web-consumer minds, “faster equals better”. In my personal experience as a software developer, most users share at least one principle:

Better responsiveness equals a better product
– A. Customer

If your application is simple and responsive, people will use it. If it is clunky and slow to load, people are forced to wait. Think of your application (it doesn’t matter if it’s a web or a desktop application) as a racing car. As the manufacturer of that car, you want customers to come and test drive it, in the hope they’ll fall in love with it and want to buy it. If that test drive is a good experience, they’ll hopefully part with some of their hard-earned cash – and bingo, you’ve done what you needed to do: made the sale.

The same principle applies to software: if you deliver a fast, responsive application with a quick user interface, your users are more likely to think you’ve built a better product (whether that’s technically true or not), because to Mr and Mrs User, a slow application is a bad one.

You can optimise your web site in just a few steps

As an ASP.NET developer, here’s a look (or a reminder) at some of the things you can try before deciding it’s time to dig under the hood and start making more fundamental changes to your application:

Disable debugging in your web.config

When you release an application in debug mode, ASP.NET forces certain files to be sent to the client with each request instead of allowing the browser to cache them. Most people forget to switch debug mode off when they release. This creates overhead for your server and a longer wait for the client. Debug mode also causes other changes in your web application: think of it as a bloated way to release, because the output has to include data and various hooks that let you debug the application but aren’t necessary to run it:

<compilation debug="false"/>

You’ll find the above line in your web.config file.
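For context, here’s roughly where the attribute sits (the targetFramework value shown is illustrative; yours may differ):

<configuration>
  <system.web>
    <!-- Ensure debug is off for production deployments -->
    <compilation debug="false" targetFramework="4.0" />
  </system.web>
</configuration>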

Enable IIS HTTP Compression

HTTP compression (sometimes called response compression) is a feature of Internet Information Services 6 and above that compresses response content before transmission to the client; the browser then decompresses it on arrival. Most modern browsers support this, and enabling it requires no modification to your web site at all. Do bear in mind that compression forces your web server to work harder, because it has to compress data before sending it. This creates a small spike in CPU usage, but for low- to medium-traffic web sites that really need a performance boost, the extra CPU usage will more than likely be absorbed just fine.

In Internet Information Services 6:

  1. Launch IIS Manager
  2. Right-click the “Web Sites” node
  3. Click “Properties”
  4. Select the “Service” tab
  5. Tick “Compress application files” and “Compress static files”. Be sure to specify a temporary directory with sufficient free space, and consider setting a maximum size for the temporary directory.
  6. Click “Apply”
  7. Click “OK”

HTTP compression isn’t for everybody – be sure to weigh the pros and cons for your particular environment.
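If you’re on IIS 7 or above, the equivalent switch lives in configuration rather than the Properties dialog. A minimal sketch for web.config (assuming the static and dynamic compression modules are installed on the server):

<configuration>
  <system.webServer>
    <!-- Compress static files (CSS, JS, images) and dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>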

Use page output caching

By default, IIS treats your ASP.NET page as dynamic. In many applications, however, not all pages actually are. Even if they rely on a database for content, it’s often not necessary to hit the database on each request to the page. Output caching can be enabled on a particular page by adding one line of code to the top of your ASPX file: a directive that tells .NET to keep a copy of the rendered page and serve that copy (rather than re-rendering the original) each time the page is requested. The cached copy includes, for example, any database-generated content from controls on the page itself, or from any embedded user controls.

<%@ OutputCache Duration="10" VaryByParam="none" %>

Page output caching can be an extremely effective way to improve your web site’s performance and responsiveness. It’s a lot more flexible than I’ve explained here, and there are all manner of ways in which you can control the cached version of the page; for instance, you can modify the directive to keep different cached versions of the page based on a URL parameter, as shown below. For more information, see the MSDN documentation.
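As a quick illustration of that last point, the directive below keeps a separate cached copy of the page for each distinct value of an id query-string parameter (the parameter name here is hypothetical), each for 60 seconds:

<%@ OutputCache Duration="60" VaryByParam="id" %>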

Next steps

When you’ve done these things, if your application could still use a boost, it’s time to start profiling. You’ve tried the ‘quick fixes’ – the 10-minute jobs that are more than likely going to make things better – but there’s always a chance the problem isn’t with your application per se. The next step is to figure out what’s causing the problem. First identify the scope: is it limited to one user, a group of users in a particular geographic region, or everybody? If it’s only a small group of people, it might be that their ISP is having routing issues and you need do nothing at all. On the other hand, you might find that everyone is affected by the issue.

In that case, you need to investigate where your bottleneck is occurring. Is it your database? Is it your disks? Or is it – yes, hold on a second – more than likely the things you’ve probably overlooked: your images and other media files.

Optimising your images

Many people, particularly in smaller teams, overlook image optimisation. Most image editing programs will optimise images for you, which can often reduce a file’s size by anywhere between 5% and 20%, sometimes more. With today’s media-rich sites, look at what you can do to ease the burden.

Using a content delivery network

As your web site grows ever more popular, sometimes the best way to get a performance boost is to let somebody else handle delivery of your ‘resource files’ – your static images, scripts, movies, SWF files, and so on. One option is to purchase more bandwidth from your supplier. Another is to enlist the support of a Content Delivery Network (CDN) – kind of like a private, global internet with public endpoints close to your customers.

The benefit of a CDN is that you effectively outsource the delivery of your static files onto another – usually much faster – network. Often this means your server can handle more connections than before, since it no longer has to serve up the big files over and over again.

Going direct to one of the big networks can cost anywhere from about $1,000 per month upwards, but there are companies that provide full CDN integration for a fraction of the price.

Good luck with your web site optimisation, and please feel free to leave comments and tips for others.