Creating amazing experiences for your customers through powerful combinations of devices and software

… and why “what if” are two words you should care about

Today’s consumers increasingly focus on their experience of your products.

I’m not talking about UI (how cool your interface is), or what functionality you provide. Good UX design is a factor, but what I’m talking about is something that arguably goes beyond your app, beyond the device I use it on, and shapes and colours the overall experience I have of your company both now, and into the future.

You can think of it as your digital first impression (an ‘experience fingerprint’) – and it lasts.

To understand what I mean, we need to think more about what technology enables us to achieve in our professional and personal lives today, shifting our focus away from simply supporting the business and, in turn, away from just doing what we’ve always done.

Think more ‘sci-fi’!

Five years ago some of the crazy things we can do today were simply impossible, or way too expensive to be viable. Failing that, the tooling or backend services weren’t there to truly help us realise those visions.

We take IT for granted today. We forget that it was always meant to be an enabler; instead, we often treat it as a servant that we gradually just ask and expect more and more of.

Take Bob, for example. Bob wants to email a report to Sharon in Finance, after he’s reviewed it now that Jane added the sales figures Bill pulled out of the sales order system earlier.

“Machine! Here’s some data, munge it into a report for me and email it to Sharon in Finance”.

We ‘innovate’ today by making that processing activity faster, or by supporting more simultaneous processing. And that’s great and it has a place – but what have we achieved there for Bob? Does he care that we’ve put the collective IT advancement of the past few years (probably at considerable cost) to such good use that he gets his report 2 seconds quicker than before? Maybe not…

What if we offered to read Bob just the changes since he last saw it, because he’s driving home from the office and wanted to set off early to avoid the traffic because Cortana warned him he’d not make it on time to pick-up his kids from school? Boom. We just transformed (a small, but impactful) part of Bob’s life.

So we start to see that great devices play into our experiences – they’re portals that can be our proverbial best friends, or the mother-in-law you have to invite to the party but don’t really want to. Do you want your business to be that mother-in-law, or my best friend? Because I can tell you for sure that when push comes to shove and I’m making my purchasing decisions, I’m far less likely to ditch my best friend than I am someone who isn’t.

And so we introduce the notion of connection – more specifically, emotional connection. Great devices amplify the kinetic experience (and therefore the emotional bond) I form with your products, but only if yours ‘feels’ at home on that device. How your product feels to me suddenly becomes less about what it looks like and how it operates, and more about how it ‘feels’ to fling files around on it and between devices, both inside your ecosystem and outside it. How I can consume and produce on that device matters just as much as what I can consume or produce.

In tomorrow’s world, the end-to-end experience of your product should be a first-class citizen on your product backlog, because to win the battle for market share we shouldn’t be competing on tick-the-box features alone. Ever wondered why the ‘inferior’ (feature-wise) alternatives seem to do better than yours? Sure, it could simply be that they did a better job of marketing. But what is marketing, if not the attempt to influence a purchasing decision by painting a picture of what it could be like to own that product? That’s emotional. And you need me to feel good about your product before I buy it or recommend it to others. Isn’t that customer experience?

You see, consumers make choices with their hearts and much less so with their minds (ask any car dealer), and we’d be foolish to think that our ‘consumer mindset’ isn’t following us to the workplace, where we tend to make larger, more financially impactful decisions than we do at home. We think less about the consequences of making the wrong decision when we’re in the consumer mindset because we’re more focused on the promise of having the thing. Companies that have a great ecosystem and offer amazing experiences to their customers (their consumers) are therefore in a much better position to exploit the consumer mindset. And we can’t do that if we’re stuck in what I call the ‘business software mentality’ of 1995, which really isn’t that uncommon: we just use new tools to knock out similar stuff (competing on similar levels, but through new channels) with more polish and speed. Is that innovation?

What is ‘business software’*, anyway? Is it software that I use at work, or that I use on my device that I take home? Is it still business software then, when I’m lying on my couch at home trying to approve that report?

Check it out, the results aren’t breath-taking…

Stop. Take a look around you. Outside your office maybe, or in your car. There are a million everyday things (or processes) you could improve, inexpensively, with the incredible array of devices and software products at your disposal today. Do you honestly leverage the full power and spectrum of the hardware and software available to you to create immersive experiences that I can connect with, and that you can use to connect me to my information?

We’re right at the forefront of something incredible, and all it takes for someone to revolutionise our connection with IT now is a truly ambitious interconnection of services (software) accessible in innovative ways through our devices.

And this, dear reader, is why I think you should care: you’ve no doubt arrived at a similar conclusion, because the concept of a union between devices and software isn’t really new. IT has always been about enabling people to do things we cannot easily do alone or person-to-person.

You need to think about creating those end-to-end experiences, and take a 100,000ft view of the devices and services landscape to figure out what you could do to blow people’s minds. Innovate over a small business process, or transform an industry: I don’t care! Just innovate! Because when you’re in that mindset, you are at your most creative, and you’re more likely to succeed.

And whether you agree or not with anything I’ve said, consider this:

You need your thinkers to be asking more “what if?”, not “what next”.

What If asks for innovation. What Next just begs for iteration.

Create, amaze, inspire; it’s easier today than it was just a few years ago.

Release Notes: Reading Traffic v1.3

I’ve just submitted an update to my Reading Traffic Info app for Windows Phone 8 and it’ll hopefully be available for download in the Windows Phone marketplace within the next week (subject to certification testing).

This new version, 1.3, focuses on some improvements to overall app usability, as well as the introduction of a new feature, the “favourites spy”, which lets you view just the latest thumbnails from your favourite cameras in a vertical list that updates automatically.

In addition, now that the number of users is growing to a reasonable size (and crucially, the number of repeat users is pretty high), this version incorporates a number of backend features to allow me to collect anonymised telemetry about how the app is being used. This is especially useful when considering that I only get a few hours per week, maximum, during my otherwise ‘free time’, to maintain the app: so knowing where to invest that time is absolutely crucial!

Release Notes

  • Fixed some minor problems with 6 of the cameras in the feed; thanks for the error reports.
  • Updated the application bar on the main menu (added a ‘settings’ icon, and a ‘favourites spy’ icon)
  • Removed the ‘settings’ application bar menu item (because it’s now replaced with an icon button)
  • Created a ‘favourites spy’ feature, which shows you auto-refreshing thumbnail images from all the cameras you’ve marked as ‘favourite’, in a single, vertically-scrolling list
  • Added landscape support to the camera detail page (click to view a specific camera, then rotate into landscape orientation to go full screen on the camera image).
  • Incorporated Flurry analytics to help me figure out which cameras are most popular among users, as well as where people are when they use the app (i.e. their proximity to the cameras they’re looking at). This should help me design new features that make the app easier to use, and more useful. If you want to disable this functionality, you have to disable location services, which you can do either in the app itself (tap settings > location services) or in the phone OS itself.

Debugging Azure Web Roles with Custom HTTP Headers and Self-Registering HttpModules

For the past year-and-a-half, I’ve been helping customers to develop web applications targeting Windows Azure PaaS. Typically, customers ask lots of similar questions and these are usually because they’re faced with similar challenges (there really isn’t such a thing as a bad question). I’ve recently had to answer this very question a few times in succession, so I figured that makes it a good candidate for a blog post! As always, I’d love to get your feedback and if you find this tip useful I’ll try to share some more common scenarios soon.

The scenario I want to focus on here today is nice and quick. It’s a reasonably common one in which you’ve deployed a web application (let’s say, a WebAPI project) to Azure PaaS and have more than a handful of instances serving up requests.

Sometimes it’s tricky to determine which role instance served up your request

When you’re developing and testing, you quite often need to locate the particular node that issued an HTTP response to the client.

When the total number of instances serving your application is low, cycling through one or two instances of a web role (and connecting to them via RDP) isn’t a particular problem. But as you add instances, you don’t typically know which server was responsible for servicing the request, so you have more to check or ‘hunt through’. This can make it harder to jump quickly to the root of the problem for further diagnosis.

Why not add a custom HTTP header?

In a nutshell, one possible way to help debugging calls to an API via HTTP is to have the server inject a custom HTTP header into the response which emits the role instance ID. A switch in cloud configuration (*.cscfg) can be added which allows you to turn this feature on or off, so you’re not always emitting it. The helper itself (as you’ll see below) is very lightweight and you can easily modify it to inject additional headers/detail into the response. Also, emitting the role instance ID (i.e. 0, 1, 2, 3 …) is preferable to emitting the qualified server name, for security reasons, and doesn’t really give too much info away to assist a would-be attacker.
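
For reference, here’s a rough sketch of what that switch could look like in the service definition (*.csdef) and cloud configuration (*.cscfg) files. The setting name matches the one used in the module below, but adapt it to your own naming conventions:

<!-- ServiceDefinition.csdef: declare the setting on the web role -->
<ConfigurationSettings>
  <Setting name="EnableCustomHttpDebugHeaders" />
</ConfigurationSettings>

<!-- ServiceConfiguration.cscfg: flip the switch per environment -->
<ConfigurationSettings>
  <Setting name="EnableCustomHttpDebugHeaders" value="true" />
</ConfigurationSettings>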

How’s it done?

It’s rather simple and quick, really. And, you can borrow the code below to help you out but do remember to check it meets your requirements and test it thoroughly before chucking it into production! We start by creating an HTTP module in the usual way:

public class CustomHeaderModule : IHttpModule
{
    public static void Initialize()
    {
        HttpApplication.RegisterModule(typeof(CustomHeaderModule));
    }

    public void Init(HttpApplication context)
    {
        ConfigureModule(context);
    }

    private void ConfigureModule(HttpApplication context)
    {
        // Check we're running within the RoleEnvironment and that our configuration
        // setting ("EnableCustomHttpDebugHeaders") is set to true. This is our "switch",
        // effectively... Note that GetConfigurationSettingValue returns a string,
        // so we parse it into a bool before testing it.
        bool headersEnabled;
        if (RoleEnvironment.IsAvailable
            && bool.TryParse(RoleEnvironment.GetConfigurationSettingValue("EnableCustomHttpDebugHeaders"), out headersEnabled)
            && headersEnabled)
        {
            context.BeginRequest += ContextOnBeginRequest;
        }
    }

    private void ContextOnBeginRequest(object sender, EventArgs eventArgs)
    {
        var application = (HttpApplication)sender;
        var response = application.Context.Response;

        // Inject custom header(s) for the response: emit the zero-based index of
        // this instance within its role, rather than the qualified server name.
        var roleName = RoleEnvironment.CurrentRoleInstance.Role.Name;
        var index = RoleEnvironment.Roles[roleName].Instances.IndexOf(RoleEnvironment.CurrentRoleInstance);
        response.AppendHeader("X-Diag-Instance", index.ToString());
    }

    public void Dispose()
    {
    }
}

What we’ve got here is essentially a very simple module that injects the custom header, “X-Diag-Instance”, into the server’s response. The value of the header is the index of the current role instance within the role’s Instances collection.
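
Once it’s deployed, a quick client-side check confirms the header is coming back. Here’s a minimal sketch using HttpClient; the URL is just a placeholder for your own cloud service endpoint:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;

class HeaderCheck
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder URL: substitute your own cloud service / WebAPI endpoint.
            var response = client.GetAsync("http://yourservice.cloudapp.net/api/values").Result;

            // If the switch is enabled, the serving instance's index comes back in the header.
            IEnumerable<string> values;
            var instance = response.Headers.TryGetValues("X-Diag-Instance", out values)
                ? values.First()
                : "(header not present)";

            Console.WriteLine("Served by role instance: " + instance);
        }
    }
}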

Deploying the module

Then, we want to add a little magic to have the module self-register at runtime (sure, you can put this in config if you really want to). This is great, because you could put the module into a shared library and then simply have it register itself into the pipeline automatically. Of course, you could actually substitute the config switch for a check to determine whether the solution is in debug or release mode, too (customise it to fit your needs).

To do the self-registration, we rely on a little-known but extremely useful ASP.NET 4 extensibility feature called PreApplicationStartMethod. Decorating the assembly with this attribute tells ASP.NET to run your start-up method before the application starts, which is where we register the module:

[assembly: PreApplicationStartMethod(typeof(PreApplicationStartCode), "Start")]
namespace MyApplication
{
    public class PreApplicationStartCode
    {
        public static void Start()
        {
            Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(CustomHeaderModule));
        }
    }

    public class CustomHeaderModule : IHttpModule
    {
      // ....
    }
}

This approach also works well for any custom headers you want to inject into the response, and a great use case for this would be to emit custom data you want to collect as part of a web performance or load test.

I hope you find this little tip and the code snippet useful, and thanks to @robgarfoot for the pointer to the super useful self-registration extensibility feature!

Autonomous Immersive Quadcopter – Build Log

It’s been a while since my last post back in December (all about Windows Azure, you can read about that here) and a lot has been going on in the interim. I’ve mainly been focused (as always) on work, but in my down time, I’ve been working on somethin’ a little special.

I’ve always been fascinated by things that fly: aeroplanes, helicopters, birds, bees, even hot air balloons and solar wings. Flight gives us an opportunity to view things from a different perspective; it opens a new world to explore.

The trouble is, as a human, I was born without the (native) ability to fly. And that’s always made me a little, well, sad.

A couple of years ago, I started toying with a model aeroplane, and my goal at that point was to turn it into a UAV, like so many of the projects I’d seen online. I ended up dismissing the idea for a couple of reasons: planes are pretty limited (manoeuvrability-wise), and unless you can fly yours incredibly high and incredibly fast, you’re rather limited in the types of cool things you can do. Plus, the open-source autopilots currently available are mainly built using non-Microsoft technologies, and, being a “Microsoft guy”, I wanted to do something about that (let’s say it’s just for selfish purposes: I’m much more productive using Microsoft technologies than I am with something like C on the Arduino platform, and I have very limited time for this project).

So I’ve been working on building a custom quadcopter since January, and I’m very pleased with the results so far. It flies, and in this video you’ll see the first test flight. Toward the end, just before the ‘aerobatics’, I disable the automatic flight stabilisation remotely, which causes even more aerobatics. Anyway, the quadcopter was fun to build and a huge learning curve for me: I really enjoyed the challenge of having to figure out all the flight dynamics, propeller equations and lift calculations, and of course the design and build of the frame, electrical and radio systems.

But it’s not awesome enough yet, not anywhere near it! In fact, check out some of the plans:

  1. I’m currently building a three-axis motorised gimbal that will fit underneath the main airframe. It is going to be connected to an Oculus Rift virtual reality stereoscopic headset, which will relay movements of the wearer’s head to the servos on the gimbal; thus enabling you to ‘sit’ and experience flight from within the virtual cockpit. My colleague, Rob G, is currently building the most awesome piece of software to power the Oculus’ dual stereoscopic displays, while I finish designing and building the mount and video transmission system.
  2. Cloud Powered AutoPilot and Flight Command. That’s right: using Windows Azure, I will provide command and control functionality using Service Bus and sophisticated sensor logging through Azure Mobile Services. Flight data and video will be recorded and shared in real time with authenticated users. Why? There’s nothing cooler than Windows Azure, except maybe something that flies around actually in the clouds, powered by the Cloud!

I don’t know where this project will end up taking me, but so far it’s taken me on some very interesting journeys. I’ve had to learn much more about:

  • Circuit design
  • Fluid dynamics
  • Thrust and vector calculations
  • Power system design
  • Radio-control systems (on various frequencies: 5.8GHz, 2.4GHz, 433MHz and 968MHz) and the joys of trying to tame RF energy using antennae
  • Soldering

… The list goes on!

Current Activity

I’m already in the process of building a sensor network on the quadcopter. It comprises:

  • 5 x Ultrasonic range finders (one mounted on each of the four motor arms, plus one downward-facing)
  • 1 x Barometric pressure sensor (for altitude and airspeed sensing, using pitot tubes)
  • 1 x 66-satellite GPS tracking module
  • 1 x Triple-axis accelerometer
  • 1 x Triple-axis gyroscope

The current plan is to use a Netduino to interface directly with the sensors and transform all of the sensor data into a proprietary messaging standard, which will be emitted via the I2C interface using a message bus architecture. In this way, the Netduino is able to create ‘virtual’ sensors, too (there’s a rough sketch of the idea after the list below), such as:

  • Altitude (based on the downward-facing ultrasonic sensor, switching to the barometric pressure sensor whenever the quad moves out of ultrasonic range)
  • Bearing
  • Velocity (allowing selection of air/ground speed)
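
To make that idea a little more concrete, here’s a rough, hardware-agnostic sketch of how the altitude ‘virtual sensor’ could choose between the two readings. This is plain C# rather than real Netduino driver code, and the ultrasonic range limit is purely illustrative:

using System;

public class AltitudeSensor
{
    // Illustrative limit only: beyond this, trust the barometric altitude instead.
    private const double UltrasonicMaxRangeMetres = 6.0;

    // These delegates stand in for whatever the real sensor drivers will expose.
    private readonly Func<double> _readUltrasonicMetres;
    private readonly Func<double> _readBarometricAltitudeMetres;

    public AltitudeSensor(Func<double> readUltrasonicMetres, Func<double> readBarometricAltitudeMetres)
    {
        _readUltrasonicMetres = readUltrasonicMetres;
        _readBarometricAltitudeMetres = readBarometricAltitudeMetres;
    }

    public double ReadAltitudeMetres()
    {
        var ultrasonic = _readUltrasonicMetres();

        // Prefer the downward-facing ultrasonic sensor while it's in range;
        // fall back to the barometric reading once the quad climbs out of range.
        return (ultrasonic > 0 && ultrasonic <= UltrasonicMaxRangeMetres)
            ? ultrasonic
            : _readBarometricAltitudeMetres();
    }
}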

The Netduino is an amazing device; however, it doesn’t have sufficient capacity or processing power on board to also interface with the radio-control receiver (which receives pitch, roll, yaw, throttle and other inputs from my handheld transmitter). For this activity, I’m going to use a Raspberry Pi (running Mono!). The RPi apparently features the ability to turn GPIO pins into PWM-capable pins (either generating or interpreting), which is exactly what I need. The Netduino will output the sensor data to the RPi, which will be running the ‘autopilot’ system (more on the planned autopilot modes in a later post).

It’ll be the job of the Raspberry Pi to interpret the sensor data, listen to commands received from the handheld transmitter on the ground, and decide what action to take based on the requested input, the sensor data and the currently selected autopilot mode. Sounds simple, but it’s anything but!

If you’re not interested in the technical details, you can follow this project on hoverboard.io. Thanks for reading.

Remote Debugging a Windows 8 RT app on Surface with BT Infinity & HomeHub 3.0

I ran across an interesting problem today, and I thought I’d blog about it as it may save you some time if you encounter a similar issue in the future.

Scenario: you’ve deployed the Visual Studio 2012 Update 1 Remote Debugging Tools to your Surface RT device, and you’re running Visual Studio 2012 Update 1 on your desktop PC (x64, in my case). When you attempt to remote debug on the Surface, Visual Studio 2012 reports that it cannot connect to MSVSMON.exe on the remote device.

Background: for testing purposes, I disabled the firewalls on both the Surface and the desktop PC, and I tried configuring MSVSMON.exe to work with and without authentication on port 4016. Visual Studio 2012 Update 1 could never discover the Surface, either, unless I ran MSVSMON.exe as a service on the Surface. With it running as a service, my developer machine could discover the Surface, but even then it still couldn’t connect.

For reference, the developer machine was connected via ethernet, and the Surface (obviously) via WiFi to the same router.

Ping from the desktop to Surface failed, but it did resolve the IP address. Ping from the Surface back to the desktop always worked, returning an IPv4 address.

After trying many things for several hours, I tried changing my router because I believed what I was seeing was symptomatic of a networking issue. This immediately cured the problem.

It would seem, at least in my case, that my BT HomeHub 3.0 was preventing connections to MSVSMON.exe from being established between the wired LAN and WiFi. I don’t know why – I can only assume there is perhaps a firmware issue on the HomeHub 3.0.

I can’t verify it with another HomeHub as I don’t have access to a replacement router; however, swapping it out for a brand new Netgear DGND3700 did the trick nicely. If you have a HomeHub 3.0 and are on BT Infinity, please let me know if you can reproduce this issue.

Unattended installation of SQL Server 2008 R2 Express on an Azure role

In certain circumstances, you might find yourself with a need to install SQL Server Express on one of your Windows Azure worker roles. Exercise caution here though folks: this is not a supported design pattern (remember, a restart of your role instance will cause all data to be lost).

It was however exactly what I needed for my scenario and I thought I’d share it in case it serves a purpose for you.

There are a couple of approaches you can take, of course, one of which is ‘startup tasks’ specified in the service definition files. However, these offered me limited configuration options because I needed to customise some of the command line arguments being passed to the installer based on values from the Role Environment itself.

The trickiest part was actually figuring out the correct command line parameters for SQL Server 2008 R2 Express, which to be honest wasn’t that fiddly at all. Here are the parameters you’ll need:

/Q /ACTION=Install /FEATURES=SQLEngine,Tools /INSTANCENAME=YourInstanceName
/HIDECONSOLE /NPENABLED=1 /TCPENABLED=1 /SQLSVCACCOUNT=".\YourServiceAccount" /SQLSVCPASSWORD="YourServicePassword" /SQLSYSADMINACCOUNTS=".\ADMINACCOUNT" /IACCEPTSQLSERVERLICENSETERMS /INSTALLSQLDATADIR="FullyQualifiedPathToFolder"

In the parameters above, we’re specifying a silent install with the /Q parameter, installing the SQL Database Engine and Management Tools (basic) with the /FEATURES parameter, setting the instance name, enabling named pipes and TCP, while setting the service accounts and specifying the SQL data directory.

The next part, then, is to actually build this as a command line and execute it in the cloud environment. How do we do this? Simples: we use System.Diagnostics to create a new Process() object and pass in a ProcessStartInfo object as a parameter:

var taskInfo = new ProcessStartInfo
{
    FileName = file,
    Arguments = args,
    Verb = "runas",
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    CreateNoWindow = false
};

// Create the process (we start it later, once the output handlers are attached)
_process = new Process() { StartInfo = taskInfo, EnableRaisingEvents = true };

For good measure, we’ll also redirect the standard and error output streams from the process so that we can capture those out to our log files:

// Log output
DataReceivedEventHandler outputHandler = (s, e) => Trace.TraceInformation(e.Data);
DataReceivedEventHandler errorHandler = (s, e) => Trace.TraceInformation(e.Data);

// Attach handlers
_process.ErrorDataReceived += errorHandler;
_process.OutputDataReceived += outputHandler;

Then, we’ll execute our task and ask the role to wait for it to complete before continuing with startup:

//Start process
_process.Start();
_process.BeginErrorReadLine();
_process.BeginOutputReadLine();

// Wait for the task to complete before continuing...
_process.WaitForExit();

Stick all of that into a method that you can re-use, and don’t forget to add parameters called file and args (strings) that contain the path to the SQL Server Express installation executable and the command line arguments you want to pass in.
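
In case it helps, here’s roughly what that reusable method might look like once the snippets above are stitched together (the class and method names are mine, so rename to taste):

using System.Diagnostics;

public static class InstallerRunner
{
    // Runs an external installer elevated, pipes its output into our trace logs,
    // and blocks until it completes. 'file' is the path to the installer executable;
    // 'args' is the full command line to pass to it.
    public static int RunAndWait(string file, string args)
    {
        var taskInfo = new ProcessStartInfo
        {
            FileName = file,
            Arguments = args,
            Verb = "runas",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = false
        };

        using (var process = new Process { StartInfo = taskInfo, EnableRaisingEvents = true })
        {
            // Log output
            process.OutputDataReceived += (s, e) => Trace.TraceInformation(e.Data);
            process.ErrorDataReceived += (s, e) => Trace.TraceInformation(e.Data);

            // Start the process and begin pumping the output streams
            process.Start();
            process.BeginErrorReadLine();
            process.BeginOutputReadLine();

            // Wait for the installer to complete before the role continues starting up
            process.WaitForExit();
            return process.ExitCode;
        }
    }
}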

How to build your command line argument

If you’re wondering why I didn’t hardcode my command line options, it’s because up in Azure, the standard builds for web and worker roles don’t come preloaded with any administrative accounts – you have to specify those during design time. I actually ‘borrow’ the username of the Remote Desktop user (which is provisioned as an administrator for you when you ask to enable Remote Desktop).
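
As a hedged aside: if memory serves, enabling Remote Desktop adds its settings to your *.cscfg via the RemoteAccess plugin, so something along these lines should pull the username back out at runtime (do check your own *.cscfg for the exact setting name before relying on it):

using Microsoft.WindowsAzure.ServiceRuntime;

// Assumed setting name: verify it against the keys the RemoteAccess plugin added to your .cscfg.
string username = RoleEnvironment.GetConfigurationSettingValue(
    "Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername");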

I actually end up with this quick-and-dirty snippet:

string file = Path.Combine(UnpackPath, "SQLEXPRWT_x64_ENU.exe");
string args = string.Format(
    "/Q /ACTION=Install /FEATURES=SQLEngine,Tools /INSTANCENAME={2} /HIDECONSOLE /NPENABLED=1 /TCPENABLED=1 /SQLSVCACCOUNT=\".\\{0}\" /SQLSVCPASSWORD=\"{1}\" /SQLSYSADMINACCOUNTS=\".\\{0}\" /IACCEPTSQLSERVERLICENSETERMS /INSTALLSQLDATADIR=\"{3}\"",
    username, password, instanceName, dataDir);

So, ultimately, you’ll then want to wrap all of this up into your role’s OnStart() method. Include a check to see whether SQL Express is already installed, too.
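
One rough way to do that check (assuming you’re installing a named instance) is to look for the instance’s Windows service before kicking off the installer. This is a sketch rather than a bulletproof test, and it needs a reference to System.ServiceProcess:

using System;
using System.Linq;
using System.ServiceProcess;

// For a named instance, SQL Server registers a Windows service called "MSSQL$<InstanceName>".
bool alreadyInstalled = ServiceController.GetServices()
    .Any(s => s.ServiceName.Equals("MSSQL$" + instanceName, StringComparison.OrdinalIgnoreCase));

if (!alreadyInstalled)
{
    // Kick off the unattended installation as shown above.
}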

And, if you’re stuck trying to debug what’s going on with your otherwise silent installation, SQL Server Setup Logs are your friend. You’ll find them by connecting to your role via Remote Desktop and opening the following path:

%programfiles%\Microsoft SQL Server\100\Setup Bootstrap\Log\

Enjoy!

DDD Southwest 3 – Review of my presentation

Slides everywhere, but not a coherent flow in sight! 🙂

Way back on 11th June 2011, I was lucky enough to be invited to present my session – “Getting Started in .NET” – at the DDD Southwest 3 conference. I remember thinking, “gosh, I’d really love to speak at one of these events, but I missed the deadline for submitting sessions”. So, I pinged an email over to Guy Smith-Ferrier and asked him if they needed any help, thinking maybe they’d want room monitors or other volunteers to ferry folks around. As it turned out, Guy actually still had two slots to be filled on the ‘Getting Started’ track. And this is how my presentation was born…

Nervous? Me?

It was to be the first training session I’d ever given on a topic such as this, so I was both very excited and a little nervous (geeks can be so nit-picky!).

Fortunately though, the bunch of folks that attended my session (some 30-odd I think) were all very friendly and eager to listen – I couldn’t have asked for a better group!

In the top 3? No way!

In fact, I think they were so nice they voted me into the Top 3 “Speakers by Knowledge of Subject” and “Speakers by Presentation Skills” – accolades that I will soon be transferring onto a tattoo on my forehead, such is the level of my humility (and astonishment!) at appearing here with these two other fantastic speakers. Maybe it had something to do with the fact I was lobbing ‘Telerik Ninjas’ – stress toys – at anyone who asked a question (as a reward, folks – not as punishment)…

By Knowledge of Subject

  1. Steve Sanderson and Getting Started in ASP.NET MVC – 8.88 out of 10
  2. Richard Campbell and Why Web Performance Matters – 8.85 out of 10
  3. Richard Parker (that’s me!) and Getting Started in The .NET Framework – 8.56 out of 10

By Presentation Skills

  1. Richard Campbell – 8.73 out of 10
  2. Richard Parker – 8.33 out of 10
  3. Steve Sanderson – 8.30 out of 10

Looking for the slides?

If you attended and are looking for a copy of the presentation, you can download it below. Well, it’s actually a PDF – handier if you want to stick it on your Kindle, for example.

Getting started with .NET (PDF, 2.4MB)

Find out when your next DDD event is

If you’ve never been to a DDD event, then stop whatever it is you’re doing right now (well, after you’ve finished reading this post, of course) and go figure out when the next one is. They’re all over the place now – even Australia! It won’t cost you a penny to go, as the events are all supported by sponsorship, so you’ve really got no excuse not to go. The speakers are excellent (yes, even at the events I don’t speak at) and you’ll get a chance to mingle with some very friendly and amazing folks.

I’ve attended these events in the past as a delegate and have always had an absolutely brilliant time. And this time around, I was fortunate enough to attend as a speaker; an experience I enjoyed thoroughly and would love to repeat (if they, and you, dear reader, will have me again)…

The .NET community, put simply, rocks. You guys are awesome!