I’ve moved!

Starting right now, this blog has moved to http://blog.rikp.co.uk. All the original posts and comments have moved too, so nothing is lost. And I’ll maintain this old one for a while.

Most old links to blog.richard.parker.name/ should redirect to the corresponding article at blog.rikp.co.uk/ without issue, but if you find one that doesn’t work, please let me know.

If you subscribed to this blog, your subscription will no longer be valid – sorry. I couldn’t see any way to migrate subscribers. There’s a subscription facility you can use on the new site, though.

Cheers everyone!

Creating amazing experiences for your customers through powerful combinations of devices and software

… and why “what if” are two words you should care about

Today’s consumers increasingly focus on their experience of your products.

I’m not talking about UI (how cool your interface is), or what functionality you provide. Good UX design is a factor, but what I’m talking about arguably goes beyond your app, beyond the device I use it on, and shapes and colours the overall experience I have of your company both now and into the future.

You can think of it as your digital first impression (an ‘experience fingerprint’) – and it lasts.

To understand what I mean, we need to think more about what technology enables us to achieve in our professional and personal lives today, shifting our focus away from merely supporting the business and, in turn, away from simply doing what we’ve always done.

Think more ‘sci-fi’!

Five years ago some of the crazy things we can do today were simply impossible, or way too expensive to be viable. Failing that, the tooling or backend services weren’t there to truly help us realise those visions.

We take IT for granted today, and we’ve forgotten that it was always meant to be an enabler; instead we often treat it as a servant that we gradually ask and expect more and more of.

Take Bob, for example. Bob wants to email a report to Sharon in Finance, once he’s reviewed it, now that Jane has added the sales figures Bill pulled out of the sales order system earlier.

“Machine! Here’s some data, munge it into a report for me and email it to Sharon in Finance”.

We ‘innovate’ today by making that processing activity faster, or by supporting more simultaneous processing. And that’s great and it has a place – but what have we achieved there for Bob? Does he care that we’ve put the collective IT advancement of the past few years (probably at considerable cost) to such good use that he gets his report 2 seconds quicker than before? Maybe not…

What if we offered to read Bob just the changes since he last saw it, because he’s driving home from the office, having set off early to avoid the traffic after Cortana warned him he wouldn’t make it in time to pick up his kids from school? Boom. We just transformed a small, but impactful, part of Bob’s life.

So we start to see that great devices play into our experiences – they’re portals that can be our proverbial best friends, or the mother-in-law you have to invite to the party but don’t really want to. Do you want your business to be that mother-in-law, or my best friend? Because I can tell you for sure that, when push comes to shove and I’m making my purchasing decisions, I’m less likely to ditch my best friend.

And so we arrive at the notion of connection – more specifically, emotional connection. Great devices amplify the kinetic experience (and therefore the emotional bond) I form with your products, but only if yours ‘feels’ at home on that device. How your product feels to me suddenly becomes less about what it looks like and how it operates, and more about how it ‘feels’ to fling files around on it, and between devices both inside your ecosystem and out. How I can consume and produce on that device matters as much as what I can consume or produce.

In tomorrow’s world, the end-to-end experience of your product should be a first-class citizen on your product backlog, because to win the battle for market share we shouldn’t be competing on tick-the-box features alone. Ever wondered why the ‘inferior’ (feature-wise) alternatives sometimes seem to do better than yours? Sure, it could simply be that they did a better job of marketing. But what is marketing, if not the attempt to influence a purchasing decision by painting a picture of what it could be like to own that product? That’s emotional. And you need me to feel good about your product before I buy it or recommend it to others. Isn’t that customer experience?

You see, consumers make choices with their hearts and much less so with their minds (ask any car dealer), and we’d be foolish to think that our ‘consumer mindset’ isn’t following us to the workplace, where we tend to make larger, more financially impactful decisions than we do at home. We think less about the consequences of making the wrong decision when we’re in the consumer mindset, because we’re more focused on the promise of having the thing. Companies with a great ecosystem that offer amazing experiences to their customers (their consumers) are therefore in a much better position to exploit the consumer mindset. And we can’t do that if we’re stuck in what I call the ‘business software mentality’ of 1995, which really isn’t that uncommon: we just use new tools to knock out similar stuff (competing on similar levels, but through new channels) with more polish and speed. Is that innovation?

What is ‘business software’*, anyway? Is it software that I use at work, or software on the device I take home with me? Is it still business software when I’m lying on my couch at home, trying to approve that report?

*Check it out – the results aren’t breath-taking…

Stop. Take a look around you – outside your office, maybe, or in your car. There are a million everyday things (or processes) you could improve, inexpensively, with the incredible array of devices and software products at your disposal today. Do you honestly leverage the full power and spectrum of both the hardware and software available to you, to create immersive experiences that I can connect with, and that you can use to connect me to my information?

We’re right at the forefront of something incredible, and all it takes for someone to revolutionise our connection with IT now is a truly ambitious interconnection of services (software), accessible in innovative ways through our devices.

And this, dear reader, is why I think you should care: you’ve no doubt arrived at a similar conclusion; the concept of a union between devices and software isn’t really new. IT has always been about enabling people to do things they cannot easily do alone, or person-to-person.

You need to think about creating those end-to-end experiences, and take a 100,000ft view of the devices-and-services landscape to figure out what you could do to blow people’s minds. Innovate over a small business process, or transform an industry: I don’t care! Just innovate! Because when you’re in that mindset, you are at your most creative, and you’re more likely to succeed.

And whether you agree or not with anything I’ve said, consider this:

You need your thinkers to be asking more “what if?”, not “what next?”.

“What if” asks for innovation. “What next” just begs for iteration.

Create, amaze, inspire; it’s easier today than it was just a few years ago.

Release Notes: Reading Traffic v1.3

I’ve just submitted an update to my Reading Traffic Info app for Windows Phone 8 and it’ll hopefully be available for download in the Windows Phone marketplace within the next week (subject to certification testing).

This new version, 1.3, focuses on some improvements to overall app usability, as well as the introduction of a new feature, the “favourites spy”, which lets you view just the latest thumbnails from your favourite cameras in a vertical list that updates automatically.

In addition, now that the number of users is growing to a reasonable size (and crucially, the number of repeat users is pretty high), this version incorporates a number of backend features to allow me to collect anonymised telemetry about how the app is being used. This is especially useful considering that I get a few hours per week, at most, of otherwise ‘free’ time to maintain the app: so knowing where to invest that time is absolutely crucial!

Release Notes

  • Fixed some minor problems with 6 of the cameras in the feed; thanks for the error reports.
  • Updated the application bar on the main menu (added a ‘settings’ icon, and a ‘favourites spy’ icon)
  • Removed the ‘settings’ application bar menu item (because it’s now replaced with an icon button)
  • Created a ‘favourites spy’ feature, which shows you auto-refreshing thumbnail images from all the cameras you’ve marked as ‘favourite’, in a single, vertically-scrolling list
  • Added landscape support to the camera detail page (click to view a specific camera, then rotate into landscape orientation to go full screen on the camera image).
  • Incorporated Flurry analytics to help me figure out which cameras are most popular among users, as well as where people are when they use the app (i.e. their proximity to the cameras they’re looking at). This should help me design new features that make the app easier to use, and more useful. If you want to disable this functionality, you can disable location services, either in the app itself (tap settings > location services) or in the phone OS itself.
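
For the technically curious, the telemetry calls themselves are very simple. Here’s a rough sketch of the sort of thing involved – I’m paraphrasing, so treat the FlurryWP8SDK namespaces, types and the event/parameter names below as illustrative rather than exact:

using System.Collections.Generic;
using FlurryWP8SDK;        // Flurry's WP8 SDK (namespace/type names assumed)
using FlurryWP8SDK.Models;

public static class Telemetry
{
    // Called once at app launch (e.g. from Application_Launching).
    public static void Start()
    {
        Api.StartSession("YOUR_FLURRY_API_KEY"); // placeholder key
    }

    // Called whenever a camera is viewed, so I can see which are popular.
    public static void CameraViewed(string cameraId)
    {
        Api.LogEvent("CameraViewed",
            new List<Parameter> { new Parameter("CameraId", cameraId) });
    }
}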

Reading Traffic Info V1.2 for Windows Phone 8 now out!

My regular readers will know that a few weeks ago, I released an app called “Reading Traffic Info” to the Windows Phone 8 store (http://bit.ly/readingtrafficapp). In a nutshell, the app helps those who live in or around, or commute in or through, Reading Borough, by connecting them to a near real-time feed of all of the borough’s traffic cameras.

I want to thank everyone who has downloaded the app so far – I’m surprised to have hit 466 downloads in just a few weeks for such a niche app. Proof that there must be demand for apps providing access to this information!

What’s new in version 1.2?

Firstly, I’ve added a ton of new cameras. Here’s the full list:

A329 (M) TVP
A33 Bennet Road
A33 Little Sea
A33 Relief Road
A33 Rose Kiln Lane
A4 Langley Hill
Castle Hill
Gosbrook Road
Grovelands Road
Henley Road (Lower)
Kings Road
M4 Jn 11
M4 Jn 11 Westbound
Queens Road
Whitley Street
Winnersh Crossroads

In addition, the app will now automatically check with my server every day to see if new cameras are available and include them in the camera list without the need to update the entire application. This works for corrections/alterations to existing camera metadata, too (for example, to correct latitude/longitude pairs or orientation/naming data).
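
Conceptually, this is nothing fancier than a once-a-day fetch of a small metadata file. A minimal sketch of the idea (the endpoint, storage key and payload handling below are invented for illustration – the real app differs):

using System;
using System.IO.IsolatedStorage;
using System.Net;

public class CameraListUpdater
{
    // Hypothetical manifest endpoint – the real URL and payload differ.
    private static readonly Uri ManifestUri =
        new Uri("http://example.rikp.co.uk/cameras.json");

    public void CheckForUpdates()
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        // Only poll the server once per day.
        DateTime lastCheck;
        if (settings.TryGetValue("LastCameraCheck", out lastCheck)
            && (DateTime.UtcNow - lastCheck) < TimeSpan.FromDays(1))
        {
            return;
        }

        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error != null) return; // no harm done; try again tomorrow

            // Parse the camera metadata (e.g. with Json.NET) and merge any
            // additions or corrections into the locally cached camera list.

            settings["LastCameraCheck"] = DateTime.UtcNow;
            settings.Save();
        };
        client.DownloadStringAsync(ManifestUri);
    }
}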

Version 1.2 also has an updated user interface which sports a bit more colour, and the spacing on the camera listing has been increased further still, to make selecting cameras a little easier.

I also added icons to denote favourite cameras, as well as a new button to report camera inaccuracies directly within the app (it’ll open up your email client with a pre-populated body detailing the camera you’re looking at, with a space for you to tell me what the problem is).
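
Incidentally, the report button is little more than a wrapper around Windows Phone’s EmailComposeTask. A sketch, with a placeholder address and message body:

using Microsoft.Phone.Tasks;

// Invoked by the 'report a problem' button; cameraName identifies the
// camera the user is currently viewing.
private void ReportProblem(string cameraName)
{
    var email = new EmailComposeTask
    {
        To = "feedback@example.com", // placeholder address
        Subject = "Camera problem: " + cameraName,
        Body = "Camera: " + cameraName + "\n\n" +
               "What's wrong? (Describe the problem here.)"
    };
    email.Show();
}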

Remote camera connection quality tolerance

Version 1.2 includes additional tolerance for the quality and reliability of the camera feeds operated by the council. If a camera is offline, or the council’s camera server goes offline (as it did about a week ago), the app will now indicate that there has been a problem connecting to the camera. Additionally, I fixed a bug where the app would keep trying to refresh an image from a camera every 5 seconds, regardless of whether it had successfully connected. To help mitigate image-loading delays on slower mobile network connections, the refresh interval has been increased from 5 seconds to 10 seconds. And while the latest image is loading, the previous one (if available) remains on-screen.

I also fixed a problem with the camera images ‘flickering’ between refreshes.

Finally, additional tolerance was added to the app to detect the state of your phone’s internet connection and warn you when it is unavailable.
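
To give a flavour of how these tolerances fit together, here’s a rough sketch of the refresh loop – not the app’s actual code (the class and member names are invented), but the same approach: a 10-second timer, a connectivity check, and a ‘double-buffered’ image swap so the previous frame stays on-screen until its replacement has fully loaded:

using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;
using System.Windows.Threading;
using Microsoft.Phone.Net.NetworkInformation;

public class CameraRefresher
{
    private readonly Image _target;   // the on-screen Image control
    private readonly Uri _cameraUri;  // the camera's image URL
    private readonly DispatcherTimer _timer = new DispatcherTimer();

    public CameraRefresher(Image target, Uri cameraUri)
    {
        _target = target;
        _cameraUri = cameraUri;
        _timer.Interval = TimeSpan.FromSeconds(10); // up from 5s in v1.1
        _timer.Tick += (s, e) => Refresh();
    }

    public void Start()
    {
        _timer.Start();
    }

    private void Refresh()
    {
        // Skip the request (and warn the user) if there's no connection.
        if (!NetworkInterface.GetIsNetworkAvailable())
        {
            // ... surface a 'no connection' message here ...
            return;
        }

        // Download into an off-screen BitmapImage and only swap it onto the
        // Image control once it has fully loaded; keeping the previous frame
        // visible in the meantime is what stops the 'flicker'. (In practice
        // you'd also append a cache-busting query parameter to the URI.)
        var next = new BitmapImage();
        next.ImageOpened += (s, e) => _target.Source = next;
        next.ImageFailed += (s, e) => { /* show 'camera unavailable' state */ };
        next.UriSource = _cameraUri;
    }
}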

Continued thanks to everyone

Again, thanks to all those who have supported me by downloading and using the app, submitting feedback or helping me with the design itself.

What’s next?

There are lots of features on the horizon and I’m very much planning to continue developing the app in my spare time. Already on the cards is support for providing up-to-date car park status within the app (so you can decide which car park to head to in order to avoid the jams!) and also road works status.

If you’d like to make suggestions, see what’s planned or vote on new features, head over to my UserVoice community at http://rikp.uservoice.com.

Thanks!

Debugging Azure Web Roles with Custom HTTP Headers and Self-Registering HttpModules

For the past year-and-a-half, I’ve been helping customers to develop web applications targeting Windows Azure PaaS. Typically, customers ask lots of similar questions, usually because they’re faced with similar challenges (there really isn’t such a thing as a bad question). I’ve recently had to answer one such question a few times in succession, so I figured that makes it a good candidate for a blog post! As always, I’d love to get your feedback, and if you find this tip useful I’ll try to share some more common scenarios soon.

The scenario I want to focus on today is nice and quick. It’s a reasonably common one: you’ve deployed a web application (let’s say, a WebAPI project) to Azure PaaS and have more than a handful of instances serving up requests.

Sometimes it’s tricky to determine which role instance served up your request

When you’re developing and testing, you quite often need to locate the particular node which issued an HTTP response to the client.

When the total number of instances serving your application is low, cycling through one or two instances of a web role (and connecting to them via RDP) isn’t a particular problem. But as you add instances, you typically don’t know which server was responsible for servicing a given request, so you have more servers to check or ‘hunt through’. This can make it harder to jump quickly to the root of the problem for further diagnosis.

Why not add a custom HTTP header?

In a nutshell, one possible way to help when debugging calls to an API over HTTP is to have the server inject a custom HTTP header into the response containing the role instance ID. A switch in the cloud configuration (*.cscfg) can be added to turn this feature on or off, so you’re not always emitting it. The helper itself (as you’ll see below) is very lightweight, and you can easily modify it to inject additional headers/detail into the response. Also, emitting the role instance index (i.e. 0, 1, 2, 3 …) rather than the qualified server name is preferable for security reasons: it doesn’t give away much information to assist a would-be attacker.
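
For reference, the switch itself is just an ordinary configuration setting – declared once in the service definition, with its value set per environment in the service configuration. Roughly (these elements sit inside your role’s element in each file; adjust names to suit your project):

<!-- ServiceDefinition.csdef: declare the setting for the web role -->
<ConfigurationSettings>
  <Setting name="EnableCustomHttpDebugHeaders" />
</ConfigurationSettings>

<!-- ServiceConfiguration.cscfg: flip the switch per environment -->
<ConfigurationSettings>
  <Setting name="EnableCustomHttpDebugHeaders" value="true" />
</ConfigurationSettings>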

How’s it done?

It’s rather simple and quick, really. And you can borrow the code below to help you out, but do remember to check it meets your requirements and test it thoroughly before chucking it into production! We start by creating an HTTP module in the usual way:

using System;
using System.Web;
using Microsoft.WindowsAzure.ServiceRuntime;

public class CustomHeaderModule : IHttpModule
{
    public static void Initialize()
    {
        HttpApplication.RegisterModule(typeof(CustomHeaderModule));
    }

    public void Init(HttpApplication context)
    {
        ConfigureModule(context);
    }

    private void ConfigureModule(HttpApplication context)
    {
        // Check we're running within the RoleEnvironment and that our
        // configuration setting ("EnableCustomHttpDebugHeaders") is "true".
        // This is our "switch", effectively. Note that the setting value
        // comes back as a string (and GetConfigurationSettingValue throws
        // if the setting isn't defined), so parse it rather than using it
        // directly.
        bool enabled;
        if (RoleEnvironment.IsAvailable
            && bool.TryParse(RoleEnvironment.GetConfigurationSettingValue("EnableCustomHttpDebugHeaders"), out enabled)
            && enabled)
        {
            context.BeginRequest += ContextOnBeginRequest;
        }
    }

    private void ContextOnBeginRequest(object sender, EventArgs eventArgs)
    {
        var application = (HttpApplication)sender;
        var response = application.Context.Response;

        // Inject custom header(s) into the response: the value is the index
        // of the current instance within its role's Instances collection.
        var roleName = RoleEnvironment.CurrentRoleInstance.Role.Name;
        var index = RoleEnvironment.Roles[roleName].Instances.IndexOf(RoleEnvironment.CurrentRoleInstance);
        response.AppendHeader("X-Diag-Instance", index.ToString());
    }

    public void Dispose()
    {
    }
}

What we’ve got here is essentially a very simple module which injects the custom header, “X-Diag-Instance”, into the server’s response. The header’s value is the index of the current instance within the role’s Instances collection.
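
To sanity-check a deployment, fire a handful of requests at it and inspect the header in each response – for example, from a scratch console app (the URL below is a placeholder; substitute your cloud service’s endpoint):

using System;
using System.Collections.Generic;
using System.Net.Http;

class Program
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Each request may land on a different instance behind the load
            // balancer, so repeated calls should show differing values.
            for (var i = 0; i < 5; i++)
            {
                // .Result is fine for a quick scratch check like this.
                var response = client.GetAsync("http://myapp.cloudapp.net/api/values").Result;

                IEnumerable<string> values;
                if (response.Headers.TryGetValues("X-Diag-Instance", out values))
                {
                    Console.WriteLine(string.Join(",", values));
                }
            }
        }
    }
}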

Deploying the module

Then we add a little magic to have the module self-register at runtime (sure, you can register it in config if you really want to). This is great because you can put the module into a shared library and have it register itself into the pipeline automatically. Of course, you could also replace the config switch with a check for whether the solution is built in debug or release mode (customise it to fit your needs).

To do the self-registration, we rely on a little-known but extremely useful ASP.NET 4 extensibility feature called PreApplicationStartMethod. Decorating the assembly with this attribute allows the .NET Framework to discover your module and auto-register it:

using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(MyApplication.PreApplicationStartCode), "Start")]
namespace MyApplication
{
    public class PreApplicationStartCode
    {
        public static void Start()
        {
            // DynamicModuleUtility lives in the Microsoft.Web.Infrastructure
            // assembly (available as a NuGet package, and referenced by the
            // ASP.NET MVC/WebAPI project templates).
            DynamicModuleUtility.RegisterModule(typeof(CustomHeaderModule));
        }
    }

    public class CustomHeaderModule : IHttpModule
    {
      // ....
    }
}

This approach also works well for any custom headers you want to inject into the response, and a great use case for this would be to emit custom data you want to collect as part of a web performance or load test.

I hope you find this little tip and the code snippet useful, and thanks to @robgarfoot for the pointer to the super useful self-registration extensibility feature!

Designing my first Netduino/Arduino Shield for the Quadcopter

One of the many design challenges for the quadcopter project I’m currently working on was figuring out how to enable the Netduino to control up to nine PWM outputs (for servo control). Fortunately, the smart folks at Pololu have created a handy micro-sized board to do just the job. One drawback is that servos tend to pull a lot of current, and I’d already fried my first Pololu control board due to a bit of over-zealous wiring.

The trouble is, standard servo plugs combine power, ground and signal all into one neat package. This is great if you are using your servos in a ‘normal’ way. However, mine are being driven under quite a lot of demand, and through a Pololu servo control board rather than an RC receiver. In addition, as the power supply on the quadcopter is pretty punchy (60A!) and a little susceptible to temporary voltage drops and current peaks, I wanted to provide a smooth, regulated supply to the Pololu itself, as well as to the servos connected to it.

Problem

  • Futaba-style 3-pin servo connectors provide power, ground and signal in a single connector.
  • The servo control board has 3-pin connectors, intended for providing power, ground and signal to each servo.

  • I want to split the three-pin output from the servo control board so that a dedicated, high-current power supply can power each servo, while the signal is still driven by the control board.

Solution

I’ve come up with what I believe to be a neat solution to this problem, as well as to one I haven’t yet blogged about: the need for a power distribution board and a sensor input board for each of the quadcopter’s many on-board sensors. My solution is a custom-built Netduino shield, which combines two regulated power supplies and a ‘splitter array’ for separating power and signal from the servo cable, as well as space to mount the Pololu controller.

In this way, the Pololu control board will be connected to the three-pin array in the centre of my shield. The signal pin for each corresponding servo will then be connected to just the appropriate signal pin on the Pololu control board. Thus, a separate power supply for the servos is possible without any cable ‘hackery’.

My first draft design is below:

(Image: first-draft PCB layout for the quadcopter autopilot board)

In this first revision, I have simply exposed the Netduino’s pins through the pin-compatible holes at the top and bottom of the board. It also features separate pins for D0+1 (COM1) and D2+3 (COM2), and solder pads for the high-current power supply. 4600uF electrolytic capacitors reduce noise and further stabilise the supply beyond the regulators, since demand from the servos is likely to change very rapidly. In addition, a separate 5V power supply is provided next to the splitter array to power the Pololu board.

Jobs left to do!

This board still needs quite a bit of work. Firstly, the silkscreen layer for the bottom row of pins hasn’t yet been added, and I may also break out the pins into screw terminals to make it easier to hook up my various sensors to the board. If I go down this route, I’ll need to change the board layout considerably, as I’ll need to preserve a 3cm x 3cm area for the Pololu control board. I’ll also need to add screw holes (for my board as well as the Pololu, so that it can be mounted securely) and then, finally, I’ll need to make one and check that it works as planned!

I plan to release the part to the Fritzing gallery once I’m happy with it, so that others may use it.

Autonomous Immersive Quadcopter – Build Log

It’s been a while since my last post back in December (all about Windows Azure – you can read about that here), and a lot has been going on in the interim. I’ve mainly been focused (as always) on work, but in my down time I’ve been working on somethin’ a little special.

I’ve always been fascinated by things that fly: aeroplanes, helicopters, birds, bees, even hot air balloons and solar wings. Flight gives us an opportunity to view things from a different perspective; it opens a new world to explore.

The trouble is, as a human, I was born without the (native) ability to fly. And that’s always made me a little, well, sad.

A couple of years ago, I started toying with a model aeroplane, and my goal at that point was to turn it into a UAV, like so many of the projects I’d seen online. I ended up dismissing the idea for a couple of reasons: planes are pretty limited (manoeuvrability-wise), and unless you can fly yours incredibly high and incredibly fast, you’re a little limited in the types of cool things you can do. Plus, the open-source autopilots currently available are mainly built using non-Microsoft technologies, and, being a “Microsoft guy”, I wanted to do something about that (let’s say it’s just for selfish purposes: I’m much more productive using Microsoft technologies than I am with something like C on the Arduino platform, and I have very limited time for this project).

So I’ve been working on building a custom quadcopter since January, and I’m very pleased with the results so far. It flies, and in this video you’ll see the first test flight. Toward the end, just before the ‘aerobatics’, I remotely disable the automatic flight stabilisation – which causes even more aerobatics. Anyway, the quadcopter was fun to build and a huge learning curve for me, and I really enjoyed the challenge of figuring out all the flight dynamics, propeller equations and lift calculations and, of course, designing and building the frame, electrical and radio systems.

But it’s not awesome enough yet, not anywhere near it! In fact, check out some of the plans:

  1. I’m currently building a three-axis motorised gimbal that will fit underneath the main airframe. It is going to be connected to an Oculus Rift virtual reality stereoscopic headset, which will relay movements of the wearer’s head to the servos on the gimbal; thus enabling you to ‘sit’ and experience flight from within the virtual cockpit. My colleague, Rob G, is currently building the most awesome piece of software to power the Oculus’ dual stereoscopic displays, while I finish designing and building the mount and video transmission system.
  2. Cloud Powered AutoPilot and Flight Command. That’s right: using Windows Azure, I will provide command-and-control functionality using Service Bus, and sophisticated sensor logging through Azure Mobile Services. Flight data and video will be recorded and shared in real time with authenticated users. Why? There’s nothing cooler than Windows Azure – except maybe something that actually flies around in the clouds, powered by the Cloud!

I don’t know where this project will end up taking me, but so far it’s taken me on some very interesting journeys. I’ve had to learn much more about:

  • Circuit design
  • Fluid dynamics
  • Thrust and vector calculations
  • Power system design
  • Radio-control systems (on various frequencies: 5.8GHz, 2.4GHz, 433MHz and 968MHz) and the joys of trying to tame RF energy using antennae
  • Soldering

… The list goes on!

Current Activity

I’m already in the process of building a sensor network on the quadcopter. It comprises:

  • 5 x Ultrasonic range finders (one mounted on each of the four motor arms, plus one downward-facing)
  • 1 x Barometric pressure sensor (for altitude and airspeed sensing, using pitot tubes)
  • 1 x 66-satellite GPS tracking module
  • 1 x Triple-axis accelerometer
  • 1 x Triple-axis gyroscope

The current plan is to use a Netduino to interface directly with the sensors and transform all of the sensor data into a proprietary messaging format, which will be emitted via the I2C interface using a message-bus architecture. In this way, the Netduino is able to create ‘virtual’ sensors, too (there’s a rough sketch of the idea just after this list), such as:

  • Altitude (based on the downward-facing ultrasonic sensor, falling back to the barometric pressure sensor whenever the quad moves out of ultrasonic range)
  • Bearing
  • Velocity (allowing selection of air/ground speed)
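
To make that concrete, here’s a rough sketch of the sort of thing I have in mind on the Netduino (.NET Micro Framework) side. The frame layout, the I2C address, and the assumption that the Netduino acts as bus master are all illustrative – none of this is final:

using Microsoft.SPOT.Hardware;

public class SensorBus
{
    // Hypothetical 7-bit address for the receiving end of the link, with
    // the Netduino acting as I2C master at 100kHz.
    private readonly I2CDevice _bus =
        new I2CDevice(new I2CDevice.Configuration(0x40, 100));

    // One possible frame: [msgType][sensorId][valueLo][valueHi][checksum]
    public void Publish(byte sensorId, short value)
    {
        var frame = new byte[5];
        frame[0] = 0x01; // msgType: sensor reading
        frame[1] = sensorId;
        frame[2] = (byte)(value & 0xFF);
        frame[3] = (byte)((value >> 8) & 0xFF);
        frame[4] = (byte)(frame[0] ^ frame[1] ^ frame[2] ^ frame[3]); // XOR checksum

        _bus.Execute(
            new I2CDevice.I2CTransaction[] { I2CDevice.CreateWriteTransaction(frame) },
            100); // timeout in ms
    }

    // A 'virtual' altitude sensor: prefer the downward ultrasonic reading
    // while it's in range, and fall back to barometric altitude beyond that.
    public short FuseAltitude(short ultrasonicCm, bool ultrasonicInRange, short barometricCm)
    {
        return ultrasonicInRange ? ultrasonicCm : barometricCm;
    }
}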

The Netduino is an amazing device; however, it doesn’t have sufficient capacity or processing power on-board to also interface with the radio-control receiver (which receives pitch, roll, yaw, throttle and other inputs from my handheld transmitter). For this activity, I’m going to use a Raspberry Pi (running Mono!). The RPi apparently features the ability to turn GPIO pins into PWM-capable pins (either generating or interpreting), which is exactly what I need. The Netduino will output the sensor data to the RPi, which will be running the ‘autopilot’ system (more on the planned autopilot modes in a later post).
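
If the ‘interpreting PWM’ part sounds mysterious: a hobby RC receiver emits a pulse roughly every 20ms per channel, and the pulse’s width encodes the stick position (about 1000µs at one extreme, 1500µs centred, 2000µs at the other). Once you can measure that width, turning it into a normalised control input is simple arithmetic – the constants below are the usual convention rather than measured values from my transmitter:

public static class RcInput
{
    private const double MinPulseUs = 1000.0;
    private const double MaxPulseUs = 2000.0;

    // Maps a measured pulse width (in microseconds) to a -1..+1 control value.
    public static double ToControlValue(double pulseWidthUs)
    {
        // Clamp to the expected range, then scale around the centre point.
        if (pulseWidthUs < MinPulseUs) pulseWidthUs = MinPulseUs;
        if (pulseWidthUs > MaxPulseUs) pulseWidthUs = MaxPulseUs;

        var centre = (MinPulseUs + MaxPulseUs) / 2.0;    // 1500µs
        var halfRange = (MaxPulseUs - MinPulseUs) / 2.0; // 500µs
        return (pulseWidthUs - centre) / halfRange;
    }
}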

It’ll be the job of the Raspberry Pi to interpret the sensor data, listen to the commands received from the handheld transmitter on the ground, and decide what action to take based on the requested input, the sensor data and the currently selected autopilot mode. Sounds simple, but it’s anything but!

If you’re not interested in the technical details, you can follow this project on hoverboard.io. Thanks for reading.