Creating amazing experiences for your customers through powerful combinations of devices and software


… and why “what if” are two words you should care about

Today’s consumers increasingly focus on their experience of your products.

I’m not talking about UI (how cool your interface is), or what functionality you provide. Good UX design is a factor, but what I’m talking about is something that arguably goes beyond your app, beyond the device I use it on, and shapes and colours the overall experience I have of your company, both now and into the future.

You can think of it as your digital first impression (an ‘experience fingerprint’) – and it lasts.

To understand what I mean, we need to think more about what technology enables us to achieve in our professional and personal lives today, shifting our focus away from simply supporting the business and, in turn, away from just doing what we’ve always done.

Think more ‘sci-fi’!

Five years ago, some of the crazy things we can do today were simply impossible, or way too expensive to be viable. And where they were possible, the tooling or backend services weren’t there to truly help us realise those visions.

We take IT for granted today, and we’ve forgotten that it was always meant to be an enabler; instead, we often treat it as a servant that we gradually ask and expect more and more of.

Take Bob, for example. Bob wants to email a report to Sharon in Finance, after he’s reviewed it now that Jane has added the sales figures Bill pulled out of the sales order system earlier.

“Machine! Here’s some data, munge it into a report for me and email it to Sharon in Finance”.

We ‘innovate’ today by making that processing activity faster, or by supporting more simultaneous processing. And that’s great and it has a place – but what have we achieved there for Bob? Does he care that we’ve put the collective IT advancement of the past few years (probably at considerable cost) to such good use that he gets his report 2 seconds quicker than before? Maybe not…

What if we offered to read Bob just the changes since he last saw it, because he’s driving home from the office, having set off early to avoid the traffic after Cortana warned him he wouldn’t otherwise make it in time to pick up his kids from school? Boom. We just transformed a (small, but impactful) part of Bob’s life.

So we start to see that great devices play into our experiences – they’re portals that can be our proverbial best friends, or the mother-in-law you have to invite to the party but don’t really want to. Do you want your business to be that mother-in-law, or my best friend? Because I can tell you for sure that, when push comes to shove and I’m making my purchasing decisions, I’m far less likely to ditch my best friend than someone who isn’t.

And therefore we introduce the notion of connection – more specifically, emotional connection. Great devices amplify the kinetic experience (and therefore the emotional bond) I form with your products, but only if yours ‘feels’ at home on that device. How your product feels to me suddenly becomes less about what it looks like and how it operates, and more about how it ‘feels’ to fling files around on it and between devices, both in your ecosystem and out. How I can consume and produce on that device matters just as much as what I can consume, or produce.

In tomorrow’s world, the end-to-end experience of your product should be a first-class citizen on your product backlog, because to win the battle for market share we shouldn’t be competing on tick-the-box features alone. Ever wondered why the ‘inferior’ (feature-wise) alternatives sometimes seem to do better than yours? Sure, it could simply be that they did a better job of marketing. But what is marketing, if not the attempt to influence a purchasing decision by painting a picture of what it could be like to own that product? That’s emotional. And you need me to feel good about your product before I buy it or recommend it to others. Isn’t that customer experience?

You see, consumers make choices with their hearts and much less so with their minds (ask any car dealer), and we’d be foolish to think that our ‘consumer mindset’ isn’t following us to the workplace, where we tend to make larger, more financially impactful decisions than we do at home. We think less about the consequences of making the wrong decision when we’re in the consumer mindset, because we’re more focused on the promise of having the thing. Companies that have a great ecosystem and offer amazing experiences to their customers (their consumers) are therefore in a much better position to exploit that mindset. And we can’t do that if we’re stuck in what I call the ‘business software mentality’ of 1995, which really isn’t that uncommon: we just use new tools to knock out similar stuff (competing on similar levels, but through new channels) with more polish and speed. Is that innovation?

What is ‘business software’, anyway? Is it software that I use at work, or that I use on my device that I take home? Is it still business software then, when I’m lying on my couch at home trying to approve that report?

Check it out, the results aren’t breathtaking…

Stop. Take a look around you. Outside your office maybe, or in your car. There are a million everyday things (or processes) you could improve, inexpensively, with the incredible array of devices and software products at your disposal today. Do you honestly leverage the full power and spectrum of both the hardware and software available to you to create immersive experiences that I can connect with, and that you can use to connect me to my information?

We’re right on the forefront of something incredible and all it takes for someone to revolutionise our connection with IT now is a truly ambitious interconnection of services (software) accessible in innovative ways through our devices.

And this, dear reader, is why I think you should care: because you’ve no doubt arrived at a similar conclusion – the concept of a union between devices and software isn’t really new. IT has always been about enabling people to do things we can’t easily do alone, or person-to-person.

You need to think about creating those end-to-end experiences, and take a 100,000ft view of the devices and services landscape to figure out what you could do to blow people’s minds. Innovate over a small business process, or transform an industry: I don’t care! Just innovate! Because when you’re in that mindset, you are at your most creative, and you’re more likely to succeed.

And whether you agree or not with anything I’ve said, consider this:

You need your thinkers to be asking more “what if?”, not “what next”.

What If asks for innovation. What Next just begs for iteration.

Create, amaze, inspire; it’s easier today than it was just a few years ago.


Autonomous Immersive Quadcopter – Build Log


It’s been a while since my last post back in December (all about Windows Azure – you can read about that here), and a lot has been going on in the interim. I’ve mainly been focused (as always) on work, but in my downtime I’ve been working on somethin’ a little special.

I’ve always been fascinated by things that fly: aeroplanes, helicopters, birds, bees, even hot air balloons and solar wings. Flight gives us an opportunity to view things from a different perspective; it opens a new world to explore.

The trouble is, as a human, I was born without the (native) ability to fly. And that’s always made me a little, well, sad.

A couple of years ago, I started toying with a model aeroplane, and my goal at that point was to turn it into a UAV, like so many of the projects I’d seen online. I ended up dismissing the idea for a couple of reasons: planes are pretty limited (manoeuvrability-wise), and unless you can fly yours incredibly high and incredibly fast, you’re limited in the types of cool things you can do. Plus, the open-source autopilots currently available are mainly built on non-Microsoft technologies and, being a “Microsoft guy”, I wanted to do something about that (let’s say it’s just for selfish purposes: I’m much more productive using Microsoft technologies than I am with something like C on the Arduino platform, and I have very limited time for this project).

So I’ve been working on building a custom quadcopter since January, and I’m very pleased with the results so far. It flies, and in this video you’ll see the first test flight. Toward the end, just before the ‘aerobatics’, I disable the automatic flight stabilisation remotely – which is what causes them. Anyway, the quadcopter was fun to build and a huge learning curve for me: I really enjoyed the challenge of having to figure out all the flight dynamics, propeller equations and lift calculations and, of course, the design and build of the frame, electrical and radio systems.

But it’s not awesome enough yet, not anywhere near it! In fact, check out some of the plans:

  1. I’m currently building a three-axis motorised gimbal that will fit underneath the main airframe. It is going to be connected to an Oculus Rift virtual reality stereoscopic headset, which will relay movements of the wearer’s head to the servos on the gimbal; thus enabling you to ‘sit’ and experience flight from within the virtual cockpit. My colleague, Rob G, is currently building the most awesome piece of software to power the Oculus’ dual stereoscopic displays, while I finish designing and building the mount and video transmission system.
  2. Cloud-Powered AutoPilot and Flight Command. That’s right: using Windows Azure, I will provide command and control functionality using Service Bus, plus sophisticated sensor logging through Azure Mobile Services. Flight data and video will be recorded and shared in real time with authenticated users. Why? There’s nothing cooler than Windows Azure, except maybe something that flies around actually in the clouds, powered by the Cloud! (See the sketch after this list for the sort of thing I have in mind.)
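
As a taster, here’s a minimal sketch of the sort of ground-station code I have in mind for the command channel: it just drops a flight command onto a Service Bus queue. The queue name and command format are pure assumptions at this stage, not the final design:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

public static class GroundStation
{
    // Posts a single flight command (e.g. "RETURN_TO_HOME") onto a
    // Service Bus queue for the quadcopter to pick up.
    public static void SendCommand(string connectionString, string command)
    {
        // "flight-commands" is a hypothetical queue name
        QueueClient client = QueueClient.CreateFromConnectionString(
            connectionString, "flight-commands");

        using (var message = new BrokeredMessage(command))
        {
            message.Properties["IssuedUtc"] = DateTime.UtcNow;
            client.Send(message);
        }
    }
}
```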

I don’t know where this project will end up taking me, but so far it’s taken me on some very interesting journeys. I’ve had to learn much more about:

  • Circuit design
  • Fluid dynamics
  • Thrust and vector calculations
  • Power system design
  • Radio-control systems (on various frequencies: 5.8GHz, 2.4GHz, 433MHz and 968MHz) and the joys of trying to tame RF energy using antennae
  • Soldering

… The list goes on!

Current Activity

I’m already in the process of building a sensor network on the quadcopter. It comprises:

  • 5 x Ultrasonic range finders (one mounted on each of the four motor arms, plus one downward-facing)
  • 1 x Barometric pressure sensor (for altitude and airspeed sensing, using pitot tubes)
  • 1 x 66-satellite GPS tracking module
  • 1 x Triple-axis accelerometer
  • 1 x Triple-axis gyroscope

The current plan is to use a Netduino to interface directly with the sensors and transform all of the sensor data into a proprietary messaging standard, which will be emitted via the I2C interface using a message bus architecture. In this way, the Netduino is able to create ‘virtual’ sensors too, such as the following (a sketch of the altitude-selection logic follows the list):

  • Altitude (based on the downward-facing ultrasonic sensor, falling back to the barometric pressure sensor whenever the quad moves out of ultrasonic range)
  • Bearing
  • Velocity (allowing selection of air/ground speed)
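
To illustrate the idea, here’s a minimal sketch of that first ‘virtual’ altitude sensor. The IRangeFinder and IBarometer abstractions (and the 6m ultrasonic range limit) are my illustrative assumptions, not the final message design:

```csharp
// Hypothetical abstractions over the physical sensors
public interface IRangeFinder { double RangeMetres(); }
public interface IBarometer { double AltitudeMetres(); }

public class VirtualAltitudeSensor
{
    // Assumed useful range of the downward-facing ultrasonic sensor
    private const double UltrasonicMaxRangeMetres = 6.0;

    private readonly IRangeFinder _downwardUltrasonic;
    private readonly IBarometer _barometer;

    public VirtualAltitudeSensor(IRangeFinder downwardUltrasonic, IBarometer barometer)
    {
        _downwardUltrasonic = downwardUltrasonic;
        _barometer = barometer;
    }

    // Prefer the (more precise) ultrasonic reading while in range;
    // fall back to barometric altitude once the quad climbs out of range.
    public double ReadAltitudeMetres()
    {
        double range = _downwardUltrasonic.RangeMetres();
        return (range > 0 && range < UltrasonicMaxRangeMetres)
            ? range
            : _barometer.AltitudeMetres();
    }
}
```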

The Netduino is an amazing device; however, it doesn’t have sufficient capacity or processing power on board to also interface with the radio-control receiver (which receives pitch, roll, yaw, throttle and other inputs from my handheld transmitter). For that job, I’m going to use a Raspberry Pi (running Mono!). The RPi apparently features the ability to turn GPIO pins into PWM-capable pins (either generating or interpreting), which is exactly what I need. The Netduino will output the sensor data to the RPi, which will run the ‘autopilot’ system (more on the planned autopilot modes in a later post).

It’ll be the job of the Raspberry Pi to interpret the sensor data, listen for commands from the handheld transmitter on the ground, and decide what action to take based on the requested input, the sensor data and the currently selected autopilot mode. Sounds simple, but it’s anything but!
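
To give a flavour of what ‘decide what action to take’ means in practice, here’s a very rough sketch (plain C#, so it runs under Mono on the RPi) of a single axis of a ‘stabilise’ mode. The proportional-only controller and all the constants are illustrative assumptions rather than the final design:

```csharp
public class StabiliseLoop
{
    private const double RollGain = 0.8;      // assumed proportional gain
    private const double MaxRollRate = 180.0; // deg/s demanded at full stick

    // pilotRoll: -1..1 roll demand decoded from the receiver's PWM signal.
    // measuredRollRate: deg/s from the gyroscope message the Netduino emits.
    // Returns a correction term to mix into the left/right motor outputs.
    public double ComputeRollCorrection(double pilotRoll, double measuredRollRate)
    {
        double demandedRate = pilotRoll * MaxRollRate;
        double error = demandedRate - measuredRollRate;
        return RollGain * error; // P-only: a real autopilot would add I and D terms
    }
}
```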

If you’re not interested in the technical details, you can follow this project over on hoverboard.io instead. Thanks for reading.

My Home Tech: Summer 2011 Roundup


A long, long time ago, in a blog post far, far away, I documented some of my home tech in a piece that described how it all connected together. The article actually focused on my home network equipment, but I figured it would be useful to document the rest of the kit so that I can look back on it in a few years and marvel at how outdated it all was.

In a move which is hopefully slightly more interesting than my outrageously poor opening line, I figured it would be fun to expand on that and showcase how I actually use some of this technology in my day job and my home life. I enjoy reading about the interesting things others have done, so perhaps others out there will appreciate this!

Highlights

Before I can kick off any mini-series-style articles, let’s set the foundations:

  • Gigabit Ethernet cabled throughout
  • Netgear GS605 Gigabit Switch
  • Connectix Home Network Cabinet
  • Connectix Home Network Patch Panel (8 RJ45 + 4 telecoms)
  • HP ProLiant N36L MicroServer (4TB storage)

Infrastructure

Our apartment is cabled with Gigabit Ethernet, with at least two outlets in each bedroom, plus four in my home office and four in the living room behind the TV. This all terminates back at a Connectix Home Network Patch Panel, which provides 8 termination points for the sockets, plus 4 telecoms connections which can be used to route the BT line to any of the RJ45 outlets scattered around. The network sockets all terminate back at a Netgear GS605 Unmanaged Gigabit Ethernet Switch, which sits inside my Connectix Home Network Cabinet.

I really like having the flexibility to swap out any active hardware or install new stuff without any hassle, plus having the freedom to route connectivity about as I need it is pretty neat.

Network Cabinet Setup

Energy Metering

Within the network cabinet sit a Current Cost Envi and a Current Cost Bridge, although I have to say I am not overly impressed with the bridge’s capabilities, since it connects only to the my.currentcost.com web site which, although powered by Pachube, locks all your data away.

In the coming months, I will hopefully be announcing my revised ‘Arduinometer’ project, which is now based on the new Netduino Plus. This open-source platform will be capable of reading simultaneously from several different energy meters, including gas, electricity and water.
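
For meters that provide a pulse output (one flash or contact closure per unit consumed), the reading side could be as simple as interrupt-driven counting on the Netduino Plus. Here’s a sketch under that assumption – the pins, scaling factors and the pulse-output premise are mine, not the final Arduinometer design:

```csharp
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

public class PulseMeter
{
    private int _pulseCount;
    private readonly double _unitsPerPulse;
    private readonly InterruptPort _input;

    // e.g. unitsPerPulse = 0.001 for a 1000-impulses-per-kWh electricity meter
    public PulseMeter(Cpu.Pin pin, double unitsPerPulse)
    {
        _unitsPerPulse = unitsPerPulse;

        // Count falling edges; the glitch filter debounces the contact
        _input = new InterruptPort(pin, true,
            Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeLow);
        _input.OnInterrupt += (data1, data2, time) => _pulseCount++;
    }

    public double TotalUnits
    {
        get { return _pulseCount * _unitsPerPulse; }
    }
}

// Usage: one PulseMeter per physical meter, e.g.
//   var electricity = new PulseMeter(Pins.GPIO_PIN_D0, 0.001);
//   var gas         = new PulseMeter(Pins.GPIO_PIN_D1, 0.01);
```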

Connectivity

Right now, I’m using BT as my broadband provider. Until their Infinity service is available in my area (September 2011 is the ETA – yikes!), I’m on their Total Broadband package. I get relatively decent connectivity, but I’d obviously like to improve on that, and I have high hopes for the BT Infinity service when it becomes available.

What matters most to me is reliability: as I work from home, I need a robust connection that’s there when I need it.

Servers & Storage

At the heart of my home network is a new HP ProLiant MicroServer N36L with 4TB of storage capacity, running the new Windows Home Server 2011 operating system. All my client PCs (including my main development station) are backed up daily – a total of 4 PCs and 1 laptop, quietly and transparently.

As an added bonus, after a clean build of my development station I took a full backup, which should let me do a complete rebuild at any point simply by restoring that backup (actually an image of my machine) to get back to my ‘ideal state’.

Development rig

My main development machine is an Intel Core i7 870 @ 2.93GHz with 8GB of Corsair DDR3 RAM, running Windows 7 Ultimate. I’ve fitted out the very lovely Antec P90 case with a Corsair sealed liquid-cooling unit to keep the processor nice and cool. Having a nice big radiator and a 12cm fan means the RPMs can be kept low, which in turn reduces noise from the case.

Storage-wise, I’m running a 120GB OCZ Colossus SSD as my primary drive, with a 600GB Western Digital Caviar Green for storing most of my data. I also have a secondary 60GB Corsair SSD which holds the code I’m working on in Visual Studio 2010 (which, by the way, is installed on the primary SSD).

Two 24″ LCD monitors are driven by the primary graphics card, an NVIDIA GeForce GTX 275, though I have aspirations to upgrade those this year to a couple of Iiyama 27″ panels. I also have a cheaper, standard graphics card driving a third 21″ CTX LCD monitor, on which Twitter sits (yes, it is an addiction – live with it).

Samsung Navibot SR8855 First Look


Yesterday, after spending a few hours researching, I made a bit of an impulse decision and purchased a Samsung Navibot SR8855 (a robotic vacuum cleaner). I figured that since I don’t like hoovering (and neither does my girlfriend), this could be the ‘ultimate’ gadget purchase that both of us can enjoy. Needless to say, at £338, my girlfriend wasn’t particularly impressed when I first broke the news.

Navibot's self-charging dock

“But we need one, darling!”

Naturally, the first thing you have to do after any impulse purchase is convince other people that you had a good reason for doing so, and that what you’ve bought will actually be suitable. At least, someone with my reputation for buying just about anything that has a plug on it has to anyway!

Hoovering isn’t a particularly fun chore, and with two cats and hardwood flooring everywhere, it’s a task we have to repeat fairly often. The Samsung Navibot SR8855 comes with an on-board scheduling feature, which means you can program it to wake up daily and go to work. Potentially, that could save us 20 minutes per day – over two hours a week, and around five full days a year!

So, does it work?

Having just spent the morning assembling IKEA flat-pack furniture, we’d covered the floors with sawdust and other general packaging mess – plus a few days’ worth of cat fluff. It was time to put the Navibot to work.

After charging for 90 minutes (the unit is supplied with an almost empty charge), you simply press the “auto” button and the Navibot undocks itself and starts mapping your room. I chose the Navibot, rather than its main competitor, the ‘Roomba’ from iRobot, because it appears to follow a much more logical pattern when cleaning your rooms; the Roomba seems to do most of its navigation by bumping into things.

The Navibot has an upward-facing camera which continuously takes pictures of your ceiling to determine the layout of your room, in combination with distance-ranging and kinetic sensors mounted around the front 180 degrees of the unit.

Within just a minute or two, it appeared to have figured out exactly where it was, and it started moving linearly backward and forward around the room, neatly picking up cat fluff and sawdust over every section it covered.

Carpets

As I mentioned above, we don’t have carpet, and I imagine the Navibot wouldn’t perform very well on it at all. Being very quiet, it clearly doesn’t have a lot of power, and it relies mainly on the two counter-rotating triple-brush ‘arms’ at the front of the unit to guide surface dust and fluff into the main brushes at the rear, rather than on vast amounts of suction.

In our front room, however, there is a fairly thick rug. The Navibot navigates onto it just fine, but the pile is a bit too thick for the unit to turn comfortably, and it makes all manner of struggling sounds as it tries desperately to back away to firmer ground.

My verdict would be to avoid this if you have a carpeted house as it’ll only really pick up loose surface fluff and small debris. If you have hard flooring though, this thing is awesome!

Results

It cleaned our entire apartment, minus the bathrooms, in approximately 20 minutes. The dust container was pretty full of all the usual things, indicating that it had done quite a good job. In ‘auto’ mode, the Navibot is apparently ‘afraid’ of walls: it leaves around a 5cm margin along each wall where it doesn’t clean very effectively, relying instead on the exterior brushes to reach corner dust. It does, however, have an ‘edge’ mode, which you can run a few times per week if you want.

I suspect that with a few more runs it will get slightly more efficient at navigating, particularly since I couldn’t help tinkering occasionally with its ‘manual mode’, which lets you take control using the remote.

Once in manual mode, however, the unit stops remembering the route back to the charging dock, so you have to manually steer it back (if you ask it to return to dock on its own, it will fail miserably). On full auto though, it navigates back just fine.

Overall, I am pleased to say that this is one household gadget I’ve purchased that is actually pretty good at its job. Plus, my better half is happy with the purchase too – so it’s win/win. The Navibot is now programmed to wake up every morning at 6am and go to work, so by the end of the week we should still have a cat-fluff- and dust-free floor throughout the apartment. And neither of us will have lifted a finger.

~

No doubt I’ll be tweeting updates on the #navibot hashtag, so follow me for the latest!

Hacking an external antenna on to a Thomson SpeedTouch TG585 v7 router


In this post, I will show you how to dismantle your Thomson SpeedTouch TG585 v7 router to allow connection of an external WiFi antenna. This is a very simple process requiring the removal of only four screws.

What you will need

  • 1 x Mini PCI U.FL to RP-SMA Pigtail Cable (~£1.50 each, I bought mine off eBay from this seller).
  • Set of needle-nose pliers
  • Set of mini Phillips screwdrivers
  • … The external antenna you want to use! (I purchased this one for £13.99 from Maplin as it has a magnetic base, useful in my particular installation).

Step by step

1. Disconnect the power to your router, and unplug the power adapter from the mains supply.

2. Flip your router over, and remove the four plastic feet/pads from each of the four corners of the router.

3. Underneath the feet are four screws (one underneath each of the pads). Unscrew each one.

4. Lift the router off the table and gently give the base a tap – the grey top section should fall off. That’s the ‘lid’. If it doesn’t come off easily, gently prise it off with a flat head screwdriver; the operative word being gently. It’s not glued or wedged, it just might be a little tight.

5. Turn the unit over so you can see the main board.

Taking special care not to touch any of the solder points or components (especially any capacitors), ground yourself and then remove the antenna wire connected to the main board (the connector is circled in the photo below):

[Photo: the antenna connector on the main board, circled]

You might find that a pair of needle-nose pliers helps – but the clip is not particularly tight or difficult to remove, so just be wary of applying excessive force.

6. Now, unclip the existing ‘non-replaceable’ antenna by pinching the inside of the clip with the pliers while pulling the antenna. This will release it, and it should pull out through the exterior of the case, as shown below:

[Photo: the original antenna withdrawn through the case]

7. Now, thread your new antenna pigtail cable through the case (where the old antenna used to go) so that the tiny clip is on the inside and the antenna connector (the larger connector) is on the outside. Connect the small end to the main board of the router in the same place you disconnected the old one from.

8. Pop the lid back in to place, turn the unit over and put the screws back in, followed by the sticky feet.

9. Now connect up your external antenna, and you’re all set.

Open-source FTP-to-Azure blob storage: multiple users, one blob storage account


A little while ago, I came across an excellent article by Maarten Balliauw in which he described a project he was working on to support FTP directly into Azure’s blob storage. I discovered it while researching a similar concept of my own. At the time of writing, though, Maarten wasn’t sharing his source code, and even if he does decide to at some point, his project appears to focus on permitting access to the entire blob storage account. That wasn’t quite what I was looking for, but it was very similar…

My goal: FTP to Azure blobs, many users: one blob storage account with ‘home directories’

I wanted a solution that would enable multiple users to access the same storage account, but each with their own unique portion of it – thereby mimicking an actual FTP server. A bit like giving authenticated users their own ‘home folder’ on your Azure Blob Storage account.

This would ultimately give your Azure application the ability to accept incoming FTP connections and store files directly into blob storage via any popular FTP client – mimicking a file and folder structure and permitting access only to regions of the blob storage account you determine. There are many potential uses for this kind of implementation, especially when you consider that blob storage can feed into the Microsoft CDN…

Features

  • Deploys within a worker role
  • Support for most common FTP commands
  • Custom authentication API: because you determine the authentication and authorisation APIs, you control who has access to what, quickly and easily
  • Written in C#

How it works

In my implementation, I wanted the ability to literally ‘fake’ a proper FTP server to any popular FTP client, with the server component running on Windows Azure. I wanted an external web service to do my authentication (you could host yours on Windows Azure, too) and then to allow each user access only to their own tiny portion of my Azure Blob Storage account.

It turns out, Azure’s containers did exactly what I wanted, more or less. All I had to do was to come up with a way of authenticating clients via FTP and returning which container they have access to (the easy bit), and write an FTP to Azure ‘bridge’ (adapting and extending a project by Mohammed Habeeb to run in Azure as a worker role).

Here’s how my first implementation works:

A quick note on authentication

When an FTP client authenticates, I grab the username and password sent by the client, pass that into my web service for authentication, and if successful, I return a container name specific to that customer. In this way, the remote user can only work with blobs within that container. In essence, it is their own ‘home directory’ on my master Azure Blob Storage account.

The FTP server code will deny authentication for any user who does not have a container name associated with them, so just return null to the login procedure if you’re not going to give them access (I’m assuming you don’t want to return a different error code for ‘bad password’ vs. ‘bad username’ – which is a good thing).

Your authentication API could easily be adapted to permit access to the same container by multiple users, too.
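
To make that concrete, here’s a minimal sketch of the authentication hook, assuming a hypothetical IAuthenticationProvider interface (the real implementation calls out to a web service at this point). Returning null denies the login, exactly as described above:

```csharp
public interface IAuthenticationProvider
{
    // Returns the blob container this user may access, or null to deny login.
    string Authenticate(string username, string password);
}

public class ExampleAuthenticationProvider : IAuthenticationProvider
{
    public string Authenticate(string username, string password)
    {
        // Call your own authentication web service here. For the sketch,
        // a single hard-coded user mapped to their 'home directory' container:
        if (username == "alice" && password == "secret")
        {
            return "alice-home";
        }
        return null; // unknown user or bad password: deny, with no hint as to which
    }
}
```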

Simulating a regular file system from blob storage

Azure Blob Storage doesn’t work like a traditional disk-based file system, in that it doesn’t actually have a hierarchical directory structure – so the FTP service simulates one, allowing FTP clients to work in the traditional way. Mohammed’s original C# FTP server code was superb: he wrote it so that the file system could be replaced, back in 2007 – to my knowledge, before Azure existed – yet it was so painless to adapt that one could be forgiven for thinking he meant for it to be used this way. (Mohammed, thanks!)
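
To give a feel for how that simulation maps onto blobs: blob names can contain ‘/’, and the (v1.x) StorageClient library will carve those names up into virtual folders via CloudBlobDirectory. Here’s a sketch of listing one simulated FTP directory – the container name and path are placeholder inputs:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobDirectoryLister
{
    // Lists the contents of one simulated directory, e.g. "reports/2011/"
    public static void ListFtpDirectory(string connectionString,
        string containerName, string ftpPath)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference(containerName);

        // '/' separators in blob names become a virtual directory tree
        CloudBlobDirectory directory = container.GetDirectoryReference(ftpPath);

        foreach (IListBlobItem item in directory.ListBlobs())
        {
            // Nested CloudBlobDirectory items show up as subdirectories;
            // everything else is presented to the FTP client as a file.
            Console.WriteLine((item is CloudBlobDirectory ? "<DIR> " : "      ")
                + item.Uri);
        }
    }
}
```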

Now that I have my FTP server, modified and adapted to work with Azure, there are many ways in which this project could be expanded…

Over to you (and the rest of the open source community)

It’s my first open source project and I actively encourage you to help me improve it. When I started out, most of this was ‘proof of concept’ for a similar idea I was working on. As I look back over the past few weekends of work, there are many things I’d change but I figured there’s enough here to make a start.

If you decide to use it “as is” (something I don’t advise at this stage), do remember that it’s not going to be perfect and you’ll need to do a little leg work – it’s a work in progress and it wasn’t written (at least initially) to be an open-source project. Drop me a note to let me know how you’re using it though, it’s always fun to see where these things end up once you’ve released them into the wild.

Where to get it

Head on over to the FTP to Azure Blob Storage Bridge project on CodePlex.

It’s free for you to use however you want. It carries all the usual caveats and warnings as other ‘free open-source’ software: use it at your own risk.

If you do use it and it works well for you, drop me an email and it’ll make me happy. 🙂

OCZ Colossus Performance in AHCI vs IDE mode on the Intel DP55KG Desktop Board


For those of you who don’t know, I’m currently building a new high-specification PC to handle the day-to-day rigours of software development. Like a lot of developers, I’ve decided to embrace SSD technology for the massive performance increase to be had over traditional ‘platter’ drives.

I’ll provide more details about the PC’s specification soon, but I wanted to share some information about the various ways in which SSDs can be configured and how those affect performance. There are plenty of articles out there covering SSD design from a technical point of view, but I haven’t found any comparing AHCI with IDE, which would help me figure out which might give my Colossus the best chance of performing well. So, I decided to run my own basic test.

Motherboard  

For this test, I am using the Intel DP55KG motherboard running BIOS version KGIBX10J.86A (17th Feb 2010). The DP55KG has 8 on-board SATA ports: six powered by the board’s PCH, and two by an integrated Marvell 88SE6145 chip (check out this post for an excellent review of the DP55KG).

The SSD  

The drive under scrutiny here is the OCZ Colossus 120GB. On paper, according to OCZ, the drive supports read and write speeds of up to 260MB/s, with sustained writes around 140MB/s. This was the best drive of this capacity that I could find in my price bracket.

The 3.5" 'Colossus' SSD by OCZ

The test  

I wanted to know, “Which performs better, AHCI or IDE?”, followed by, “Will IDE mode on the Marvell chip outperform IDE mode on the Intel PCH controller?”. To find out, I used PassMark PerformanceTest 7.0, which provides a simple, standardised means of testing the SSD.

  • To test the Intel PCH:
    • Configure the controller in IDE  mode
    • Install Windows 7 Ultimate
    • Install the latest Intel Chipset drivers
    • Restart
    • Install Passmark PerformanceTest 7.0
    • Run tests
    • Repeat these steps, but next time configure the controller in AHCI mode.
  • To test the Marvell Chip:
    • Connect the OCZ Colossus to port 0 on the board
    • Verify controller is in IDE mode (AHCI is not supported by the Marvell chip)
    • Install Windows 7 Ultimate
    • Install the latest Intel Chipset drivers
    • Restart
    • Install Passmark PerformanceTest 7.0
    • Run tests

Results  

Here are the raw test results. The three disk-test figures are quoted in MB/s; Disk Mark and PassMark Rating are composite scores:

Test Name                 | Intel PCH (AHCI) | Marvell (IDE) | Intel PCH (IDE)
Disk – Sequential Read    | 109.9            | 94.3          | 111.3
Disk – Sequential Write   | 131.0            | 46.1          | 131.8
Disk – Random Seek + RW   | 41.8             | 37.2          | 41.4
Disk Mark                 | 1022.7           | 642.1         | 1029.0
PassMark Rating           | 2372.6           | 1489.7        | 2387.3

Here’s the corresponding bar chart:

Intel DP55KG PCH IDE vs AHCI, vs Marvell IDE

 

 
Please note: in the results above, “This computer” refers to Intel PCH in IDE mode.
 
Conclusion
  
Based on the results above, for three identical Windows 7 installations on the same test PC, the conclusion is fairly obvious: the Marvell controller won’t get you very good disk performance; use ports 1 to 6 (the Intel PCH ports) on the DP55KG instead. IDE mode seems to outperform AHCI, but only marginally.
 
Having read various articles on the topic, I know it’s unusual for anyone to actually achieve the manufacturers’ quoted speeds. In this instance, though, I’m curious to learn whether there’s anything else I can do to get closer to them: in my tests – for sequential reads, for instance – I’m around 150MB/s short of OCZ’s quoted speed (albeit still way faster than most 7,200RPM SATA ‘platter’ drives).
 
I’ll continue reading to see what else I can do but for now, at least, I’m happy with the added performance benefit and satisfied that I’ve chosen the best mode and chipset for my system.