Autonomous Immersive Quadcopter – Build Log


It’s been a while since my last post back in December (all about Windows Azure, you can read about that here) and a lot has been going on in the interim. I’ve mainly been focused (as always) on work, but in my down time, I’ve been working on somethin’ a little special.

I’ve always been fascinated by things that fly: aeroplanes, helicopters, birds, bees, even hot air balloons and solar wings. Flight gives us an opportunity to view things from a different perspective; it opens a new world to explore.

The trouble is, as a human, I was born without the (native) ability to fly. And that’s always made me a little, well, sad.

A couple of years ago, I started toying with a model aeroplane, and my goal at that point was to turn it into a UAV, like so many of the projects I’d seen online. I ended up dismissing the idea for a couple of reasons: planes are pretty limited manoeuvrability-wise, and unless you can fly yours incredibly high and incredibly fast, you’re restricted in the types of cool things you can do. Plus, the open-source autopilots currently available are almost all built using non-Microsoft technologies, and, being a “Microsoft guy”, I wanted to do something about that (let’s say it’s just for selfish purposes: I’m much more productive using Microsoft technologies than I am with something like C on the Arduino platform, and I have very limited time for this project).

So I’ve been working on building a custom quadcopter since January, and I’m very pleased with the results so far. It flies, and in this video you’ll see the first test flight. Toward the end, just before the ‘aerobatics’, I disable the automatic flight stabilisation remotely, which causes even more aerobatics. Anyway, the quadcopter was fun to build, and it was a huge learning curve for me: I really enjoyed the challenge of figuring out all the flight dynamics, propeller equations and lift calculations, and of course designing and building the frame, electrical and radio systems.

But it’s not awesome enough yet, not anywhere near it! In fact, check out some of the plans:

  1. I’m currently building a three-axis motorised gimbal that will fit underneath the main airframe. It is going to be connected to an Oculus Rift virtual-reality stereoscopic headset, which will relay the wearer’s head movements to the servos on the gimbal, letting the wearer ‘sit’ in a virtual cockpit and experience flight. My colleague, Rob G, is currently building the most awesome piece of software to power the Oculus’ dual stereoscopic displays, while I finish designing and building the mount and video transmission system.
  2. Cloud-Powered AutoPilot and Flight Command. That’s right: using Windows Azure, I will provide command-and-control functionality using Service Bus, and sophisticated sensor logging through Azure Mobile Services. Flight data and video will be recorded and shared in real time with authenticated users. Why? There’s nothing cooler than Windows Azure, except maybe something that flies around actually in the clouds, powered by the Cloud!

I don’t know where this project will end up taking me, but so far it’s taken me on some very interesting journeys. I’ve had to learn much more about:

  • Circuit design
  • Fluid dynamics
  • Thrust and vector calculations
  • Power system design
  • Radio-control systems (on various frequencies: 5.8GHz, 2.4GHz, 433MHz and 968MHz) and the joys of trying to tame RF energy using antennae
  • Soldering

… The list goes on!

Current Activity

I’m already in the process of building a sensor network on the quadcopter. It comprises:

  • 5 x Ultrasonic range finders (one mounted on each of the four motor arms, plus one facing downward)
  • 1 x Barometric pressure sensor (for altitude, and for airspeed sensing using pitot tubes)
  • 1 x 66-satellite GPS tracking module
  • 1 x Triple-axis accelerometer
  • 1 x Triple-axis gyroscope

The current plan is to use a Netduino to interface directly with the sensors and transform all of the sensor data into a custom message format, which will be emitted over the I2C interface using a message-bus architecture. In this way, the Netduino can also expose ‘virtual’ sensors (there’s a small code sketch after this list), such as:

  • Altitude (based on the downward-facing ultrasonic sensor, falling back to the barometric pressure sensor whenever the quad moves out of ultrasonic range)
  • Bearing
  • Velocity (allowing selection of air/ground speed)
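
To make that concrete, here’s a minimal sketch of what the ‘virtual’ altitude sensor might look like. The 5 m cut-off, the NaN convention and all of the names here are assumptions of mine, not the final firmware:

// Sketch only: the class name, the 5 m usable range and the NaN convention are assumptions.
public class VirtualAltitudeSensor
{
    private const double UltrasonicMaxRangeMetres = 5.0; // assumed usable range of the downward ranger

    // ultrasonicMetres: reading from the downward-facing range finder (double.NaN when there is no echo)
    // barometricMetres: altitude derived from the barometric pressure sensor
    public double Read(double ultrasonicMetres, double barometricMetres)
    {
        bool ultrasonicUsable = !double.IsNaN(ultrasonicMetres)
                                && ultrasonicMetres <= UltrasonicMaxRangeMetres;

        // Prefer the ultrasonic reading close to the ground; fall back to the barometer
        // once the quad climbs out of ultrasonic range.
        return ultrasonicUsable ? ultrasonicMetres : barometricMetres;
    }
}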

The Netduino is an amazing device; however, it doesn’t have sufficient capacity or processing power on board to also interface with the radio-control receiver (which receives pitch, roll, yaw, throttle and other inputs from my handheld transmitter). For that job, I’m going to use a Raspberry Pi (running Mono!). The RPi apparently lets you turn GPIO pins into PWM-capable pins (either generating or interpreting PWM), which is exactly what I need. The Netduino will output the sensor data to the RPi, which will be running the ‘autopilot’ system (more on the planned autopilot modes in a later post).
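
To give a feel for what ‘interpreting’ an RC PWM channel involves, here’s a rough sketch. The IGpioPin interface is a stand-in for whichever GPIO library I end up using (it is not a real API), and the 1000–2000 µs pulse range is the usual RC convention rather than anything I’ve measured:

using System.Diagnostics;

// Stand-in for a GPIO pin abstraction; not a real library API.
public interface IGpioPin
{
    bool Read(); // true = pin is high, false = pin is low
}

public static class RcChannel
{
    // Measures a single RC pulse by polling the pin, then maps it to -1..+1
    // (1000 us = -1, 1500 us = centre, 2000 us = +1).
    public static double ReadNormalised(IGpioPin pin)
    {
        while (!pin.Read()) { }                  // wait for the rising edge
        Stopwatch timer = Stopwatch.StartNew();
        while (pin.Read()) { }                   // wait for the falling edge
        double microseconds = timer.Elapsed.TotalMilliseconds * 1000.0;

        double normalised = (microseconds - 1500.0) / 500.0;
        if (normalised < -1.0) normalised = -1.0;
        if (normalised > 1.0) normalised = 1.0;
        return normalised;
    }
}

Busy-waiting like this is wasteful and jittery; in practice you’d lean on interrupts or hardware capture, but the pulse-width-to-stick-position mapping is the important part.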

It’ll be the job of the Raspberry Pi to interpret the sensor data, listen to the commands received from the handheld transmitter on the ground, and decide what action to take based on the requested input, the sensor data and the currently selected autopilot mode. Sounds simple, but it’s anything but!
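
In outline, I imagine the RPi’s decision loop looking something like the sketch below. Every name here – the modes, the types, the very naive mixer – is a placeholder of mine; the actual autopilot modes are for a later post:

// All of these types and mode names are placeholders; the real autopilot modes come later.
public enum FlightMode { Manual, Stabilise }

public class PilotInput   { public double Roll, Pitch, Yaw, Throttle; } // from the RC receiver, normalised -1..+1
public class SensorState  { public double RollAngle, PitchAngle; }      // from the Netduino's sensor messages
public class MotorOutputs { public double FrontLeft, FrontRight, RearLeft, RearRight; }

public class Autopilot
{
    public MotorOutputs Update(SensorState sensors, PilotInput pilot, FlightMode mode)
    {
        double roll = pilot.Roll;
        double pitch = pilot.Pitch;

        if (mode == FlightMode.Stabilise)
        {
            // Treat the sticks as desired angles and steer towards them using the measured attitude.
            roll = pilot.Roll - sensors.RollAngle;
            pitch = pilot.Pitch - sensors.PitchAngle;
        }

        // Very crude mixer, purely for illustration: yaw and sign conventions are glossed over.
        return new MotorOutputs
        {
            FrontLeft  = pilot.Throttle + pitch + roll,
            FrontRight = pilot.Throttle + pitch - roll,
            RearLeft   = pilot.Throttle - pitch + roll,
            RearRight  = pilot.Throttle - pitch - roll
        };
    }
}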

Even if you’re not interested in the technical details, you can follow this project on hoverboard.io. Thanks for reading.


Open-source FTP-to-Azure blob storage: multiple users, one blob storage account


A little while ago, I came across an excellent article by Maarten Balliauw in which he described a project he was working on to support FTP directly to Azure’s blob storage. I discovered it while doing some research on a similar concept I was working on. At the time of writing this post, though, Maarten wasn’t sharing his source code, and even if he does at some point, his project appears to focus on permitting access to the entire blob storage account. That wasn’t quite what I was looking for, but it was very close…

My goal: FTP to Azure blobs – many users, one blob storage account with ‘home directories’

I wanted a solution that would enable multiple users to access the same storage account, but each have their own unique portion of it – thereby mimicking an actual FTP server. A bit like giving authenticated users their own ‘home folder’ on your Azure Blob storage account.

This would ultimately give your Azure application the ability to accept incoming FTP connections and store files directly into blob storage via any popular FTP client – mimicking a file and folder structure and permitting access only to regions of the blob storage account you determine. There are many potential uses for this kind of implementation, especially when you consider that blob storage can feed into the Microsoft CDN…

Features

  • Deploy within a worker-role
  • Support for most common FTP commands
  • Custom authentication API: because you determine the authentication and authorisation APIs, you control who has access to what, quickly and easily
  • Written in C#

How it works

In my implementation, I wanted the ability to literally ‘fake’ a proper FTP server to any popular FTP client, with the server component running on Windows Azure. I wanted some external web service to do my authentication (you could host yours on Windows Azure, too) and then only allow each user access to their own tiny portion of my Azure Blob Storage account.

It turns out, Azure’s containers did exactly what I wanted, more or less. All I had to do was to come up with a way of authenticating clients via FTP and returning which container they have access to (the easy bit), and write an FTP to Azure ‘bridge’ (adapting and extending a project by Mohammed Habeeb to run in Azure as a worker role).

Here’s how my first implementation works:

A quick note on authentication

When an FTP client authenticates, I grab the username and password sent by the client, pass that into my web service for authentication, and if successful, I return a container name specific to that customer. In this way, the remote user can only work with blobs within that container. In essence, it is their own ‘home directory’ on my master Azure Blob Storage account.

The FTP server code will deny authentication for any user who does not have a container name associated with them, so just return null to the login procedure if you’re not going to give them access (I’m assuming you don’t want to return a different error code for ‘bad password’ vs. ‘bad username’ – which is a good thing).
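
As a rough illustration of that authentication hook (the interface and class names below are my own for illustration, not the project’s actual API), the contract boils down to “username and password in, container name or null out”:

using System.Collections.Generic;

// Hypothetical names for illustration only.
public interface IFtpAuthenticator
{
    // Returns the blob container this user may access, or null to deny the login.
    string Authenticate(string username, string password);
}

public class InMemoryAuthenticator : IFtpAuthenticator
{
    // In a real deployment this lookup would call out to your authentication web service instead.
    private static readonly Dictionary<string, string> Passwords =
        new Dictionary<string, string> { { "alice", "secret" } };
    private static readonly Dictionary<string, string> Containers =
        new Dictionary<string, string> { { "alice", "alice-files" } };

    public string Authenticate(string username, string password)
    {
        string expected;
        if (Passwords.TryGetValue(username, out expected) && expected == password)
            return Containers[username];

        return null; // same response for a bad username or a bad password
    }
}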

Your authentication API could easily be adapted to permit access to the same container by multiple users, too.

Simulating a regular file system from blob storage

Azure Blob Storage doesn’t work like a traditional disk-based system, in that it doesn’t actually have a hierarchical directory structure – but the FTP service simulates one so that FTP clients can work in the traditional way. Mohammed’s initial C# FTP server code was superb: he wrote it so that the file system could be replaced, back in 2007 – before Azure existed, to my knowledge – yet it was so painless to adapt that you could be forgiven for thinking he meant it to be used this way. Mohammed, thanks!
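
To show the idea (rather than the project’s actual code), here’s a small self-contained sketch of how a flat list of blob names can be presented as one level of a directory tree, assuming ‘/’ is used as the pseudo-path separator:

using System;
using System.Collections.Generic;
using System.Linq;

public static class BlobDirectoryListing
{
    // Given every blob name in a container, returns the immediate entries 'inside' a pseudo-directory.
    // Sub-directories come back with a trailing '/', files without one.
    public static IEnumerable<string> List(IEnumerable<string> blobNames, string directory)
    {
        string prefix = string.IsNullOrEmpty(directory) ? "" : directory.TrimEnd('/') + "/";

        return blobNames
            .Where(name => name.StartsWith(prefix, StringComparison.Ordinal))
            .Select(name => name.Substring(prefix.Length))
            .Select(rest => rest.Contains("/")
                ? rest.Substring(0, rest.IndexOf('/') + 1) // first path segment = a sub-directory
                : rest)                                    // no separator left = a 'file' at this level
            .Distinct();
    }
}

For example, List(new[] { "docs/a.txt", "docs/2013/r.pdf", "readme.txt" }, "docs") returns "a.txt" and "2013/".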

Now I have my FTP server, modified and adapted to work for Azure, there are many ways in which this project can be expanded…

Over to you (and the rest of the open source community)

It’s my first open source project and I actively encourage you to help me improve it. When I started out, most of this was ‘proof of concept’ for a similar idea I was working on. As I look back over the past few weekends of work, there are many things I’d change but I figured there’s enough here to make a start.

If you decide to use it “as is” (something I don’t advise at this stage), do remember that it’s not going to be perfect and you’ll need to do a little legwork – it’s a work in progress and it wasn’t written (at least initially) to be an open-source project. Drop me a note to let me know how you’re using it, though; it’s always fun to see where these things end up once you’ve released them into the wild.

Where to get it

Head on over to the FTP to Azure Blob Storage Bridge project on CodePlex.

It’s free for you to use however you want. It carries all the usual caveats and warnings as other ‘free open-source’ software: use it at your own risk.

If you do use it and it works well for you, drop me an email and it’ll make me happy. 🙂

How to speed up your ASP.NET web application


If your web site is slow, it’s annoying to your customers. It’s annoying because nobody likes to wait – we already wait all day in the physical world: in queues at the shops, at the restaurant and even on the telephone. We’re always looking for ‘faster’, because in our web-consumer minds, “faster equals better”. In my personal experience as a software developer, most users share at least one principle:

Better responsiveness equals a better product
– A. Customer

If your application is simple and responsive, people will use it. If it is clunky and slow to load, people are forced to wait. Think of your application (it doesn’t matter if it’s a web or a desktop application) as a racing car. As the manufacturer of that car, you’ll want customers to come and test drive it. You’ll hope that they’ll fall in love with it after driving it and want to buy it. If that test drive is a good experience, they’ll hopefully part with some of their hard-earned cash to pay for it – and bingo, you’ve done what you needed to do: made the sale.

The same principle applies to software: if you deliver a fast, responsive application with a quick user interface, your users are more likely to think you’ve built a better product – whether or not that’s technically true – because to Mr and Mrs User, a slow application is a bad one.

You can optimise your web site in just a few steps

As an ASP.NET developer, here’s a look at (or a reminder of) some of the things you can try before deciding it’s time to dig under the hood and start making more fundamental changes to your application:

Disable debugging in your web.config

When you release an application in debug mode, ASP.NET forces certain files to be sent to the client with each request instead of allowing the browser to cache them. Most people forget to switch debug mode off when they release. This creates overhead for your server and a longer wait for the client. Debug mode also causes other changes in your web application: think of it as a bloated way to release, because the build has to include data and various hooks that allow you to debug the application but aren’t necessary just to run it:

<compilation debug="false"/>

You’ll find the above line inside the <system.web> element of your web.config file.

Enable IIS Request Compression

Request compression is a feature of Internet Information Services 6 and above that compresses content before transmission to the client; the browser then decompresses it. Most modern browsers support this, and enabling it requires no modification to your web site at all. Do bear in mind that compression forces your web server to work harder, because it has to compress data before sending it. This creates a small spike in CPU usage, but for low-to-medium-traffic web sites that really need a performance boost, the extra CPU usage will more than likely be absorbed just fine.

In Internet Information Services 6:

  1. Launch IIS Manager
  2. Right-click the “Web Sites” node
  3. Click “Properties”
  4. Select the “Service” tab
  5. Tick “Compress application files” and “Compress static files”. Be sure to specify a temporary directory with sufficient free resources and consider adding a maximum limit to the temporary directory size.
  6. Click “Apply”
  7. Click “OK”

Request compression isn’t for everybody – be sure to weigh the pros and cons for your particular environment.
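
For what it’s worth, if you’re running IIS 7 or later you can also switch compression on from web.config rather than the management UI – something along these lines (assuming your hosting configuration allows the override):

<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>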

Use page output caching

By default, IIS treats your ASP.NET page as dynamic. In many applications, however, not all pages actually are. Even if they do rely on a database for content, oftentimes it isn’t necessary to hit the database on each request to the page. Output caching can be enabled on a particular page by adding one line to the top of your ASPX file. It is a directive that tells ASP.NET to keep a copy of the rendered page and serve that copy (rather than re-rendering the page) each time it is requested. This includes, for example, any database-generated content from controls on the page itself, or any embedded user controls.

<%@ OutputCache Duration="10" VaryByParam="none"%>

Page output caching can be an extremely effective way to improve your web site’s performance and responsiveness. It’s a lot more flexible than I’ve explained here, and you should be aware that there are all manner of ways in which you can control the cached version of the page (for instance, you can modify the directive to have different cached versions of the page based on a URL parameter). For more information, see the MSDN documentation.
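
As an example of that last point, if a page renders different content depending on a productId query-string parameter (a made-up parameter name, purely for illustration), you could keep a separate cached copy per product for a minute at a time:

<%@ OutputCache Duration="60" VaryByParam="productId" %>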

Next steps

When you’ve done these things, if your application could still use a boost, it’s time to start profiling. You’ve tried the ‘quick fixes’ – the 10-minute jobs that are more than likely going to make things better – but there’s always a chance the problem isn’t with your application per se. The next step is to figure out what’s causing the problem. First identify the scope: is it limited to one user, or a bunch of users in a particular geographic region, or is it everybody? If it’s only a small group of people, it might be down to an ISP having routing issues, and you need do nothing at all. On the other hand, you might find that everyone is affected by the issue.

In that case, what you need to do is investigate where your bottleneck is occurring. Is it your database? Is it your disks? Or is it – yes, hold on a second – more than likely the things you’ve probably overlooked: your images and other media files.

Optimising your images

Many people, particularly in smaller teams, overlook image optimisation. Most image-editing programs will optimise images for you – and this can often reduce a file’s size by anywhere between 5% and 20%, sometimes more. With today’s media-rich sites, look at what you can do to ease the burden.

Using a content delivery network

As your web site grows ever more popular, sometimes the best way to get a performance boost is to let somebody else handle delivery of your ‘resource files’ – these are your static images, scripts, movies, SWF files, etc. One option is to purchase more bandwidth from your supplier. Another is to enlist the support of a Content Delivery Network – kind of like a private, global internet with public endpoints close to your customers.

The benefit of a CDN is that you are effectively outsourcing the delivery of your static files to another – usually much faster – network. Often this will leave your server able to handle more connections than before, since it no longer has to worry about serving up the big files over and over again.

Going direct to one of the big networks can cost anywhere from about $1,000 per month upwards, but there are companies who provide full CDN integration for a fraction of the price.

Good luck with your web site optimisation, and please feel free to leave comments and tips for others.