Hi. 

I'm rubbish at blogging. I get motivated to do it every year, spend about 1 week looking for a new blogging platform of choice, get something working that I think "will do for now", then forget all about it, albeit with some occasional pangs of guilt.

But this time I think things might be different. This time I think I've got lots to talk about. 

The Brief


My work situation changed recently, and I found myself part of a very small startup with a fairly new web application (all ASP.NET MVC) that we needed to take off our current hosting setup and move somewhere in "the cloud".

The old setup looked like this:

  • Hosted in 2 data centres, each with 2 very beefy web servers
  • A MySQL backend database that was replicated across both data centres
  • A bunch of back-end processes that ran in a Windows service, scheduled with Quartz.NET
In all honesty, there was no need for the web app to be load balanced for performance's sake - it was more about resilience, and the ability to release updates with no downtime made it a nice choice. The web servers served other apps too, so this made sure we had enough resource.

Ok, so this was working well, but as part of a restructure, we needed to move onto something that wasn't hosted on our servers, and we wanted something that:

  • Was a good fit for our .NET tech stack, with good support and tooling
  • Was reasonably fast
  • Was fairly reliable and established
  • Was flexible
In addition, we were looking for the option that seemed the most exciting and future-proof - I'm thinking support for features we might need down the line, like good options for caching, location-based load balancing, that kind of thing.

Well, as the title of this post alludes to, we chose Azure. And we haven't looked back since. Well, not much.

So, in doing this, I learnt a lot about Azure, and a lot about tweaking a web app to work well on Azure. It's not just a case of getting your app to work on a new PaaS: there is a lot to consider once you've settled on your platform choice. How do you handle things like resilience, when your new platform won't behave quite as reliably as your trusty old server? You need to adjust your code to work well with this new world of PaaS, and I'm hoping to write up a lot of what I found out.

I'll start off with a summary of how I decided to structure what I had in Azure (DB setup, Azure setup, how to handle back-end processes etc), and then I'll dive into more detail on various aspects.

As I post new things, I'll add some links here to turn it into something of a series of posts.

Looking forward to it :)



We have an existing web application that uses a MySQL database. Being a fairly mature application in some parts, it uses ADO.NET directly for data access, so you've got hand-crafted SQL being fired against the database.

We wanted to bring this more up to date, and have a lightweight DAL that doesn't stomp all over the existing stuff (so ideally complements it rather than entirely replacing it). From personal experience over the last few years using Linq to SQL and Entity Framework, I wanted something that didn't rely on Linq for the query language. I love Linq - it's a joy to use, and a simple way to execute simple queries against the database quickly. But when you start getting serious with your queries - even when you just throw a few LEFT JOINs in there - you start to suffer: how many Stack Overflow posts are there in the vein of "How to write this SQL in Linq…"?

So, because of this and the experience we'd built up firing SQL using an IDbConnection object, it looked like an ideal set of requirements for a micro-ORM like Dapper or Massive. I discounted Massive because it maps to dynamic objects, and we already had a bunch of mostly-POCO objects that we wanted to map to, so enter Dapper!

Dapper!

Dapper is one of those little projects that makes you smile when you use it (or is that just me?). You get 2 extension methods on top of an IDbConnection, the main one being .Query<> which allows you to fire SQL against the db and get the results back as objects of your choosing. As a noddy example:

var products = conn.Query<Product>("Select * from products where DateCreated = @DateCreated", new { DateCreated = date });

It's pretty obvious what this line is going to do; the nice thing is that when you run it you get the results automatically mapped into a list of Product objects. Nice.
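
(For reference, Product in these examples is just a plain POCO - something like this sketch, with made-up properties:)

    using System;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime DateCreated { get; set; }
    }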

The source for the Dapper class is a single file you can throw into your project. The code uses a reasonable amount of reflection internally to do its magic, and more than a sprinkling of direct IL emitting, which frankly scares me and impresses me in equal measure :)

Column Mapping

One thing that immediately became an issue for us was how Dapper maps columns in the database to properties on the result object. By default it looks for exact matches, so a column productid in the database maps to a property ProductId in the class. But for us, this straight mapping did not exist. For our database, the column names sometimes needed entirely remapping (e.g. db.name -> Class.Title), or for the most part we just needed to resolve underscores (e.g. db.product_id -> Class.ProductId). Fortunately, Dapper is pretty flexible, and it gets round this with an ITypeMap interface. You can implement this interface and hook it into Dapper. For us, we used a simple combination of allowing a class property to have an attribute that defines the column mapping, and failing the presence of that, we handle the removal of the underscores manually. If anyone is looking to do this, I'd recommend taking a look at the CustomPropertyTypeMap class in Dapper. That class allows you to define your own rules, and you can then plug it into Dapper using SqlMapper.SetTypeMap (it's explained well in this Stack Overflow post). There's a rough sketch of the idea below.
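
Something like this minimal sketch shows the idea - the ColumnAttribute here is a hypothetical home-grown attribute (not part of Dapper), and the mapping rules are just illustrative:

    using System;
    using System.Linq;
    using Dapper;

    // Hypothetical attribute for the "entirely remapped" cases (db.name -> Class.Title).
    [AttributeUsage(AttributeTargets.Property)]
    public class ColumnAttribute : Attribute
    {
        public string Name { get; private set; }
        public ColumnAttribute(string name) { Name = name; }
    }

    public static class DapperMappings
    {
        // Prefer a [Column("...")] attribute on the property; failing that,
        // match the db column with its underscores stripped (product_id -> ProductId).
        public static void Register<T>()
        {
            var map = new CustomPropertyTypeMap(typeof(T), (type, columnName) =>
                type.GetProperties().FirstOrDefault(prop =>
                    prop.GetCustomAttributes(typeof(ColumnAttribute), false)
                        .Cast<ColumnAttribute>()
                        .Any(attr => attr.Name.Equals(columnName, StringComparison.OrdinalIgnoreCase))
                    || prop.Name.Equals(columnName.Replace("_", ""), StringComparison.OrdinalIgnoreCase)));

            SqlMapper.SetTypeMap(typeof(T), map);
        }
    }

You'd call DapperMappings.Register<Product>() once at startup, and from then on .Query<Product> resolves both the attribute mappings and the underscore ones.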

Wrapping Dapper

Because Dapper is a simple class that provides minimal ORM functionality, there's a lot that you don't get that you might want. For example, here's some simple EF/L2S code that shows a common use case:

	var o = context.Products.First(p => p.ProductId == 1); // Get a product from the db
	o.Name = "Updated"; // Update something
	context.SaveChanges(); // Tell the context to update the database with the change to the product.

Now, you can’t do that in Dapper. It’s not a fully fledged ORM like that. You’d have to do something like:

	var o = connection.Query<Product>("Select * from Products where Id = 1").First();
	o.Name = "Updated";
	connection.Execute("Update Products set Name = @Name where Id = 1", new { Name = o.Name });

In fact, that code is pointless, because the first 2 lines serve no purpose. But you get the point. Fortunately, Dapper is designed to be used as a building block, allowing you to create the kind of functionality you need. I didn't want or need full-blown change tracking or any of the other stuff EF just does for you. But I did want to be able to send updates/inserts to the DB without hand-crafting SQL all over the place. I also wanted a class that was a little bit more like the Context class that EF gives you, so that I could access all my db tables from it. This is where Dapper Rainbows came in for me…

Dapper Rainbows was written by Sam Saffron to wrap Dapper the way he wanted to wrap it. He's written a great blog post about it and how it works with Dapper here. Again, it's a one-file drop-in that works with Dapper. It gives you a nice wrapper that looks like this:

    public class MattsDatabase : Database<MattsDatabase>
    {
        [Table(Name = "tblproducts")]
        public Table<Product> Products { get; set; }
    }

So, using this, I can now rewrite that product update to something that looks like this:

	var o = myDatabase.Products.Query("Select * from Products where Id = 1").First();
	o.Name = "Updated";
	myDatabase.Products.Update(o);

Much nicer IMHO. This simple wrapper is taking care of a lot of the mundanity of building up the update SQL. It wasn't as simple as just using the Dapper.Rainbows file, however - as Sam's blog post says, it's very opinionated - for example it assumes that all PKs are ints called "id", and that you're using SQL Server. None of these were true for us: we have MySQL and some funky rules about where the ID values come from. So we created a heavily modified version of the Rainbows file that works for our needs. We've got all our db rules nicely encapsulated in there, and that's quite nice.
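
To give a flavour of the kind of thing a Rainbows-style wrapper does for you, here's a rough sketch of building an UPDATE from a POCO with reflection and firing it with Dapper's Execute. This is illustrative only - the class and names here are made up, not our actual modified Rainbows code:

    using System.Data;
    using System.Linq;
    using Dapper;

    public class SimpleTable
    {
        private readonly IDbConnection _connection;
        private readonly string _tableName;
        private readonly string _idColumn;

        public SimpleTable(IDbConnection connection, string tableName, string idColumn)
        {
            _connection = connection;
            _tableName = tableName;
            _idColumn = idColumn;
        }

        // Builds "update <table> set A = @A, B = @B where <id> = @__id"
        // from the POCO's properties, then executes it via Dapper.
        public int Update(object idValue, object data)
        {
            var propNames = data.GetType().GetProperties().Select(p => p.Name).ToList();
            var setClause = string.Join(", ", propNames.Select(p => p + " = @" + p));
            var sql = string.Format("update {0} set {1} where {2} = @__id",
                _tableName, setClause, _idColumn);

            var parameters = new DynamicParameters(data);
            parameters.Add("__id", idValue);
            return _connection.Execute(sql, parameters);
        }
    }

A real version would cache the reflection per type and skip the key column in the SET list, but that's the general shape of what the wrapper saves you from writing by hand.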

DAL-tastic

The only missing piece of the picture was making sure that the DAL logic was nicely encapsulated in its own "layer". After discussion, we decided that we didn't want the Dapper class to be referenced directly from the MVC controllers, because then we've got a dependency on the database that is going to make testing difficult. (We did toy with the idea of just going with that and having a test db, and might actually come back to it, but for now getting a test db set up is tricky for us.) We had 2 options here:

1) Make our Dapper Rainbows wrapper class implement an interface (IAppDatabase or similar), and work against that interface in the controller. If we did this, the tests could get a bit unwieldy when we're mocking out the database.

2) Create a simple DAL class - for example a ProductsService class that implements IProductsService, with methods like GetProduct, Update and Insert, where the implementation uses the Dapper wrapper (there's a sketch below). We went with #2. Yes, I know, it does look rather like a repository pattern, doesn't it :) And yes, I accept that it's adding a lot of DAL noise to the project. It's something we're reviewing as we go along, and we might very well change our minds as we use this.
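
For the curious, option 2 looks roughly like this - a minimal sketch, where the names (IProductsService, ProductsService, MattsDatabase) are illustrative rather than our actual code:

    using System.Linq;

    public interface IProductsService
    {
        Product GetProduct(int id);
        void Update(Product product);
        void Insert(Product product);
    }

    public class ProductsService : IProductsService
    {
        private readonly MattsDatabase _db;

        public ProductsService(MattsDatabase db)
        {
            _db = db;
        }

        public Product GetProduct(int id)
        {
            // The Dapper wrapper is the only thing here that knows about SQL.
            return _db.Products.Query("Select * from Products where Id = @Id", new { Id = id }).First();
        }

        public void Update(Product product)
        {
            _db.Products.Update(product);
        }

        public void Insert(Product product)
        {
            _db.Products.Insert(product);
        }
    }

The controllers take an IProductsService, so the tests can mock that interface without ever touching a real database.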


Windows has an awesome set of command line tools, doesn't it? I mean, you can, erm, umm, run batch files, and erm, that's about it really. OK, so it sucks. It's the worst thing about Windows for a developer, a stinking pile of 20-year-old-at-least badger poop.

There are a lot of things you can do, however, to drag it out of the abyss into something that you wouldn't be too embarrassed to show to a *nix developer. Scott Hanselman has posted a couple of things about it, like this and this older one.

Anyway, since I re-installed Windows (again) on my laptop, I thought I'd show off my current command prompt setup. I'll probably do a lot more to it, but for now this is working really well for me…

Introducing…

[Screenshot: my command prompt setup]

So this gives me things like:

  • Tabbed command prompts
  • Copy/Paste that works like a bash shell on *nix
  • Colour output
  • Proper git integration, (notice the branch name in the prompt)
  • Persistent command history
  • Unix commands (ls, grep, sed, etc) in addition to the normal windows ones

It’s pretty easy to set up, here’s what I have:

Console2

Note: it looks like there are some interesting alternatives to Console2 that Scott talks about - ConEmu looks especially promising…

Console2 is what gives you tabbed windows, sorts out select/copy/paste, and a generally much nicer command prompt experience. It just wraps your command line tool (like cmd.exe), which takes me on to…

PyCmd

PyCmd is a replacement command line extension for cmd.exe. It wraps it, and adds in lots of lovely features, like the persistent command history between sessions, and proper tab-completion. It’s a lovely little tool and worth playing around with. It also did that git branch highlighting you see in the screenshot.

You can tell Console2 to use PyCmd as the command line tool, and they both work together to give you a better experience all round.

Git

I just install the standard Windows git installer (msysgit), and when the installer asks where I want to put the *nix tools, I tell it to merge them with the Windows tools (basically, it's the option with the red warning text!). Personally, I like this option because it means I always have access to the *nix tools it installs without having to load a different shell.

That’s it! I’m planning on looking at SSH tools next.

Happy command-lining !


A couple of years ago, if I were making a new page for a web app and wanted to stick some JavaScript on the page (nothing fancy - let's say a bunch of functions, some jQuery plumbing etc.), I would have just done something like:

var thing = 1;

function doSomething() {
	// Do something exciting...
}

$(document).ready(function() {

	// Here comes all my jquery event plumbing
	$('#thing').click(....);
	...

});

Which is, well, OK. It's not great though, for a number of reasons, the main one being that it pollutes the global namespace: all those variables and functions are declared in the global scope. It means you can easily break things by writing code that changes your objects - perhaps you include another .js file that also has a variable called thing in it (let's hope not, eh!) - you can see the pain!

Aside from this, it's also bad because it doesn't encourage well-written code - functions are just thrown in there, with no grouping of code into classes or modules or whatever would make sense.

These days I'm trying to structure my JS code in a way that avoids all these issues, and currently my default solution is to use the module pattern: an immediately invoked function expression that wraps the functions in a closure and provides private/public scope. It looks like this:

var ThingPage = (function() {

	// Private scope..

	var thing = 1;

	function doSomething() {
		// Do something exciting...
	};

	return {
		init: function() {
			// Do page init stuff
		},

		doSomethingElse: function() {
			// Do something
		}
	};
}());

$(document).ready(function() {
	ThingPage.init();
});

There's a nice write-up of this pattern here.

I think this is a bit cleaner. I'm only putting ThingPage into the global scope, and I can decide which methods I want to expose, making it easier to work with and maintain. There's loads of room for improvement I'm sure - one of the biggest issues I have with this is that I can't really unit test it very easily, as my JS logic is mixed up with all my jQuery code. Something like Angular or Backbone might help with that, but it seems an extreme next step, especially if there is just a modest amount of JS code on the page.


I was lucky enough to get a shiny new MacBook Pro at my new job - one of the new 2013 MacBook Pro Retinas. Setting it up for (mostly) .NET development has been a bit of a task, so I thought I'd share my experiences here :) The challenge:

Setting up an optimal environment for .NET development on a Retina Mac, with a "normal" external monitor attached.

The laptop is a lovely thing, and the Retina display is very nice indeed. In OSX everything looks super-crisp and everything runs as fast as you'd expect. When it comes to running Windows on a Mac, you've got 2 options: you either go down the road of virtualisation and use something like Parallels or VMware Fusion (or VirtualBox on a budget), or you use Bootcamp to install Windows natively and boot into it (you can actually do both, and create a Bootcamp partition and use it within OSX as a VM - most of the virtualisation tools support this now). Both virtualisation and native have their ups and downs; the biggest issue for me was resolution…

Option 1 - Native Windows via Bootcamp

OK, so you opt for Windows natively. This is nice and fast, and arguably the best way to do .NET dev when you're going to be using VS.NET 2012, ReSharper etc. You just need to sort the display out. "Retina" basically just means "huge resolution", so in Windows you need to address the fact that nothing is readable because it's so small. The easiest way to do this is to set the DPI in Windows to something like 150-200%. This works OK - most apps behave with a larger DPI, but there are some (including some MS apps) that don't look so good and don't obey the DPI setting. Generally though, this is great… until you throw in an external monitor. What happens is the DPI setting is applied to the second monitor, which is running at a normal resolution. Cue enormous text and icons, and an unusable external display :(

To fix this, well, there is no fix. The best I've been able to do is to "scale" the resolution on the Retina display - I set the DPI back to 100% and set the laptop's resolution to a non-native, much lower setting. This works OK - I can now use the external display AND the laptop one together, but it does look a bit blurry / rubbish on the laptop. Which is a shame, considering it's a super nice Retina screen.

There's one more problem - buggy Bootcamp. For me, running Windows 8, every time I boot into Windows the Bootcamp control panel loses its settings and I have to re-apply them (to turn on 2-finger right click, for example). Also, there is no 3-finger text selection support, which is slightly naff. Maybe re-installing the Bootcamp drivers will help here…

UPDATE: I fixed this by re-installing the Bootcamp drivers.

Apart from the Bootcamp bugs and the slightly fuzzy Retina screen, all is well. Performance is impressive, and everything works well.

Option 2 - Windows in a VM

This is the other option. Most of the modern virtualisation tools (I'm using Parallels) allow you to run your native Bootcamp partition as a VM, so I gave that a whirl.

I was surprised how well Parallels handles running my Windows partition. I gave it 4 gig of RAM and half the video RAM, and it runs quite nicely. There is a small lag noticeable compared to Bootcamp - I think the video card could be the main culprit there though. Building in .NET is fast too.

The external display problem is kind of fixed - depending how you want to run. I tend to run Windows full-screen on the external display, and OSX on the laptop display. This gives me nice Retina OSX AND Windows running at a normal DPI on the external display, so no blurry screens. If you want Windows on both screens, however, you still have exactly the same issues as Bootcamp.

So, the Bootcamp niggles are gone, and the display niggles are gone, but there are new niggles to replace the old niggles ;) The niggles that bother me in Parallels are not about performance, but just getting the keyboard and mouse set up. The keyboard shortcuts can tend to clash a bit (OSX shortcuts happen when you expected Windows shortcuts), causing confusion when you're in Visual Studio expecting to bring up a ReSharper menu and instead OSX does something funky! You can configure all this in Parallels; I've just not yet found a good mix that stays out of the way enough. Also, using a mouse can be tricky. I'm not a fan of the OSX mouse movement - it's horrible compared to Windows (I think it's the acceleration algorithm). So you're stuck with the OSX mouse movement, and on top of that, Windows in a VM just doesn't handle the mouse as well as native.

Still, you do get to use both windows and OSX in this setup, which can be very nice.

UPDATE - Windows 8.1 might fix all the Bootcamp resolution issues

So, it looks like Windows 8.1 might well fix all the above resolution-related issues. This is awesome news - I hope it works!