This is my personal blog. The views expressed on these pages are mine alone and not those of my employer.

Tuesday, 6 December 2011

Not the usual 'breakpoint will not be hit' solution

I know the woes of the 'The breakpoint will not be hit' problem all too well, having struggled with it personally for years. After days of searching through the usual answers (check or delete your .pdb files and the like), I've found the cause is often much simpler than that:

If you've got more than one instance of Internet Explorer running when you try to debug, close them all and try again.

The number of times Visual Studio has attached itself to the wrong instance is both amazing and inconsistent. This simple trick has saved hours of my time and my colleagues'.

Saturday, 8 October 2011

Who tests the unit tests?

With unit tests fast becoming an accepted way of automating testing and reducing test time, the question has to be asked: how are the unit tests themselves tested?

A good developer can always be described as lazy, because good developers understand that a little effort invested up front will pay off plenty in the future. It's this philosophy that has driven the wide adoption of unit testing and, specifically, test-driven development (TDD).

The problem is that, like all code, unit tests can themselves contain bugs. A prime example, using MSTest in the .NET Framework:

[TestMethod]
public void TestMethod1()
{
    //
    // TODO: I'm not doing anything!
    //
}

Even though this test clearly doesn't test anything, it passes when executed!
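
Contrast that with a test that actually asserts something. A minimal sketch, assuming a hypothetical Calculator class with an Add method:

[TestMethod]
public void Add_ReturnsSumOfArguments()
{
    // Arrange - Calculator is a hypothetical class, used purely for illustration
    var calculator = new Calculator();

    // Act
    int result = calculator.Add(2, 3);

    // Assert - without this line the test passes no matter what Add returns
    Assert.AreEqual(5, result);
}

Remove the Assert and it will "pass" regardless, which is exactly the trap the empty test above falls into.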

There are plenty of guides out there on how to craft a good unit test, focusing on the properties and structure of a quality test, and of course standard refactoring and quality-control practices still apply. But here is the problem:

How can a unit test be completely trusted when it is code itself? Are unit tests simply "passing the buck" of responsibility?

Unit tests are produced by developers, so how can a developer know for certain that a unit test accurately represents a requirement? Do we test it? How? And if so, who tests the unit tests?

Thursday, 14 July 2011

Leaving router on at all times, good idea?

Having recently subscribed to a new broadband service and been sent the customary 'free' wireless router, I was posed with the following question: should I leave the router powered on the whole time?

This particular broadband provider was nice enough to include the answer in their FAQ, and it is not at all what I expected [copied directly from their FAQ]:

FAQ: Should I leave my router switched on all the time?
Yes. Leaving your router on will make sure you get the best speed and performance from your service.
Don't switch it off at night! Regularly switching off your router can make it look like your service is disconnecting. If this happens, your broadband speed will be reduced because the exchange thinks your line is unstable and can't cope with higher speeds.

Aside from any technical arguments, I can think of a plethora of reasons why this might in fact be a bad idea:

- Security
Physical security is the best kind of security, keeping attackers' grubby mitts well away from the bottom layer of the seven-layer OSI model. For a wireless, omnidirectional service, physical security can only be achieved by removing the power, stopping any would-be intruders in their tracks.

Leaving it on constantly, however, massively increases the attack surface available to anyone attempting to gain access. Once access is gained, all kinds of activity can take place that could land the fee-paying subscriber in plenty of trouble: sending spam, stealing passwords, masquerading and cookie sniffing, among others.

- Wasting power
Anything left drawing electricity while not in use can be deemed a waste, so minimising any such time is preferable.

- Fire risk
Leaving the router on all the time increases the likelihood of a short circuit or other malfunction, which is a fire risk in itself.


All of which means I will be isolating my router for the foreseeable future. To be honest, who would risk those points for the sake of a possible, and temporary, drop in speed?

Monday, 7 March 2011

The key to another world

Having recently stumbled upon the World of Warcraft authenticator device, I became intrigued as to exactly how these work. For those of you who haven't seen one:



The authenticator produces a special code at the press of a button, which the user needs when logging into their WoW account. The device itself is completely standalone, the code changes every few seconds, and you cannot log in without one. Surely this has to be some kind of magic? Well, no - it's pretty simple really...

The devices are manufactured with a hard-coded serial number known to the device. In addition, each contains a real-time clock which is set during manufacture. The device therefore has two pieces of information at its disposal: the hard-coded serial number and the current time. Note how there's no connection whatsoever to anything else.

The serial number printed on the back of the Authenticator

When you first receive your device you must "synchronise" your online account with the serial number (which is reproduced on a sticker on the back). Since time is universal and the serial number never changes, the WoW server and the device both have access to the same two variables.

The server and authenticator both generate a code based on the current time and serial number. These are concatenated to come up with one long sequence of numbers, like so:
Sum of known values = [Current Time] + [Authenticator Serial No]
which could be:
Sum of known values = 12:37 + 1412668222
so you would end up with a number like 12371412668222, which both the authenticator and the server can generate given a specific time.

The problem is that this number can be captured by anyone through a variety of methods: looking at your screen, installing a keylogger, phishing, or any number of other attacks.

To mitigate this, the number is encrypted using DES, 3DES or AES (whichever the device supports), which turns it into something meaningless, such as: 63634545.

Because both parties (the server and the authenticator) generate these numbers separately, at the same time, and encrypt them using the same algorithm, both will calculate the same result. This makes full use of the powerful combination of something you know - your username and password - with something you have - your authenticator. A similar step has been introduced in Google accounts' 2-step verification.

To allow for the user taking a while to submit their code, it's likely the WoW server will accept a range of codes, generated from a couple of minutes prior to the actual time.
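
A minimal sketch of how such a scheme might work is below. It is illustrative only: the real device's algorithm and key handling aren't public, the serial number stands in as the shared secret, and a hash replaces the DES/3DES/AES step for brevity.

using System;
using System.Security.Cryptography;
using System.Text;

class AuthenticatorSketch
{
    // Illustrative only - the serial number acts as the shared secret here
    static string GenerateCode(string serialNo, DateTime utcNow)
    {
        // Round the time down to a 30-second window so both sides agree
        // even if the code is typed in a few seconds late
        long window = utcNow.Ticks / TimeSpan.FromSeconds(30).Ticks;

        // Combine the two shared values, as described above
        string combined = window.ToString() + serialNo;

        // Scramble the combined value so the serial is never exposed
        // (a hash stands in for the device's DES/3DES/AES step)
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.ASCII.GetBytes(combined));

            // Reduce a few bytes of the digest to an eight-digit code
            uint truncated = BitConverter.ToUInt32(digest, 0);
            return (truncated % 100000000).ToString("D8");
        }
    }

    static void Main()
    {
        Console.WriteLine(GenerateCode("1412668222", DateTime.UtcNow));
    }
}

The server runs exactly the same calculation and, to cater for slow typists, can simply accept codes from the previous few time windows as well.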

This provides the following advantages:
  • The serial number is never sent over the wire in plain text, so it cannot be captured
  • The code changes with the time and with each use, so a code captured over the wire is of little value: it must be used straight away and cannot be used again
  • Unless an attacker can collect a great many codes, the serial number cannot be reverse-engineered
I hope I've explained how these devices work and shown how it's possible to manufacture them for pennies while thwarting a range of attack vectors.


Saturday, 19 February 2011

Keeping it simple - stupid

Einstein once said, "Everything should be made as simple as possible, but not simpler" - which applies as much to modern software development as it does to physics.

You see - modern software development is brimming with abstractions, which exist solely to provide simplification by hiding complexity.

These abstractions tend to come with 'buzzword' names; here's just a handful:

- Web Services
- ASP.NET Web Forms
- User Controls

These are all 'cool' words to find on anyone's CV, and asking a software developer about each would get you volumes in reply. At the end of the day, though, all of them boil down to abstracting the underlying messages being passed across a network. Imagine having to piece together each HTTP packet, adding the headers and populating the payload - it would take an age to build anything, never mind a decent website, user control or web service. That is the problem abstraction solves.
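
To make that concrete, here's roughly what working one rung down the ladder looks like - a GET request pieced together by hand over a raw socket (example.com is just a stand-in host):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class RawHttp
{
    static void Main()
    {
        // No abstraction: open a TCP connection and hand-assemble the HTTP request
        using (var client = new TcpClient("example.com", 80))
        using (var stream = client.GetStream())
        {
            byte[] request = Encoding.ASCII.GetBytes(
                "GET / HTTP/1.1\r\n" +
                "Host: example.com\r\n" +
                "Connection: close\r\n" +
                "\r\n");
            stream.Write(request, 0, request.Length);

            // Read back the raw response, headers and all
            using (var reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}

Every abstraction in the list above exists so that nobody has to write this sort of plumbing for every page, control or service call.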

However, abstractions are very limited in their scope, and it's too easy for the inexperienced to try to abstract too much. Abstractions keep things simple, but applied too widely they make things simpler than they should be. And we know good old Einstein wouldn't be impressed with that.

To illustrate this point I'd like to describe something I recently experienced.  A requirement of ours was to provide nightly backups across the internet, between two separate systems. To achieve this the following would take place:
  • The backup would be uploaded over FTP
  • A message would be sent to the recipient of the backup when complete (to allow it to process the backup)
The developer in charge decided that the message sent after the upload completed should be a method call to a web service, which would also pass the hash of the uploaded file so it could be verified. There were a number of problems getting this set up, and while the FTP side of things worked perfectly, the web service didn't.

The point I'm making is that a web service is an abstraction over HTTP, and is complete overkill for a simple method call. A much better (and simpler) way would be to create a page whose URL allows the hash to be passed in as a parameter, and which in turn carries out the same actions as the web service.
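
A minimal sketch of what that might look like, assuming an ASP.NET generic handler - the handler name, the query string parameter and the VerifyAndProcessBackup method are all hypothetical stand-ins for whatever the web service method did:

using System.Web;

// Called as e.g. /BackupComplete.ashx?hash=... once the FTP upload finishes
public class BackupComplete : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string hash = context.Request.QueryString["hash"];

        if (string.IsNullOrEmpty(hash))
        {
            // Nothing to verify
            context.Response.StatusCode = 400;
            return;
        }

        // Verify the uploaded file against the supplied hash and process it
        bool ok = VerifyAndProcessBackup(hash);

        context.Response.StatusCode = ok ? 200 : 409;
    }

    public bool IsReusable
    {
        get { return true; }
    }

    private bool VerifyAndProcessBackup(string hash)
    {
        // Placeholder: compare the hash against the FTP'd file and kick off processing
        return true;
    }
}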

Doing it this way allows a number of benefits over the web service option:
  • Any machine connected to the internet could send the 'completed' message as there is no reliance on the .net framework
  • A lot of overhead has been removed
  • There's no need to install, debug, and mess around with the web service way of doing things
The developer involved had the notion that using a web service is somehow more secure, which sounds downright dangerous to me. If you've got an open web service, anyone can play around with it just as easily as they can with a plain page - underneath, it's just abstracting a bunch of HTTP calls.

I therefore urge all software developers out there to think before you type anything - remember, this is what a specification is for! Are you trying to abstract something which is already simple? You wouldn't nail a picture to the wall with a sledgehammer, would you?

Tuesday, 8 February 2011

ASP.NET ComboBox

Want a ComboBox for your ASP.NET application?  One that works? Cross browser? Well you're in luck.

A ComboBox is a control that displays a TextBox combined with a ListBox - which enables the user to either select an item from the list or type their own.  They have many uses - as I'm sure most of you are aware.

I've always had a hard time understanding why Microsoft haven't provided a decent implementation of a ComboBox for ASP.NET applications. They have tried - there's one in the AJAX Control Toolkit - but it's pretty darn useless if you ask me.

Don't get me wrong, there are plenty of permutations of ComboBox out there, and as the adage goes, "great programmers don't write what can be stolen", but the best ones aren't free and I just couldn't justify paying for something that should be. So I got to work and made my own:




Pretty nice, isn't it?

Features of the control:
  • Nice and smooth sliding action
  • It's lightweight and simple
  • Renders in all browsers
It uses jQuery, so you'll have to include a script tag wherever you use it. I'm using this one, kindly hosted by Google:

<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
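
Server-side, the control essentially boils down to a composite of a TextBox and a ListBox. Here's a rough sketch of the general shape - not the actual source, which is in the download:

using System.Web.UI.WebControls;

// A rough sketch of the idea only - not the downloadable control itself
public class ComboBox : CompositeControl
{
    private readonly TextBox _text = new TextBox();
    private readonly ListBox _list = new ListBox();

    // The selectable items, exposed just like a ListBox
    public ListItemCollection Items
    {
        get { EnsureChildControls(); return _list.Items; }
    }

    // Whatever the user has typed or picked
    public string Text
    {
        get { EnsureChildControls(); return _text.Text; }
        set { EnsureChildControls(); _text.Text = value; }
    }

    protected override void CreateChildControls()
    {
        // The TextBox takes free-typed input; the ListBox holds the choices
        // and is shown and hidden with a jQuery slide on the client
        Controls.Add(_text);
        Controls.Add(_list);
    }
}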

Overall it took me around two hours to create, so why, nine years after ASP.NET was first released, can Microsoft still not provide a decent clone of the Windows ComboBox?

My version is by no means finished - it was created to make a point. However, by providing the source here I've hopefully filled a gap in the ASP.NET control selection. Keeping it here also means I should never lose it, and others can use it as they wish.

A couple of things you may want to implement into this control:

  • Remove the unnecessary postback when selecting an element, using JavaScript
  • Allow the height of the listbox to change dynamically depending on its contents
  • Add a load more methods (perhaps to bring it in line with the AJAX Control Toolkit)
  • Ajaxify it - what isn't these days?

I'm providing it as a Visual Studio 2008 solution so you can fire it up straight away; however, it's trivial to open in other versions. You can grab it here. I must say I'm quite proud of it.

All in all it's a simple control, but it works perfectly for what I need. Please download the source and modify it as you wish; however, I'd be grateful if you cited me as the source. Perhaps Microsoft will even pick up on it (I can wish).

Saturday, 22 January 2011

The often underrated value of Googling skills

I recently stumbled upon this question over at Stack Overflow:
can anyone help me find out url from given paragraph using regex c#
From reading this you can learn a few things about the asker:
  • They have *some* software knowledge as they know they want to use regular expressions (regex)
  • They didn't use punctuation
  • They don't know how to use Google
Programming is such a large area, and one constantly undergoing change, that it simply isn't possible to know it all - and anyone who tells you they do is likely already light-years behind.

The internet however does know all of the answers, and any programmer at any level should know about and exploit this excellent resource. Of course you know this - because you're doing it right now.  In fact it has been said that software developers are simply Googling machines.

What's at the centre of the internet? It's Google, of course! Google is a tool - period - and anyone who's been tasked with writing even a single line of code should know how to use it, and use it to its full potential. And since you're reading this, that likely means you.

So why aren't educational establishments teaching Googling skills to computer science students? I graduated in 2008 and certainly didn't get any formal guidance.

Which brings me back to the question above. The asker would have got an answer instantly if they had simply known how (and had the motivation) to do a search. Perhaps if all software-related education taught the use of Google, that question would never have been asked. Of course, the asker could simply have been lazy - in which case they shouldn't be developing software at all...
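
For the record, the kind of answer a ten-second search turns up looks roughly like this - a sketch using .NET's Regex class with a deliberately simple pattern:

using System;
using System.Text.RegularExpressions;

class FindUrls
{
    static void Main()
    {
        string paragraph = "See http://example.com and https://stackoverflow.com/questions for details.";

        // Match http/https URLs up to the next whitespace character
        foreach (Match m in Regex.Matches(paragraph, @"https?://\S+"))
        {
            Console.WriteLine(m.Value);
        }
    }
}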

Monday, 17 January 2011

Assumptions in software

It's no secret that I prefer beautiful code. I prefer code that reads like a set of comments - a running commentary of the execution path, with zero ambiguity, that makes its intent crystal clear.

This makes good sense when you realise code is the only medium that allows us to communicate with machines - it lets us speak the same language. And just as in English, if you do not directly specify each and every possible fact, assumptions get made.

Now if an assumption is not what was intended, strange results may occur - some may even say exceptional results. Bill Gates is reputed to have said in 1981 that "640K ought to be enough for anybody", and we all know how wrong that turned out to be.

Exceptional circumstances are dealt with by most modern programming languages (.NET in particular) with - you guessed it - exceptions. This set of classes is the equivalent of the software throwing up its hands and refusing to carry on until the failed assumption is resolved. This is good, as we know that assumptions at best cause ambiguity and at worst are a show-stopper.

My beef is with a certain construct that goes by the name "else". The problem is that it encourages assumptions to be made - and computers don't have intuition. Imagine the following scenario:

A school wants to survey the average age of its students and would like some software to take the input and perform the calculation. If this were paper-based and an age came back as 135, my intuition would tell me it cannot be true - so I would write the software to discard such a value too.

Once done the code looks like this:

private void addAge(int userInputtedAge)
{
    if (userInputtedAge < 100)
    {
        // They've entered a valid range of ages
        calculateMean();
    }
    else
    {
        // The age wasn't less than 100
        MessageBox.Show("Please enter a valid age");
    }
}

As you can see, by using the else construct you have potentially introduced a bug: the user could enter a perfectly valid int value of -10 and it would be treated as a genuine age.

If I re-wrote this software without using the else construct, I would be forced to check for each expected condition explicitly:

private void addAge(int userInputtedAge)
{
    if (userInputtedAge >= 100)
    {
        // 100 or above - outside the expected range
        throw new ArgumentOutOfRangeException();
    }
    if (userInputtedAge < 0)
    {
        // Less than zero
        throw new ArgumentOutOfRangeException();
    }

    calculateMean();
}

Writing without the else has forced me to stop being lazy - and in the process I removed a potentially serious bug.

"So what?" you might be thinking. The fact of the matter is that any assumption included in your code can be manipulated and abused. Here is a real-life example I found.

My conclusion is simple: if I'm expecting a particular range of inputs, I should specifically test for each one. If an input is not what is expected, it is by definition exceptional, and the software should throw a tantrum rather than carry on regardless. Software should be designed in a way that causes bugs to show themselves, not hide away quietly.

So please - next time you think of writing else, stop and ask yourself, "Do I really mean anything else?"

Monday, 3 January 2011

Are programmers any good without Google?

Having recently read that apparently 199 out of 200 applicants for programming jobs can't write code at all, I thought I'd put myself to the test.

The crux of that post is the FizzBuzz problem.

My results were rather surprising: although I managed to code a solution within 10 minutes, I had to perform the following Google searches:
  • c# if whole number
  • c# multiples
and I finally ended up asking a rather dumb question over at Stack Overflow.
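
For anyone unfamiliar with the problem, a minimal FizzBuzz sketch looks something like this - the modulo operator being the piece my searches were circling around:

using System;

class FizzBuzz
{
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            // The modulo operator (%) answers the "is it a multiple?" question
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}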

This strikes fear through me, as it has become apparent that I am one of those people who has a degree yet can still stumble when asked to carry out a basic programming task in an interview.

I've often wondered about this fundamental aspect of programming - are programmers reduced to being simple Googling machines? Are our jobs simply a matter of looking up other people's code and making it work for you?

I'm not so sure about the value of coding during interviews, as they tend to want you to code solo, without help. But when in the real world are you working on a computer without internet access? Obviously I'd take issue with someone spending hours on the internet to solve a trivial problem, but looking up syntax or framework quirks - surely that's excusable?

There are thousands of aspects to the .NET Framework, and I challenge anyone who says they "know" it all. Saying a programmer doesn't need the internet is, I believe, the equivalent of saying a Christian doesn't need a Bible because they "know" it all.

So what are the general thoughts on this?