This is my personal blog. The views expressed on these pages are mine alone and not those of my employer.

Thursday, 27 March 2014

Reliance on implementation details

Recently I stumbled across an issue in a legacy vb.net app which didn't appear to make any sense: a routine for determining the precision of a Decimal was giving different results for exactly the same value.

First of all I wrote a quick test to attempt to replicate the problem, which appeared to happen for 0.01:
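(A rough sketch of that test; I've assumed an NUnit-style test here, and Precision() is the helper shown further down.)

[Test]
public void Precision_Of_Decimal_Created_Directly()
{
    // 0.01 written as a decimal literal has two decimal places
    decimal value = 0.01m;

    Assert.AreEqual(2, value.Precision());
}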


This passed. I then noticed that in a particular method call the signature was expecting a Decimal but was instead being supplied a Float (yes, option strict was off [1]), meaning the Float was being implicitly converted. Quickly writing a test incorporating the conversion:
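(Again a sketch rather than the exact code; the explicit cast below stands in for the implicit conversion the VB code was doing.)

[Test]
public void Precision_Of_Decimal_Converted_From_Single()
{
    // the legacy code supplied a Single where a Decimal was expected
    float value = 0.01f;

    Assert.AreEqual(2, ((decimal)value).Precision());
}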


Causes the issue:


It seems to think 0.01 is to 3 decimal places!

So what's going on here? How can a conversion affect the result of Precision()? Looking at the implementation I could see it was relying on the individual bits the Decimal is made up from, using Decimal.GetBits() to access them:
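(Roughly the following, recreated here as a C# extension method; the key point is that it reads only the flags element returned by GetBits().)

public static class DecimalExtensions
{
    public static int Precision(this decimal value)
    {
        // GetBits returns { lo, mid, hi, flags };
        // bits 16-23 of the flags element hold the scale (the power-of-ten exponent)
        int[] bits = decimal.GetBits(value);

        return (bits[3] >> 16) & 0xFF;
    }
}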


The result of Decimal.GetBits() is a 4-element array, of which the first 3 elements represent the bits that make up the value of the Decimal. However, this method relies only on the fourth element, which contains the exponent. In the first test the value part was 1 and the exponent element 131072; the failed test had 10 and 196608.

Converting to binary shows the difference more clearly; I've named them bitsSingle for the failed test and bitsDecimal for the passing test:
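(The values come from the tests above; the dump below is just illustrative.)

int[] bitsSingle = decimal.GetBits((decimal)0.01f);   // { 10, 0, 0, 196608 }
int[] bitsDecimal = decimal.GetBits(0.01m);           // { 1, 0, 0, 131072 }

Console.WriteLine(Convert.ToString(bitsSingle[3], 2).PadLeft(32, '0'));
// 00000000000000110000000000000000 - scale byte is 3
Console.WriteLine(Convert.ToString(bitsDecimal[3], 2).PadLeft(32, '0'));
// 00000000000000100000000000000000 - scale byte is 2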


As you can see the exponent for bitsSingle is 3 (00000011) whereas the exponent for bitsDecimal is 2 (00000010), which represent negative powers of 10.

Looking back at the original numbers we can see how these both accurately represent 0.01:

bitsSingle has a value of 10, with an exponent of -3: 10 × 10⁻³ = 0.01
bitsDecimal has a value of 1, with an exponent of -2: 1 × 10⁻² = 0.01

As you can see, a Decimal can represent the same value even though the underlying data differs. Precision() relies only on the exponent and ignores the value, so it isn't taking the full picture into account.

But why does the conversion store this number differently than when the Decimal is instantiated directly? It just so happens that creating a new Decimal (which uses the Decimal constructor) uses slightly different logic than the cast does. So even though the number is correct, the underlying data is slightly different.
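A quick way to see both facts at once (again just a sketch):

decimal fromSingle = (decimal)0.01f;  // via the float-to-Decimal conversion
decimal direct = 0.01m;               // the literal is compiled with a scale of 2

Console.WriteLine(fromSingle == direct);                                          // True - same value
Console.WriteLine(decimal.GetBits(fromSingle)[3] == decimal.GetBits(direct)[3]);  // False - different exponent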

This brings us to the point of the article. The big picture here is that you should never rely on implementation details, only on what is exposed through defined interfaces, whether that's a webservice, reflection on a class, or peeking into the individual bits of a datatype. Implementation details can not only change; in the world of software they are expected to.

If you want to play around with the examples above I've uploaded them to GitHub.

[1] I know it's not okay and there's no single excuse for it; however, as usual with a legacy app we simply don't have the time / money to explicitly convert every single type in a 20,000+ LOC project.

8 comments:

  1. I tried this scenario on double and the correct precision of 2 is returned. So does the issue only happen when converting float to decimal?

    1. The reason is that casting from a float and casting from a double are separate operations. This is clear when you view the source of decimal.cs and notice there are separate constructors for each cast. Unfortunately these conversions are implemented externally, so we can't be sure of exactly how it's being done (although this might point you in the right direction). Therefore the answer to your question is that the internals are different, hence you can't rely on them as Precision() does above.

      Does this make sense?

    2. I've updated the example to include your double case, showing that it works.

  2. The real question is how did you *correctly* end up getting the decimal places then? All of the ToString() solutions I found have the same problem as this with float conversions.

    1. Unfortunately a solution was never found. Instead we've become more careful over type checking / conversions and this hasn't happened since. Would love to find an answer to this though.

  3. This is all very well, so long as the framework publishes an appropriate method.
    For example: Decimal d = 1.234M
    The decimal knows it has 3 decimal places,
    but I can see no published method/property that returns this information.
    Having to use GetBits is ugly but unavoidable in this case.

  4. Is this valid for float and double?

    var d10 = 54321.98M;

    var f10 = 54321.98f;

    var double10 = 54321.98;

    1. The problem is with the conversion of float to decimal, which for 0.01f returns 0.010d.
      So the bits give the correct value for the input, but the input is wrong.
      This is because float isn't precise, so the conversion can be a bit wonky as a result.
