Saturday 27 October 2018

C# Exception Handling: Catching, Throwing, and Performance


  • Use `throw;` not `throw ex;`
  • Use `catch` blocks as few times as possible
  • Reserve Exceptions for Exceptional conditions: 
    • 1 in 100 is not exceptional 
    • 1 in 1,000 probably isn't exceptional either

The Stack Trace

The Stack Trace is extremely useful for debugging errors and can significantly shorten the cycle time of your work. There is, however, a common mistake made across many codebases that makes it less useful than it could be.

Let's start with an example...

A New Support Request

Your support team has identified an issue in production; they've diligently compiled:
  • The user affected
  • The time it occurred
  • A bunch of other useful information 
  • The actual error message 
  • The Stack Trace
Your stack trace looks like this:
System.Exception: I wasn't able to complete that request!
  at <Library>.<Service>.DoWork() in /Library/Service.cs:line 21
  at <Library>.<Controller>.UserRequestedAction() in /Library/Controller.cs:line 21
You feel confident you can fix this quickly, after all, you have the actual error message and stack trace! Then you open up the code at the top of the stack trace - Service.cs:
    public void DoWork()
    {
        try
        {
            var service = new ExceptionThrowingService();
            service.ErroneousMethod();
        }
        catch (Exception e)
        {
            // Perform special handling, logging, etc.
            throw e; // Line 21!
        }
    }

Uh oh, that isn't where the actual error message occurred!

In the sample code we can probably deduce that the error occurred in either the constructor of the service or the call to ErroneousMethod(). You may have been more fortunate than I have, but rarely does the code I inherit look as simple as this. A Try/Catch block like this might contain as few as 5 lines, but is often somewhere between 50-200 lines, and unfortunately, sometimes 1,000 lines or more.

In addition, the dependencies that are utilised (such as ExceptionThrowingService in this instance) mean that the actual error could have occurred somewhere within one of those dependencies, and therefore the number of lines of code to search for the error is significantly larger than just the lines within this Try/Catch block.

Maintaining the Stack Trace

Fortunately, maintaining the stack trace is easy: replace `throw e;` with `throw;`. Simply omitting the Exception object results in .NET rethrowing the exception without touching the stack trace.
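Applied to the earlier example, the fix looks like the sketch below (`ExceptionThrowingService` and `ErroneousMethod` are the illustrative stand-in names from the sample, not a real library):

```csharp
public void DoWork()
{
    try
    {
        var service = new ExceptionThrowingService();
        service.ErroneousMethod();
    }
    catch (Exception e)
    {
        // Perform special handling, logging, etc. (e is still available here)
        throw; // Rethrows the same exception without resetting its stack trace
    }
}
```

The support team's next stack trace will then point at the line inside ErroneousMethod() where the error actually occurred, with DoWork() appearing further down the trace.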

Stack Trace Recommendation

Ensure that your default coding style is to use `throw;` without including the Exception object. You can then selectively choose when you want to rewrite the Stack Trace.

It should be extremely rare that you want to overwrite the Stack Trace on the Exception object that you caught. If you are going to overwrite the Stack Trace, then create a new Exception. You may or may not want to include the caught exception as the Inner Exception depending upon your context.
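When you do intentionally replace the stack trace, the pattern might look like this sketch (the exception types, messages, and method names are illustrative, not from the original sample):

```csharp
using System;

class Wrapper
{
    // Stand-in for whatever dependency actually fails.
    static void FailingOperation() => throw new FormatException("bad input");

    static void DoWork()
    {
        try
        {
            FailingOperation();
        }
        catch (Exception e)
        {
            // Deliberately starting a new stack trace, but keeping the
            // original exception (and its trace) reachable via InnerException.
            throw new InvalidOperationException("Unable to complete the request.", e);
        }
    }

    static void Main()
    {
        try { DoWork(); }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.Message);                 // the new, wrapping exception
            Console.WriteLine(ex.InnerException?.Message); // the original cause
        }
    }
}
```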


The Performance Cost of Exceptions

While Exceptions are a powerful tool, they come at a cost; one of those costs is execution time. If you throw one Exception, you are unlikely to notice the cost, but if you throw thousands, or millions, then you are most likely paying a performance penalty for your choices.

Personal Anecdote: I have improved performance from hours to a couple of minutes by changing control flow from Exception throwing to alternative control flow such as return values. I have experienced this saving on multiple occasions.

The Stats

Using a skeleton implementation we can compare the execution time of a loop with exceptions vs a loop that doesn't throw exceptions. On a reasonably powerful modern laptop the timings come out at:
Iterations   With Exceptions (ms)   Without Exceptions (ms)
100          3                      < 1
1,000        25                     < 1
10,000       282                    < 1

This aligns with my personal experience: loops that perform calculations and/or validations across a large number of objects/records can often improve performance significantly by decreasing reliance on Exceptions.

These results are only going to be indicative when the execution time of the "work" being performed for each item in the loop is limited. This is often true when there is no reliance on a data store or network-dependent services inside the loop.
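The skeleton implementation isn't reproduced here, but a minimal stand-in gives the same shape of result: parsing a bad input with `int.Parse` (which throws) versus `int.TryParse` (which reports failure via its return value). This is a sketch, not the original benchmark, and timings will vary by machine:

```csharp
using System;
using System.Diagnostics;

class ExceptionCostDemo
{
    // Exception-based control flow: int.Parse throws FormatException on bad input.
    static int ParseOrThrow(string s) => int.Parse(s);

    static void Main()
    {
        const int iterations = 10_000;
        int failures = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { ParseOrThrow("not a number"); }
            catch (FormatException) { failures++; }
        }
        Console.WriteLine($"With exceptions:    {sw.ElapsedMilliseconds} ms ({failures} failures)");

        failures = 0;
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            // Return-value control flow: no exception is thrown on bad input.
            if (!int.TryParse("not a number", out _)) failures++;
        }
        Console.WriteLine($"Without exceptions: {sw.ElapsedMilliseconds} ms ({failures} failures)");
    }
}
```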

Performance Recommendation

Exceptions inflict a significant performance penalty whenever they are thrown. Exceptions should be reserved for exceptional circumstances. If 50% of the calls to a method result in an Exception being thrown, then this is not an exceptional situation, this is business as usual.

I appreciate that if the data is invalid, then from a theoretical point of view, it may be correct that an exception be thrown. However, if you want the services that you write to scale and deliver responsive user experiences, then consider the performance impact of all Exceptions that you throw and use them sparingly.
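One practical alternative is the Try-pattern that .NET itself uses for methods like `int.TryParse`: expected failures are reported through the return value, reserving exceptions for the genuinely exceptional. The validation rule below is hypothetical:

```csharp
using System;

class Validation
{
    // Hypothetical rule: invalid input is an expected outcome here,
    // so it is reported via the return value rather than a thrown exception.
    public static bool TryValidateAge(int age, out string error)
    {
        if (age < 0 || age > 130)
        {
            error = "Age must be between 0 and 130.";
            return false;
        }
        error = null;
        return true;
    }

    static void Main()
    {
        if (!TryValidateAge(-5, out var error))
        {
            Console.WriteLine(error); // handled as business-as-usual, no throw
        }
    }
}
```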

Example Code

Please see the example on BitBucket for a minimalist demonstration of these concepts.


Saturday 21 July 2018

Why I Stopped Seeking Acknowledgment

One of the biggest improvements I made as a leader (and probably a work colleague!) was to stop seeking acknowledgment from others. I used to be one of those annoying people who would call out a shortcoming (or opportunity to improve) and talk about it with someone until they acknowledged that the issue existed.

We don't like admitting when we are wrong. I'm sure you can think of your own examples, be it children denying any involvement, car accidents and traffic infringements, or mistakes made at work.

As a result, these interactions were very frustrating for me, and I'm sure that they led to resentment towards me by these individuals.

I'd love to tell you an awesome story about how I reflected on these interactions and intentionally experimented with different ways to make these conversations more effective, but that wouldn't be true.

The real story is that I was having a conversation with a member of my team and something came up that wasn't working. I forget the topic now, but when it came up the reaction from the other person was very defensive. We were interrupted before I could delve into the excuses and blame that the person expressed (of which I thought there was plenty!).

To my surprise, the next week I observed the person changing the way that they worked, and I had the change that I had been seeking!

This was the moment that caused me to reflect and question the way I had been behaving and what my real drivers were.

It is quite uncomfortable (but probably not that surprising) to acknowledge that even though I saw the change that I wanted occurring, I still felt that something was missing because there had been no acknowledgment. My instinct was to bring it up with the person in our next catch up, and it was very hard to fight that strong desire.

This was a turning point for me, and I intentionally changed the way I offered feedback so that I fought against the urge to seek acknowledgment that an issue exists. This has led to a significant increase in the adoption of behavioural and technical suggestions by those I manage, mentor, and/or coach.

Even though it has been many years, and the desire to push for acknowledgement is long gone, it still feels nice when acknowledgement occurs. Maybe I am not fully cured after all!

I have since applied the same approach to organisational and product improvement suggestions, but I'll write about that in a future post.

Agile and the Spotify Yardstick

I was lucky enough to attend LAST Conference 2018 last week and it reminded me to finish my reflections on a session I attended 3 years ago at LAST Conference 2015!

At the time, one session in particular made me really stop and think about 'agile' and how organisations and teams just starting on their journey can so easily get lost in information overload. I believe this is just as true today as it was 3 years ago.

It was @muir_maria's session "It's okay to be Hybrid" that started it all. @muir_maria very rightly pointed out that there is a spectrum (a very wide one) with 'Waterfall' at one end and 'Agile' at the other, and at the agile end, she put Spotify as the yardstick. This represents what many in the industry believe (or seem to): that they aren't really agile until they have copied all the practices that Spotify are using.

Don't get me wrong, Spotify must be a great place to work, and they are extremely advanced in their agile practices, however, I'm not convinced that the Spotify model is the right model for every other organisation on the planet, nor do I believe that the Spotify model is perfect (their marketing machine has to be commended for the impact that this has had, I'm sure that their subscriber base is significantly higher because of it).

I believe that the key to their success is that they are continually identifying possible ways to improve, being brave enough to have a go at these new ways, and keeping the things that work while discarding the things that don't. If you rinse and repeat that enough times, you are going to be in a great place. They are doing great things, but if another company in another industry was that aggressive with continuous improvement, would they end up with the same model? Possibly. But I'm tipping they would end up with a different model, one that was appropriate for the problem domain that they are solving.

There are a number of problems with continually measuring ourselves against the Spotify, Netflix, Atlassian, or any other specific organisation's model. The first is that it stifles innovation: we have all these organisations and intelligent people so focused on whether they are as good as 'Org X' that they try to apply aspects of the 'Org X' model to their business instead of identifying an improvement based around their own problem domain (the team, the business model, the regulatory environment, etc.).

The bigger problem is that for organisations new to agile (and even a number of organisations who have been practicing agile for a while), they think that the only way they can be agile is to do all the things that 'Org X' does. That isn't going to be achievable, as it is a huge mountain to climb, and I'm not surprised that a lot of organisations aren't signing up to drop everything they know to do what 'Org X' does.

I'm not advocating that we stop observing and copying practices from successful teams, that definitely needs to continue, I'm proposing that the culture and practices of regular reflection, resulting in experimentation and further reflection using the shortest possible cycle time (weeks and days, not months) is the more important goal to be chasing.

Monday 10 November 2014

Health Hack 2014

I participated in the inaugural HealthHack in 2013 (my thoughts on that are here), and I was so impressed with the event that I signed up again this year. The format was very similar to last year's event, except that it was simultaneously held in both Melbourne and Sydney.

Friday Night

I (and many other volunteers) listened to the researchers pitch their ideas. My first impressions of the problems and their descriptions were less than stellar. I'm embarrassed to admit, I seriously considered calling it quits on Friday night and not joining a team. My views ranged from "there is no way I can add value to that team" to "what are they even trying to solve?". I was tired, and possibly a little intimidated by the amount of energy that everyone else had. But I was there, and I had been really looking forward to the event, so I went around the room listening to the researchers and asking questions that could give me a better understanding of how solving their problem would help the field of medical research, and whether or not my skills and knowledge would be useful.

In the end, I found @MVEG001's problem around the transparency of NHMRC funding to be the one that I could provide the most value on, and I placed my name on the team list and went home for the night with some links to the data sources that our team would most likely be basing our solution(s) upon.


Saturday

We had a reasonably late start on Saturday, with the team coming together a little after 10. By this time, many of the other teams were already flat out coding, or had some very detailed flow diagrams that they were having animated discussions about.

Shortly after gathering together for the first time as a team, it was announced that the midway showcase would be at 4:30pm. Considering we had only just met each other, I was a little anxious about what we would have ready by that time!

We spent all of the morning getting a handle on all the data that was available, then gathered together to dream about the data that we wish we had, and finally culled all the options back to those pieces of data that were relevant and achievable in the timeframe that we had. @IrithWilliams did a great job of keeping the group creative and focused, and also ensured that we were going to implement a solution that would solve the most important problems for @MVEG001.

It was a little after 2pm when we started on the implementation. @sritchie73 had the first cut of the data cleansing done in about an hour, which was quite impressive, and this allowed the rest of us to start plugging real data into our visualisations.

By the time the showcase came around, we had a solid understanding of the problem we were trying to solve, and how we were going to solve it. 

I ended up leaving at about 9pm, and when I got home I got a little carried away and worked on the backend processing for my visualisation until 12am. 


Sunday

We were all working hard on our visualisations for most of Sunday morning. When lunch time came around, we provided feedback on the different visualisations, and put some finishing touches on them. Somewhere in the middle of that @kiwintessential managed to expand our cleaned data set by another 9 years.

The afternoon was spent putting all our solutions together into one location, setting a licence, choosing colour palettes, and generally tidying things up. In addition, @fredmichna organised us all well enough to produce a video for our showcase. Unfortunately, we had a lot of trouble getting the iPad to play nice with the AppleTV, and we had to do our showcase the old fashioned way.

The team showcases were impressive, every team had delivered a huge amount in that short timeframe, and from what I could tell, every researcher was extremely happy with what their team had helped them deliver. 

Closing Thoughts

Once again, @sauramaia and her army of helpers put together a great weekend. I left feeling that our team had made a difference and made @MVEG001's life that little bit better. I was also pretty exhausted, but in a good way.

For those that are interested, here is our team page, and here is the list of winners for Melbourne.

Monday 15 September 2014

The First Bad Day Without a Car

Last Friday night we had our first bad experience without a car. It wasn't that bad really, but what would have been a 30 minute trip home from my parents' house wound up taking nearly one and a half hours.

It all started with one late bus. Because it had been a big day and the girls were tired, we decided to catch the bus the 1km to the train station to ease the load on their little legs. Had the bus been on time, we would have had 5 minutes to spare at the train station until the next train rocked up. Unfortunately, we missed the train by a couple of minutes and had to wait 25 minutes for the next one.

This train left the station at just past 8pm, and we assumed that it was stopping all stations. Unfortunately, we assumed wrong: the train stopped at all stations until it arrived at Camberwell, at which point it ran express all the way to Richmond.

In the end, we ended up getting home at 9pm, after having left my parents house to wait for the bus at a little before 7:30pm.

We are not going to be put off by one bad experience, but this was one occasion where having a car would have been significantly more convenient than waiting around in the cold for public transport to link up effectively.

Thursday 31 July 2014

Living Without a Car

If you had asked me 10 years ago whether I could live without a car, I would have looked at you as if you were crazy. I grew up on a farm nearly 2 hours from Melbourne, and having a car was a necessity for a social life. I also remember loving driving, but thinking back on it, I think I loved the freedom that driving gave me, not the driving itself.

Fast forward to today, and I live in Hawthorn (in the inner suburbs of Melbourne), work in the CBD, am surrounded by excellent public transport, and using a car is more often than not slower than walking to my destination (as an example, it once took me 40 minutes to drive 2km, and that particular stretch regularly takes 20 minutes to drive).

We (my wife, 2 young daughters aged 3 and 5, and I) primarily use our car for visiting people, but over the last year we have been replacing car trips with public transport in a lot of these cases. It might take a bit longer sometimes, but it means that we are more active as we need to walk a little more, and it affords us a much greater amount of quality time together.

When we are on public transport together we are able to observe the world around us, have conversations, and play games, all without "the driver" missing out. Given the driver was almost always me, I now have a little more time with my daughters than I did before, and that is a win I'm happy to have (hopefully my daughters see it that way too :-P).

It has been 4 weeks since we have used the car, and I don't miss it. The real test will be once it is sold (soon hopefully) and we no longer have that safety net. We won't totally deny ourselves the use of a car, we will rent a car when we need/want one (weekends away, mountain biking trips, etc.).

Although it isn't the primary reason for making the change, we do expect it to save us money given the estimated cost of owning a car is around $200 a week according to the RACV. We intend on tracking the costs associated with hiring cars and taking public transport we wouldn't otherwise utilise in an effort to measure the truth of this.

Our initial goal is to last 6 months without a car. But I'm quietly confident that we can go at least a couple of years without one. I'm looking forward to seeing how it all pans out.

Tuesday 29 July 2014

Why We Started With Chef

I like to think that we are reasonably mature with our release processes at TAL, we release to production regularly (every 2 - 4 weeks depending on the application), our code is built once and deployed many times, and it has been a while since we have had to roll back a production release.

All that said, it is still more painful than it should be, more could be automated, our outage windows could be reduced further, and our management of configuration items (web.config etc) isn't great. We are continually improving most of these, and I have no doubt that with the introduction of Octopus Deploy (coming soon) we'll get a nice jump ahead on these issues.

One thing that isn't done well though, and we haven't had an answer to for quite a while, is server configuration management. For base server config, patches, service packs etc. our services team do an excellent job at keeping things up to date and consistent across the fleet. But for things like enabling MSMQ, web roles, applying permissions, and changing system default configurations, we apply them manually to each VM we spin up and use.

This means that when we need a QA, Dev, or other environment, we spend at least a day setting up the VM: installing/configuring, attempting to run our app, realising something was missed, installing/configuring again, repeat.

It also means that we don't have an inventory of what customisations are required for our servers before they are fit for our apps to run successfully on them. We have a wiki page which is manually updated (when people find issues AND remember to update it), but anything manual like that inevitably ends up out of sync eventually.

We did some proof of concept work a little while ago with both Chef and Puppet, and for us Chef came out in front. Our latest work has got us to the stage where we can get a workable VM set for one of our apps up and running in well under an hour. That is an awesome development, and we are super excited.

We do still have a few things to sort out. For starters we are guilty of most of these anti-patterns. We are trying to move away from these and follow The Berkshelf Way, utilising Test Kitchen, ChefSpec, etc. This has led to a long list of things to learn and try out including Vagrant and Packer. There is also a whole lot of frustration around getting all of this working with Windows as most of the doco is very Linux focused.

We are confident that we'll find the process that suits our team soon, and even if it isn't perfect, we are still way ahead of where we were, and incremental improvement is much better than no improvement, some people may even argue that incremental improvement is the best kind.