Posted by Sam on Feb 11, 2008 at 05:56 AM UTC - 5 hrs
When I posted about why it's important to test everything first, Marc Esher asked:
What do you find hard about TDD? When you're developing and you see yourself
not writing tests but jamming out code, what causes those moments for you?
And have you really, in all honesty, ever reaped significant benefits either in
productivity or quality from unit testing? Because there's a pretty large contingent
of folks who don't get much mileage out of TDD, and I can see where they're coming from.
My TDD Stumbling Blocks
I'll address the first bit in one word: viscosity. When it's easier to do the wrong thing
than the right thing, that's when I "see myself not writing tests but jamming out code."
But what causes the viscosity for me? Several things, really:
- When I'm working with a new framework or technology and I don't know how to test it: I'm trying
to avoid this now by learning languages by unit testing.
However, it's still tough. I started writing tests in C# .NET recently, but moving things to
ASP.NET has made me stumble a bit. That's mostly because I didn't take the time to understand
how it all worked before I started using it, and now I'm in the process of rewriting that code before it becomes too unwieldy.
- UIs: I still don't understand how to test them effectively. I like Selenium for the web,
but most tests I write with it are brittle. Because of that, I write them flippantly. It's a
vicious cycle too: without learning what works, I won't get better at identifying strategies to
remove the viscosity, so I won't write the tests.
- Finally, and most of all: legacy code bases, with no tests and poor design. It's so much easier to hack in a fix than it
is to refactor to something testable. I've yet to read Michael Feathers' Working Effectively
With Legacy Code, so that may help when I finally do. (You can find a twelve-page PDF article
at the Object Mentor Resources website.)
At the minimum, it should help motivate me to follow the elephant more often.
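One technique associated with that book is the characterization test: before touching scary legacy code, write a test that pins down what the code does right now, so a refactoring can't silently change it. Here's a minimal sketch in Python - the function and its behavior are invented for illustration, not taken from any real project:

```python
# A characterization test records what legacy code currently does
# (right or wrong) before you refactor it. The function below is an
# invented stand-in for real tangled legacy logic.
def legacy_price_code(qty, rate):
    # imagine pages of logic nobody dares touch
    total = qty * rate
    if qty > 10:
        total = total - total / 10
    return total

# Pin down the existing behavior, whether or not it is "correct":
def test_small_quantities_multiply_straight_through():
    assert legacy_price_code(5, 2.0) == 10.0

def test_bulk_orders_currently_get_a_tenth_knocked_off():
    assert legacy_price_code(20, 2.0) == 36.0

test_small_quantities_multiply_straight_through()
test_bulk_orders_currently_get_a_tenth_knocked_off()
print("behavior pinned down")
```

With those tests in place, you can start untangling the code and know immediately if you've changed what it does.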
That last one is a killer for me. When I'm working on new projects, it's incredibly easy to write
tests as I develop. So much so that I don't bother thinking about not
doing it. Unfortunately, most
of my work is not in new code bases.
I should also note that I often don't bother unit testing one-off throwaway scripts, but there
are times when I do.
On top of that, my unit tests rarely stay unit-sized. I generally just
let them turn into integration tests (stubbing objects as I need them while they are still
unit-sized). The only time I bother with mocks is when the integration piece makes the tests
take too long to run.
For example, I might let the tests hit a testing database for a while, but as the tests become unbearable
to run, I'll write a different class to use that just returns some pre-done queries, or runs
all the logic except for the actual database calls.
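To make that concrete, here's a minimal sketch of the idea in Python - the class and the query names are invented for illustration:

```python
# The class under test takes its query source as a constructor argument,
# so a test can hand it a stub instead of a real database connection.
class CustomerReport:
    def __init__(self, query_source):
        self.query_source = query_source

    def active_customer_names(self):
        rows = self.query_source.fetch_customers()
        return sorted(r["name"] for r in rows if r["active"])

# The stub just returns some pre-done query results - no database,
# no network, so the test runs in a blink.
class StubCustomerQueries:
    def fetch_customers(self):
        return [
            {"name": "Ada", "active": True},
            {"name": "Bob", "active": False},
            {"name": "Cy",  "active": True},
        ]

report = CustomerReport(StubCustomerQueries())
assert report.active_customer_names() == ["Ada", "Cy"]
```

Swapping the stub back out for the real query class is how the same test grows into an integration test when you want one.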
What about rewards?
In Code Complete 2
, Steve McConnell talks about why it's important to measure experiments when
tuning your code:
Experience doesn't help much with optimization either. A person's experience might have
come from an old machine, language, or compiler - when any of those things changes, all
bets are off. You can never be sure about the effect of an optimization until you
measure the effect. (McConnell, 603)
I bring that up because I think of TDD (and any other practice we might do while
developing) as an optimization, and to be sure about its effects, I'd have to measure it.
I haven't measured myself with TDD and without, so you can take what follows as anecdotal
evidence only. (Just because I say that, don't think you can try TDD for a couple of days,
decide it's slowing you down, and conclude it doesn't bring any benefit - it takes a while to
realize many of the benefits.)
So what rewards have I noticed? Like the problems I've had, there are a few:
Better design: My design without TDD has been a train wreck (much of that due to my
past ignorance of design principles), but it has improved, and continues to improve, as a result of TDD.
After all, TDD is a design activity. When writing a test, or determining what test to write next, you
are actively involved in thinking about how you want your code to behave, and how you want to
be able to reuse it.
As a byproduct of writing the tests, you get a very modular design - it becomes harder to do
the wrong thing (bad design), and easier to keep methods short and cohesive.
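In miniature, that design conversation looks something like this - a Python sketch with invented names, where the tests come first and decide the interface, and the simplest implementation follows:

```python
# Written first: these tests decide what the function is called, what it
# takes, and what it returns - design decisions made before any code exists.
def test_orders_over_100_get_ten_percent_off():
    assert discounted_total(150.0) == 135.0

def test_small_orders_pay_full_price():
    assert discounted_total(40.0) == 40.0

# Written second: the simplest implementation that makes the tests pass.
def discounted_total(amount):
    return amount * 0.9 if amount > 100 else amount

test_orders_over_100_get_ten_percent_off()
test_small_orders_pay_full_price()
```

Notice the function had to be small and free-standing just to be testable - the modularity falls out as a byproduct.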
Less fear: Do you have any code that you just hate to touch because of the horror it sends
down your spine? I do. I've had code that is so complex and wrapped up within itself that I've
literally counseled not changing it for fear of it breaking and not being able to fix it. My
bet is that you've probably seen similar code.
The improved design TDD leads to obviously helps with that to some extent. But there may be times
when, even though you've got a test for something, it's still ugly code that could break easily.
The upside, though, is that you don't need to fear it breaking. In fact, if you think about it,
the fear isn't so much that you'll break the code - you fear that you won't know you've broken it.
With good tests, you know when you've broken something, and you can fix it before you deploy.
Time savings: It does take some time to write tests, but not as much as you might think.
As far as thinking about what you want your code to do, and how you want to reuse it, my
belief is that you are doing those things anyway. If not, you probably should be, and your
code likely looks much the same as some of that which I have to deal with
(for a description, see the title of this weblog).
It saves time as an executable specification - I don't have to trace through a big code base
to find out what a method does or how it's supposed to do it. I just look up the unit tests
and see it within a few clean lines.
Most of your tests will be 5-7 lines long, and you might have five tests per method. Even
if you just test the expected path through the code, ignoring exceptions and negative tests,
you'll be a lot better off and you'll only be writing one or two tests per method.
How long does that take? Maybe five minutes per test? (Which would put you at one minute per line!)
Maybe you won't achieve that velocity as you're learning the style of development, but certainly you could
be there (or better) after a month or two.
And you're testing anyway, right? I mean, you don't write code and check it in to development
without at least running it, do you? So, if you're programming from the bottom up, you've
already written a test runner of some sort to verify the results. What would it cost to
put that code into a test? Perhaps a minute or three, I would guess.
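For example, the throwaway "run it and eyeball the result" check moves into a test almost verbatim. Everything here is invented for illustration:

```python
# Code under test (invented): a base rate plus $2 per kilogram over the first.
def shipping_cost(weight_kg):
    return 5.00 + max(weight_kg - 1, 0) * 2.00

# Before: a throwaway check you run once and eyeball.
#   print(shipping_cost(3))   # looks right... delete it and move on
# After: the same check, kept around and re-run at the push of a button.
def test_first_kilogram_costs_the_base_rate():
    assert shipping_cost(1) == 5.00

def test_each_extra_kilogram_adds_two_dollars():
    assert shipping_cost(3) == 9.00

test_first_kilogram_costs_the_base_rate()
test_each_extra_kilogram_adds_two_dollars()
```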
And now when you need to change that code, how long does it take you to log in to the application,
find the page you need to run, fill out the form, and wait for a response to see if you were right?
If you're storing the result in the session, do you need to log out and go through the same process,
just to verify a simple calculation?
How much time would it save if you had written automated tests? Let's say it takes you two
minutes on average to verify a change each time you make one. If it took you half an hour
of thinking and writing five tests, then within 15 changes you've broken even and the rest is gravy.
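Spelled out, the back-of-the-envelope math looks like this (the numbers are my guesses above, not measurements):

```python
minutes_per_manual_check = 2   # log in, click to the page, fill the form, wait
minutes_to_write_tests = 30    # half an hour to think through and write five tests

# Every change verified by the suite instead of by hand saves ~2 minutes,
# so the suite pays for itself after this many changes:
break_even_changes = minutes_to_write_tests / minutes_per_manual_check
assert break_even_changes == 15
```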
How many times do you change the same piece of code? Once a year? Oh, but we didn't include all the
changes that occur during initial development. What if you got it wrong the first
time you made the fix? In many cases, a piece of code changes 15 times before you've even got it
working.
Overall, I believe it does save time, but again, I haven't measured it. It's just like all those little
things you do that take a few seconds at a time - you don't notice them. Instead, you think
of them as little tasks that get you from one place to another. That's what TDD is like, but
you don't see it that way if you haven't been using it for a while. You see it as an
extra task - one more thing to do. Instead, it replaces a lot of tasks.
And wouldn't it be better if you could push a button and verify results?
That's been my experience with troubles and benefits. What's yours been like? If you haven't
tried it, or are new to it, I'm happy to entertain questions below (or privately if you prefer) as
best I can.
Leave a comment
Sam, truly a fine post. Mind if I link to it from our site? Seems like a good candidate for a "Resources" section.
Posted by marc esher
on Feb 11, 2008 at 10:01 AM UTC - 5 hrs
Thanks Marc. I wouldn't mind at all - in fact, if more people find this useful, I'd be incredibly pleased.
Posted by Sammy Larbi
on Feb 11, 2008 at 11:03 AM UTC - 5 hrs
Very nice post!
I'd just add my pet bugbear about TDD - false senses of security. Tests are like a different form of documentation, they *have* to be kept up to date or they can be not just useless, but actively harmful.
Unit tests have certainly saved my skin on several occasions, but I don't believe they'll ever be a complete substitute for someone actually clicking through things and checking if it "looks right". In the worst case, when you're really in a rush, it can be dangerously tempting to commit a change and even release it without checking, on the grounds that "my tests passed, so it must be ok"..... and then get bitten because you'd never even thought to write a test called
testWhenXisLoadedAndYclickedAndThenZclickedAndThenXTurnedOffAndThen (....repeat for ten or so more levels....) ThenAisVisibleButNotB
To be sure, unit tests are a valuable weapon in the save-my-arse-for-me arsenal, but they're not enough by themselves.
Posted by Al Davidson
on Feb 11, 2008 at 11:17 AM UTC - 5 hrs
I think we can test UIs effectively now that we have web testing tools that use non-proprietary languages like Java, C#, Python, and Ruby, along with their respective debuggers and IDEs.
But using TDD with UIs - that is another story.
Some tools have an interesting, very productive record mode - or should I say, code snippet generator.
So if the UI is not there first, they are useless.
Check out <a href="http://www.InCisif.net">InCisif.net</a> record mode, and for a quick way to practice TDD for the .NET platform, the open-source <a href="http://www.quickUnit.net">quickUnit.net</a>.
Posted by ftorres
on Feb 11, 2008 at 06:12 PM UTC - 5 hrs
Great post! Coupla me-toos:
1: *Every* time I've forced myself or another programmer to write a test for some trivial thing that didn't really need it 'cause we'd already run the app and we *knew* the change worked - we found a bug.
2: Legacy code - Selenium is your friend. Yeah, the tests are brittle. No, they're not exhaustive. But hey, you're clicking through the app anyway, right? Why not just leave the record button on and throw in a verifyText whenever you see something significant?
Posted by Jaime Metcher
on Feb 11, 2008 at 06:23 PM UTC - 5 hrs
@Al Davidson: You are right. Looking back over it, I could have been clearer - certainly TDD is no substitute for acceptance testing!
@ftorres - Do you have any resources that explain good testing strategies for UIs? I've read a few descriptions but didn't get much out of them.
@Jaime - I've had your number 1 item happen to me several times as well. As for number two - I agree Selenium can be a decent testing tool for legacy apps, and you may as well turn it on. But the tests are so hard to maintain that I rarely bother doing it. What I would like to see is a tool that would check the code you've changed and find the Selenium tests that hit any page calling that code, letting you know which ones they are. That way, at least you don't have to remember which tests need to be redone (and re-run them).
The downside to that is that if you have a class many pages use, it could still be tedious. Overall, I still regard Selenium in a positive light.
Posted by Sammy Larbi
on Feb 11, 2008 at 07:40 PM UTC - 5 hrs