Friday, April 24, 2009

Experimentation Takes Discipline


One of the key principles of any Agile methodology involves "inspecting and adapting." For those who are in the position of Agile Coach or Scrum Master, that sometimes gets represented in the mythical handbook as "encouraging experimentation." I personally think that it's one of the least understood and (perhaps accordingly) most poorly-applied Agile concepts, for a variety of reasons.

For most of us, success is almost always a goal. We want to do well at whatever we take on. And why not? Our performance reviews focus on how successful we were in various aspects of our job; our sense of self-worth is often dependent on how successful we believe others perceive us to be; and it simply feels much better to "win" than to "lose." Unfortunately, though, experimentation brings with it the ever-present prospect of at least the appearance of failure, especially if one isn't careful about how one approaches such things.

For example: if you take a bit of a gamble on something new in the course of your job (or personal life) and can see only two possible outcomes - it either works, or you're screwed! - then it's not hard to imagine that a "bad" result might make you less likely to try anything like that again. The problem could be that there was simply too much riding on the outcome to justify the risk in the first place, or that you didn't frame the experiment properly. For the latter case, a better setup would have been: "either this works and I have my solution, or I learn something valuable that gets me closer to finding the right answer." As long as learning and moving forward from your new position of greater enlightenment (what some short-sighted folks might indeed call "failure") is an option, then experimentation is a good avenue to go down.

What that implies, however, is that there's at least some degree of discipline involved. Randomly trying things until you find one that works, for example, isn't experimentation. What's required are some parameters that say, right from the start, what you're intending to learn from the experience. Determining whether your current architecture can support the new load that you're considering adding to it is an experiment that you might undertake, but ideally you'd like to get more than simply a "yes/no" answer from it. "Yes" is fine as an outcome (woohoo!), but you'd really like to elaborate on "no" with some results that provide insight into "why not?" and maybe even "where's the problem?" And therefore you'd design your implementation of the experiment with that in mind. Similarly, if you were on an Agile team and were going to try out pair programming as a possible practice, you'd want to establish a way of gathering results that provide you with lots of data. Subjective observations from those involved would be good, but so would some form of productivity measurement that would allow you to see - more objectively - whether pairing up team members causes the team's velocity to go up, down or stay about the same.
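To make the pair-programming example concrete, here is a minimal sketch of the kind of objective measurement the paragraph describes: comparing average team velocity from sprints before the trial against sprints during it. The sprint figures and the `mean_velocity` helper are hypothetical, purely for illustration; a real team would use its own tracking data.

```python
# Hypothetical sketch: compare team velocity before and during a
# pair-programming trial. All numbers below are made up for illustration.

def mean_velocity(sprints):
    """Average story points completed per sprint."""
    return sum(sprints) / len(sprints)

baseline = [21, 24, 22, 23]  # velocity in the sprints before the trial
trial = [20, 25, 26, 27]     # velocity in the sprints during the trial

before = mean_velocity(baseline)
during = mean_velocity(trial)
change = (during - before) / before * 100

print(f"Velocity before: {before:.1f}, during: {during:.1f} ({change:+.1f}%)")
```

A simple average like this wouldn't settle the question on its own - a handful of sprints is noisy data - but agreeing on the measurement before the trial starts is exactly the kind of discipline the paragraph above argues for.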

The final thought I have on this topic involves the notion of sticking with it long enough to actually get a result. That may sound obvious, but I've lost track of the number of times I've seen people start to try something new and then abandon it partway along. One way to avoid this is to establish, right at the outset, what the duration of the trial will be, as well as how the results will be gathered and measured. Among other things, that sets expectations and can prevent the impression of thrashing that might otherwise form in the minds of those who didn't realize that it was only an experiment in the first place. This could mean a team trying something new while making sure their Product Owner understood that it might "only" lead to learning, or a group of executives trying out a new organizational structure with the intention of re-assessing it after six months to see if it was working.

When I was asked at a recent speaking engagement whether I thought Agile was suitable for industries outside of software development, I mentioned that I thought scientific research, for example, had always been "very Agile." The very nature of good scientific work, after all, is to form a hypothesis and then conceive a test - or series of tests - to either prove or disprove it. The best scientists understand that sometimes you can learn more from a "failed experiment" than from one that gives you the results you expected, and that this is a good thing. But that's only true if you're still paying attention by that point, rather than giving up or banging your head against the wall.

1 comment:

Derek Neighbors said...

If you aren't failing every day, you probably are not very agile. You can't do something better than you are currently doing it if you don't "try".

Trying assumes that there is a possible outcome of failure. I think that the biggest "risks" yield the greatest "rewards". So, the wilder the experiment, the better the likely yield when it succeeds.