Friday, April 24, 2009

Experimentation Takes Discipline


One of the key principles of any Agile methodology involves "inspecting and adapting." For those who are in the position of Agile Coach or Scrum Master, that sometimes gets represented in the mythical handbook as "encouraging experimentation." I personally think that it's one of the least understood and (perhaps accordingly) most poorly-applied Agile concepts, for a variety of reasons.

For most of us, success is almost always a goal. We want to do well at whatever we take on. And why not? Our performance reviews focus on how successful we were in various aspects of our job; our sense of self-worth is often dependent on how successful we believe others perceive us to be; and it simply feels much better to "win" than to "lose." Unfortunately, though, experimentation brings with it the ever-present prospect of at least the appearance of failure, especially if one isn't careful about how one approaches such things.

For example: if you take a bit of a gamble on something new in the course of your job (or personal life) and can see only two possible outcomes - it either works, or you're screwed! - then it's not hard to imagine that a "bad" result might make you less likely to ever try something like that again. The problem could be that there was simply too much riding on the outcome to justify the risk in the first place, or that you didn't frame the experiment properly. For the latter case, a better setup would have been: "either this works and I have my solution, or I learn something valuable that gets me closer to finding the right answer." As long as learning and moving forward from your new position of greater enlightenment (what some short-sighted folks might indeed call "failure") is an option, then experimentation is a good avenue to go down.

What that implies, however, is that there's at least some degree of discipline involved. Randomly trying things until you find one that works, for example, isn't experimentation. What's required are some parameters that say, right from the start, what you're intending to learn from the experience. Determining whether your current architecture can support the new load that you're considering adding to it is an experiment that you might undertake, but ideally you'd like to get more than simply a "yes/no" answer from it. "Yes" is fine as an outcome (woohoo!), but you'd really like to elaborate on "no" with some results that provide insight into "why not?" and maybe even "where's the problem?" And therefore you'd design your implementation of the experiment with that in mind. Similarly, if you were on an Agile team and were going to try out pair programming as a possible practice, you'd want to establish a way of gathering results that provide you with lots of data. Subjective observations from those involved would be good, but so would some form of productivity measurement that would allow you to see - more objectively - whether pairing up team members causes the team's velocity to go up, down or stay about the same.
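
To make that concrete, here's a minimal sketch (in Python, with entirely invented velocity numbers) of the kind of before-and-after comparison a team might run during a pairing trial. It's illustrative only, but it shows how little machinery the measurement side actually requires:

```python
# Hypothetical sketch: comparing a team's velocity before and during
# a pair-programming trial. The numbers are invented for illustration;
# real data would come from your iteration tracking.
from statistics import mean

baseline = [21, 19, 23, 20]  # story points completed per iteration, solo
trial = [18, 22, 24, 23]     # story points completed per iteration, pairing

def summarize(label, velocities):
    print(f"{label}: mean={mean(velocities):.1f}, "
          f"min={min(velocities)}, max={max(velocities)}")

summarize("Baseline (solo)", baseline)
summarize("Trial (pairing)", trial)

change = (mean(trial) - mean(baseline)) / mean(baseline) * 100
print(f"Change in mean velocity: {change:+.1f}%")
```

Pair the numbers with the subjective observations, of course; neither alone tells the whole story.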

The final thought I have on this topic involves the notion of sticking with it long enough to actually get a result. That may sound obvious, but I've lost track of the number of times I've seen people start to try something new and then abandon it partway along. One way to avoid this is to establish, right at the outset, what the duration of the trial will be, as well as how the results will be gathered and measured. Among other things, that sets expectations and can prevent the impression of thrashing that might otherwise form in the minds of those who didn't realize it was only an experiment in the first place. This could mean a team trying something new and making sure that their Product Owner understands it might "only" lead to learning, or a group of executives trying out a new organizational structure with the intention of re-assessing it after six months to see whether it's working.
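
One lightweight way to enforce that discipline is to write the experiment down before it starts. Here's a hypothetical sketch of such an "experiment charter" - the field names are my own invention, not taken from any particular tool - capturing the hypothesis, the agreed duration and the metrics up front:

```python
# Hypothetical "experiment charter": record, before the trial starts,
# what it is, how long it runs, and how it will be judged.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str           # what we expect to learn or see
    duration_iterations: int  # trial length, agreed at the outset
    metrics: list = field(default_factory=list)  # how results are gathered

pairing_trial = Experiment(
    hypothesis="Pairing keeps velocity steady while reducing defect counts",
    duration_iterations=4,
    metrics=["velocity per iteration", "defects found after each iteration",
             "subjective survey of the pairs"],
)
print(pairing_trial)
```

Sharing something like this with the Product Owner (or the executives) is exactly what sets the expectations I mentioned above.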

When I was asked at a recent speaking engagement whether I thought Agile was suitable for industries outside of software development, I mentioned that I thought scientific research, for example, had always been "very Agile." The very nature of good scientific work, after all, is to form a hypothesis and then conceive a test - or series of tests - to either prove or disprove it. The best scientists understand that sometimes you can learn more from a "failed experiment" than you can from one that gives you the results that you expected, and that that's a good thing. But that's only true if you're still paying attention by that point, rather than giving up or banging your head against the wall.

Wednesday, April 15, 2009

The Story Point Workshop


I've been meaning to write up a description of the Story Point Workshop that I designed and delivered in 2007 and 2008, but hadn't gotten around to it... until now. I figure that there's always a chance someone will stumble across this blog and be interested in getting some training on Story Points, and so I really should have something here by way of a sales pitch. If you're reading this and would like to know more, please contact me at AgileMan@sympatico.ca and we can discuss it further.

The workshop that I created is intended to be appropriate for anyone involved in the development of software: programmers, testers, integrators, project/program managers, product owners, team coaches and even executives. In other words, you have to possess at least a basic understanding of the process of software development in order to be able to complete the exercises. Note, however, that you don't have to be of a technical bent, as there's nothing in the workshop that requires you to design, program, or even debug any software.

My goal in providing the workshop has always been to increase the attendees' understanding of, and comfort with, Story Point estimates. At the end of the day, I want everyone who took part to feel confident that they can use Story Points themselves - either as producers of them, or consumers - as part of the release planning process.

To get a large group of people, often with disparate skills and perspectives, all the way to that goal, I use a format that's proven to be both effective and fun (based on the results of feedback surveys that I've done at the conclusion of each session). Basically, the day-long workshop goes as follows:

Part 1: Introduction of Story Points

Here I cover the concepts of relative sizing (as compared to absolute) and what Mike Cohn characterizes as "Estimate size; derive duration." I do this through a combination of slide presentation and interactive exercises, the latter of which allow the group to do some estimating on things not at all related to software (e.g. figuratively moving piles of dirt). This 60- to 90-minute section ensures that everyone has a solid foundation of understanding around the conceptual portion of Story Points before we move on to applying that knowledge.
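
As a worked example of "Estimate size; derive duration" (with made-up numbers, since the workshop exercises use piles of dirt rather than code): once you have a total size and a velocity, the duration falls out arithmetically:

```python
# "Estimate size; derive duration" in miniature. All numbers are
# invented for illustration.
backlog_size_points = 120    # sum of the Story Point estimates
velocity_per_iteration = 20  # points the team finishes per iteration
iteration_length_weeks = 2

# Ceiling division: a partial iteration still costs a whole one.
iterations_needed = -(-backlog_size_points // velocity_per_iteration)
print(f"Iterations needed: {iterations_needed}")
print(f"Estimated duration: {iterations_needed * iteration_length_weeks} weeks")
```

The point of the section is that the team estimates only the sizes; the calendar answer is derived, and it changes as the measured velocity changes.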

Part 2: Estimating in Story Points

For this part of the day, which takes up the majority of the workshop, I introduce a new application called "the Meal Planner." This is a Java program that I wrote years ago, but the attendees don't really see it or any of the code for it. Instead, what they see is a description of what it is, as well as a set of additional features for it. Because the Meal Planner is a simple and fun concept that anyone can grasp within a few minutes, it provides a perfect backdrop to use in understanding how to Story Point items. No one in attendance has any history with it, and therefore it's unlikely that anyone would dominate the proceedings based on any real or imagined "expertise" with it.

The attendees are then broken up into smaller groups of 5 to 7 people and physically separated (by team) as much as possible. Each team is tasked with coming up with Story Points for each of the items on the Meal Planner's product backlog. I serve as the Product Owner for each group, running back and forth between them in order to answer questions or offer clarifications to the requirements that they've been given. This exercise typically takes between two and three hours to complete, as the teams often get off to very slow starts while they go through the "forming/storming/norming/performing" cycle.
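
For readers who haven't encountered Story Point scales before: the teams don't estimate on a continuous scale, they pick from a restricted series. Here's an illustrative Python helper - the scale shown is a common Fibonacci-like one, and the backlog items are stand-ins, not the actual Meal Planner requirements - that snaps a raw gut-feel number onto such a scale:

```python
# Illustrative only: snapping raw estimates onto the restricted
# Story Point scale many teams use. Items and numbers are invented.
SCALE = [1, 2, 3, 5, 8, 13, 20, 40]

def to_scale(raw):
    """Return the scale value closest to a raw estimate."""
    return min(SCALE, key=lambda s: abs(s - raw))

backlog = {"Add shopping-list export": 4, "Weekly menu view": 11,
           "Nutrition totals": 7}
for item, raw in backlog.items():
    print(f"{item}: raw {raw} -> {to_scale(raw)} points")
```

In the actual exercise, of course, the snapping happens through discussion rather than arithmetic; the coarse scale is what keeps teams from arguing over false precision.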

Part 3: Comparing the Results

When all of the teams have produced their Story Points for the complete set of backlog items, I bring everyone back together, post their results up on the walls and encourage each team to review what the others came up with. Then I have a representative from each team briefly go through their rationale for each estimate, allowing the other teams to comment or question them. Sometimes the teams will have arrived at similar estimates; often they'll be quite different. We discuss what sort of factors contributed to those results, and build up a list of "What Worked and What Didn't" that helps the attendees understand what sort of things contribute to better estimating versus what works against that goal. I also usually provide my own Story Point estimates (after everyone else has), based on having written the application. All told, this section usually requires about an hour to get through.
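
A simple way to see where that discussion should focus is to put the teams' numbers side by side and flag the big disagreements. This sketch is illustrative only - the items and estimates are invented, not taken from a real session:

```python
# Illustrative comparison of two teams' Story Point estimates for the
# same backlog items, flagging where they diverge most.
team_a = {"Menu search": 3, "Grocery list": 8, "Calorie tracking": 13}
team_b = {"Menu search": 3, "Grocery list": 5, "Calorie tracking": 40}

for item in team_a:
    a, b = team_a[item], team_b[item]
    ratio = max(a, b) / min(a, b)
    flag = "  <-- worth discussing" if ratio >= 2 else ""
    print(f"{item}: team A={a}, team B={b}{flag}")
```

The items where estimates differ by a factor of two or more are almost always where the most valuable conversations happen.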

There's a slight variation on the above that I've done in one instance, which proved quite insightful for all involved. I had 2 groups in that case, and so I decided to spend most of my time with one, at the expense of the other. Also, I'd planned it so that the "AgileMan-starved" group got a different set of requirements than the other group, with slightly less detail in it. The results of this disparity were striking, to say the least: in many cases, the "starved" team produced estimates that were completely at odds both with what the "better-fed" group came up with and with the "expert" Story Points that I provided. I think this sort of thing is probably only appropriate if the workshop is being set up for an environment where the product people are struggling to fulfill their responsibilities (as was the case where I did it).

It's possible to get all of the above done in as little as about four and a half hours, although that's awfully rushed and not very conducive to learning. I prefer to schedule it for either five or six hours, in order to allow for regular breaks and to give people the opportunity to reflect on developments as we go along. Something like a 9:30 start, with lunch brought in, and a wrap-up by 4:00, works best. I've done the workshop with as few as a dozen people, and as many as 25 attendees, and both extremes seem to work fine. With fewer than 10 to 12 people, it's difficult to get 2 groups that can compare results. More than 25 would provide quite a logistical challenge for me to facilitate once the smaller groups are formed.

When I took this workshop on the road to a company in the San Francisco area in late 2007, I did 3 sessions there (one per day) and received an average score of 8.7 (out of 10) in terms of "overall quality." One of the company's Vice-Presidents recently recommended me to a colleague based on those workshops, saying that I had "received the most positive reviews from folks that any trainer has at our site, and we've had quite a few." So I like to think that it left a good impression!

Friday, April 10, 2009

"My Life As A PM Is Over: The Company Just Went Agile!"

Last night was my presentation on Agile Project Management to the local chapter of the Project Management Institute, and based on the electronic polling results that were collected right after I finished, I think I did fairly well. Each member of the audience was given a handheld device about the size of a small calculator and then asked to respond to a series of statements along the lines of: "The topic presented by tonight's speaker was informative and interesting" and "Tonight's speaker appeared to be well-prepared" by selecting, on their device, either "1" (strongly agree), "2" (agree), "3" (disagree) or "4" (strongly disagree). For those who were having trouble figuring out how to work their new toys (since this was the first time they had tried this approach), I helpfully offered the advice, "Just press the button labeled '1'!" In most cases, I received scores where the total of "1"s and "2"s put me at or above the 80% mark, which was gratifying to see.
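
The arithmetic behind that score is trivially simple, but for the curious, here's a sketch with a fabricated response list (the real data stayed with the chapter's polling system):

```python
# Counting the share of "1" (strongly agree) and "2" (agree) responses.
# The response list below is fabricated for illustration.
responses = [1, 2, 1, 3, 2, 2, 1, 4, 2, 1, 1, 2, 3, 1, 2]

agree = sum(1 for r in responses if r in (1, 2))
print(f"Agree or strongly agree: {agree}/{len(responses)} "
      f"= {agree / len(responses):.0%}")
```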

As for the material itself, I had worried that I had too much to cover in 2 hours, but somehow it all worked out, even with a 15-minute break midway through. I spent the first half introducing Agile, and then called for a short break. However, I asked that, during the break, everyone write down how they thought they might react if they went into work on Monday and their boss walked up to them and announced, "Guess what? We're going Agile in a month!" Just before resuming, I collected their responses and read them out for everyone to hear. Here are some representative samples of what I got:
  • "Agile? Can we get the Functional Managers to support it?"
  • "Good, [but] first educate the clients."
  • "Let's go!"
  • "Can we still perform accurate Earned Value analysis?"
  • "Great, but how do I co-locate with my global team?"
  • "1 week cycles??"
  • "But what's wrong with RUP?"
  • "Great, let's try it. As long as we follow ALL of the steps, not only parts of it."
  • "How do we break deliverables into iteration-sized chunks?"
  • "Risk of scope creep."
  • "How do you budget??"
  • "What are the signs that we NEED Agile?"
and my personal favourite:
  • "Do Agile? We don't even do Project Management!"
What impressed me most about these results, aside from the variety, was that it was pretty clear that everyone had been paying attention to what I'd been saying up to that point and attempting to rationalize it against their own experiences and work environments. Virtually no one was writing off the notion of "going Agile" as some crank thought experiment that they could laugh off or simply ignore. Also, I'd say that about 1/4 of the responses were of the "Let's go!" variety, showing that there was at least some buy-in among a group that was mostly hearing about Agile for the first time.

During the Q&A session at the end, many of those same concerns were expressed once again. One woman was sure that no budget would ever be approved without a guarantee that the work done would match what was being approved, to which I said, "If that sort of arrangement is working out for you right now, then I don't know why you'd ever want to move to Agile. The organizations that move to an Agile methodology are the ones where things change considerably between initial concept and final installation." Another fellow was understandably concerned about how to create automated tests to replace the extensive manual testing that they currently do at the end of each project, and I said that that was one area where you'd want to allocate time in order to research the new products and services that are coming out on the market for that purpose. "But in the end," I said, "you really don't have any choice: you need to have automated testing, or you'll never be able to do short cycles." I also got asked whether Agile was really only appropriate for "UI companies", to which I responded by describing some of the very different types of organizations I've heard of - or seen! - doing Agile.
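
For anyone wondering what "automated testing" means at its most basic, here's a minimal pytest-style sketch; the function under test is a stand-in, but the point is that checks like these run in seconds, every iteration, with no manual test pass:

```python
# Minimal automated-test sketch (pytest conventions). The function
# under test is a stand-in, not from any real project.
def total_points(estimates):
    """Sum a list of Story Point estimates."""
    return sum(estimates)

def test_total_points():
    assert total_points([3, 5, 8]) == 16

def test_total_points_empty():
    assert total_points([]) == 0
```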

The moderator eventually had to call an end to the Q&A, as I think it might've gone on for quite a while longer otherwise. I received a $25 Chapters gift card for my troubles (which was nice, and unexpected) and eventually took off home to unwind. Sadly I didn't make any book sales, despite having several copies with me for just that reason. Maybe next time!

Friday, April 3, 2009

On The Technical Debt Topic

Just a quick note to point interested readers toward this very good post on that subject. It's well worth the 5 minutes or so that it'll take you to read.

Wednesday, April 1, 2009

The Project Manager "Problem"

I've been invited to speak at the next scheduled meeting of the local chapter of the Project Management Institute. It was requested that I make my presentation "about Agile" but the rest of the direction was pretty much left to my imagination and questionable discretion. As such, I've come up with material under the banner of:

"My Life As A PM Is Over: The Company Just Went Agile!"

Now, that title is most certainly intended to be attention-grabbing, humorous and perhaps even a little ironic. That latter characteristic stems from the somewhat nebulous regard in which some in the Agile community hold Project Managers. At one extreme fringe of perspectives on the topic is the one that says, essentially, "Now that you've gone Agile, you can get rid of all your PMs!" A more moderate stance shows up in some of the Scrum literature, which advocates for having most or all of your Project Managers transition into the Scrum Master role. All the way at the other edge of possibilities lies the approach that I saw undertaken while I was the Agile Manager in my last job: keep the PMs as PMs, and figure out how to make that predictive, deadline-focused paradigm work in an iterative, adaptive environment. (Those who've read my two AgileMan books will recall that, many good intentions notwithstanding, we struggled greatly in integrating those two very different world views.) Even with that attempt to keep things somewhat as they were in the PMO, though, I think all of the people involved would agree that the PM role changed considerably as we moved through our Agile transition. Therefore, in all cases, there's some bitter irony to saying, "My life as a PM is over..."

For my material, I'm starting with what I presented to the 3rd year Project Management course last November. There's quite a bit of overlap, it seems to me. In both cases, I need to start off by describing what Agile is, under the assumption that some won't have even heard of it. Then, I'd be foolish not to acknowledge right away that many of the principles and practices of Agile can, and probably should, strike some project management types as idealistic, at best, and heretical, at worst. And finally, I want attendees to get a sense of how the role can still add value and be rewarding within an Agile framework. The biggest difference, of course, is that next week I'll be talking to people who have presumably been out in the real world actually running projects for a number of years now. That should mean that they're even more cynical, or at least skeptical, about the practicality of an adaptive approach. Will customers/product areas ever really sign up for a system in which change is expected throughout the process? Won't scope creep simply cause projects to run on forever? How can you let those developing the software also test it? And so on.

In providing feedback to Mike Cohn on his next Agile book recently, one of the places where I really challenged him was around his treatment of what to do with Project Managers. I thought that the draft version I reviewed made too light of the problem, not acknowledging just how much of a leap of faith is required in several aspects of his recommendation. If a company's culture has a strong dependency on micro-tracking of deliverables, who's really going to take that up if the PMs are re-purposed to other duties? Will most PMs, some of whom have spent considerable time and money getting their PMP certification, really want to risk losing it by directing themselves to activities which won't keep them current as PMs? How many individuals in the PMO really want to become Scrum Masters / Agile Coaches? I think those are just some of the vital questions that need to be addressed on this topic, and I believe too many of us have glossed over them in the past. I may not know the answers, but at least now I'm aware that the discussions have to happen.

It should be interesting to see how all of this goes over with the group next week! If you never hear from me afterward, you can draw your own conclusions...