Wednesday, June 17, 2009

The Software Development Cycle

With my recent contract now complete, I thought I'd post a few non-specific details about what it involved. While it's not completely germane to Agile, you'll see a few places where there was at least some overlap.

There were basically three parts to the contract:
  • review the organization's current software development process
  • compare that picture to what a more standard development process would look like
  • provide recommendations on changes or enhancements that could be implemented to make things even better
Initially there had been some thought of doing a fourth part, as well: implementing some of the recommendations alongside the development team. However, virtually none of that came to be, through a combination of factors, the most relevant of which was that the developers were in the middle of a high-priority project that couldn't be delayed.

In order to get a clear idea of how development happened in this company, I read over a fair number of documents relating either to standards or to recent projects within the organization. From that exercise, along with a few strategically timed interviews with personnel there, I was able to figure out what types of artifacts were produced, as well as which practices were followed. I'd describe what I saw as a fairly lightweight model, which is what I'd expected to find, given the group's age (fairly young), size (6 to 8 developers) and position within the company (an Information Technology group within an industry that's far removed from being an IT provider). In some areas, it was really only rigour that was missing; in others, there was simply a lack of experience with "best practices for large-scale software development", let's say. And that's the sort of thing you bring a consultant in for, anyway.

For the comparison aspect, my boss suggested a Gap Analysis in the form of a spreadsheet, which I thought was quite appropriate. The U.S. arm of the company had previously brought in-house a very comprehensive System Development Life Cycle model for the American IT groups (which are considerably larger than the Canadian one located here), and I was happy to use that as my baseline for the comparison. This model was very thorough (and very gated, as befits a Waterfall approach), and so it provided an excellent "ideal" for me to measure their setup against. It included roughly 150 artifacts or activities, and so I was able to evaluate my client's methodology in terms of which of those items they included and (in a few cases, at least) how their versions measured up to the baseline.
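
To give a flavour of what the spreadsheet captured, here's a made-up illustration of a single row (the actual column headings and wording were specific to the client):

Artifact/Activity: Release Plan
In the SDLC baseline? Yes
In the current process? Partially - informal notes, rather than a standard document
Gap: No template; scope, dependencies and testing windows not consistently captured
Priority: High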

Once the Gap Analysis was complete, I identified what I believed to be the highest priority gaps. This ended up being just over 20 items. For each, I then wrote up a Recommendation, in the following format:

Recommendation: xxx
Benefits: xxx
Approach: xxx
Effectiveness Metrics: xxx

In other words, I provided the what, the why, the how, and some notion of how to measure the results to make sure that they weren't wasting their time making the change.
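
To make that more concrete, here's a hypothetical entry in that format (my illustration for this post, not lifted verbatim from the actual deliverable):

Recommendation: Produce a thorough Release Plan at the start of each release, using a standard template.
Benefits: Scope, dependencies and testing windows become visible up front, rather than being discovered mid-stream.
Approach: Adopt the Release Plan template, fill it in during Release Planning, and review it with both the development team and the business area.
Effectiveness Metrics: Number of unplanned scope changes per release; amount of rework caused by missed dependencies.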

That Gap Analysis spreadsheet, which included the 20-some Recommendations, was the primary deliverable from my contract. However, in talks with my boss we decided that a couple of other concrete artifacts would be desirable to come out of this, as well. The first was a Microsoft Project plan: essentially a skeleton that included all of the steps from the "ideal" SDLC model, customized somewhat to their environment, so that those steps would be easier to remember in the future. Also, because I considered a thorough and comprehensive Release Plan to be a prime example of where some additional rigour would help them out, I provided a template for that, too.

Those three documents were reviewed with most of the development staff on my last day there, and were extremely well received. I had tried to incorporate as much feedback from the staff as I could into the content of my deliverables, and I think they appreciated that. One example was the inclusion of a recommendation that features be planned in a way that allowed them to be completed more serially than they had been in the past, so that at least some of the features could be tested before being passed on to the business area (who, in that environment, act as "QA"). That, and some of my unit testing suggestions, definitely benefited from my two years as Agile Manager!

All in all, it was a good learning opportunity for me, and I think that the clients received good value from my 22 years of IT experience. I think that if the timing had been different, such that the people there who needed to play a bigger part had had more time available, then it could have gone better and more immediate gains might have been realized. But as the poker enthusiasts say: you have to play the hand you're dealt, and that's what I did.

Sunday, June 7, 2009

Separating The How From The What

Poor AgileBoy!

One of the most common forms of "oppression" that I used to hear about in my role as Agile Manager involved Product people (or other representatives of management) going well beyond their mandate of providing "the what" as they ventured into the thorny territory of "the how". I doubt that it was ever intended maliciously, no matter how much conspiratorial spice the development team members attributed to it... but if that's true, then why did it happen?

First, though, I suppose I should explain what I'm talking about. In most Agile methodologies, there's a principle that says that the Product Owner indicates what the team should work on - in the form of a prioritized Product Backlog - and the team itself figures out how best to deliver it. Thus, "the what" is the province of Product and "the how" is up to the team. It seems like a pretty sensible, straightforward arrangement, and yet it often goes off the rails. Why?

One cause for a blurring between the two can be Product people with technical backgrounds. We certainly had that in spades with our (aptly named) Technical Product Owners, most of whom were former "star coders" who had moved up through the programming ranks into management earlier in their careers. I'm sure it must have been very difficult for some of them to limit themselves to "the what" when they no doubt had all kinds of ideas - good or bad - about what form "the how" should take. In the most extreme cases, I suspect that there were times when our TPOs actually did have better ideas on how to implement certain features than the more junior team members did... but dictating "the how" still wasn't their job. In that sort of setup (where the Product area is made up of technical gurus), a preferable arrangement would have been for those TPOs to coach the team members on design principles and architectural considerations in general, as part of the technical aspect of their title. That healthier response would still have accomplished what I assume was their goal - leading the team to better software choices - without overstepping their bounds and causing a schism between Product and development.

Another factor that can lead people who aren't responsible for "the how" to exert their influence over those who are is a simple lack of trust. Again, I saw this in action many times as the Agile Manager. I'm not going to paint a black-and-white picture and pretend that it was always warranted, nor that it was never warranted. But the important point here is that dictating "the how" downward in that manner usually did more harm than good, and it certainly did nothing to close the trust gap. For one thing, having "the how" taken out of their hands gave those developers who were already skeptical about management's support of Agile more ammunition with which to say, "See? It's all just lip service! We're not really being empowered here!" It was also true, in some cases at least, that "the how" being supplied wasn't particularly well suited to the new development environment that was emerging. What with all of its automated tests, refactoring and focus on peer reviews, the code bore less and less resemblance to what "the old guard" remembered with each passing Iteration. I experienced this first-hand: I'd make some small suggestion about how to tackle something, thinking it might be helpful, only to get a condescending look and an "Uh, it actually doesn't work like that anymore" from someone half my age. Yeah, that's tough on the ego, but it's also completely the right response.

As for why the trust gap was there in the first place, well... as both of my AgileMan books lay out in gory detail, lots of mistakes and missteps were being made. It wasn't exactly smooth sailing, let's say. The result, however, was that some among management apparently felt that the teams couldn't be trusted, and so they attempted to exert more and more control over them in whatever ways they could. Had they simply applied that energy to improving the requirements-gathering techniques being used within our organization, though, it would've benefited us more. Providing clear sets of Acceptance Criteria and good, well-thought-out constraints would certainly have done more to improve the product features that our teams were developing than any amount of "meddling" in the mechanics of the software development process itself ever did.
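
To illustrate what I mean by clear Acceptance Criteria, here's a made-up example for a hypothetical "password reset" feature (not one from that organization):

Given a user who has forgotten their password,
When they request a reset from the login page,
Then an e-mail containing a single-use reset link is sent to their registered address,
And the link expires after 24 hours.

Criteria like those pin down "the what" precisely, while saying nothing at all about "the how" - no mention of which classes, tables or libraries to use.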

I imagine that it's often difficult to keep "the how" separated from "the what" as you work in an Agile environment, but it's critical that you do. After all, the people doing the work are in the best position to know how to do it, and that principle has helped make Toyota one of the premier auto manufacturers in the world. We could do a lot worse than to follow their example in the Information Technology business!