Thursday, May 31, 2007

Metrics in the Lean and Agile World

I ran across an interesting list of metrics from this site. The author of the post had gathered these in a brainstorming session with his organization to define potential metrics for how they were doing with Agile. The author also poses a question to his readers on what metrics could be used by the team to possibly deceive management. It's an interesting read. Here's the list of metrics that they came up with:

Short Iterations - Planning Game, Variable Scope:
* Relative velocity (measures accuracy of estimation and amount of developer time spent on other activities)
* Number of stories planned for the delivery vs number delivered
* Overflow of stories (measures accuracy of estimation)
* Degree of story change (measures activity in the business area and stability of business process)
* Number of problems fixed from previous iteration vs number of new stories implemented

Continuous Integration:
* Time to build
* Number of successful/failed builds

Available Customer:
* Hours spent with developer/customer per week
* Speed of feedback vs resolution
* Number of raised features vs number of implemented features

Refactoring:
* Lines refactored
* Time spent refactoring vs new code
* Most frequently changed files
* Number of files changed per feature

PM Tools:
* Effort/time spent per feature
* Actual effort vs expected effort

Frequent delivery into Production:
* Amount of defects not caught by test/review

Regular demos for feedback:
* Degree of incorporation of feedback into iterations (measures accuracy of story gathering and accuracy of implementation)

Lessons learned for each iteration:
* Traceability of lesson as applied to future iterations/practices

Test-Driven Development:
* Degree of test case coverage
* Test case failure rates


Many managers would look at this list and think it's pretty good. In fact, they would probably think of some other measures to add to it. However, what is the return on that investment (ROI)? Some of these metrics might be easy to gather; others could require more tools or manual effort. Even if you do get the information, how are you going to use it? If you don't know how to interpret the data, you may find yourself managing the wrong things.

For these reasons, I have always struggled with finding useful metrics that are easy to gather and promote positive behavior. Metrics seem to be used more to punish bad behavior than to encourage better behavior. Lean software development advocates are pushing to measure up. What does that mean? Metrics are typically managed at too low a level in the organization. As a result, companies come up with dozens of metrics (similar to the list above) and ultimately get consumed with too many things to keep track of. Instead, why not find metrics that give you just enough information, measured in terms of customers and deliveries? What if these metrics encouraged positive behavior while helping you know where to look in more detail for inefficiencies?

Following are the measures that I am considering in our adoption of Lean and Agile. Note, I haven't tried these out yet, as the focus right now has been more on getting the practices and principles down before looking at efficiencies. However, based on my reading and understanding of both Agile and Lean, here is my list:

Story Cycle Time (Measuring One Piece Flow): Average time from Customer Request to Customer Implementation for each Story. Encourages smaller pieces (Stories) that bring value. Encourages finding ways throughout organization to deliver those pieces faster.

Iteration Efficiency (Measuring Value vs. Waste): Percentage of Time spent on Value (Customer Features) vs. Waste (Defects, Refactoring, anything else the customer doesn't believe provides value). Encourages those activities in which the customer sees value. Discourages those activities that the customer sees as waste.

Iteration Return on Investment (ROI) (Measuring Value vs. Cost): Ratio of Total Value (Customer and Business Value) against what it took to implement the stories. Encourages prioritizing things based not just on value but also against the cost of implementation. Confirms to the team that they are working on the most important things.

Story Acceptance Measure (Measuring end user acceptance): Average acceptance (user rating) of each Story after it has been implemented. Encourages making sure the customer needs were really met. Encourages quality of solution from a customer standpoint. This may be the most difficult to gather because it requires polling at least a good sample of customers.
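To make these more concrete, here is a minimal sketch of how the four measures could be calculated. The story records, field names, and numbers are hypothetical; the point is only that each measure reduces to simple arithmetic over data the team already has (request and delivery dates, rough time tracking, value/cost estimates, and customer ratings).

```python
from datetime import date

# Hypothetical story records; the field names and numbers are illustrative only.
stories = [
    {"requested": date(2007, 4, 2), "delivered": date(2007, 4, 20),
     "value": 8, "cost": 5, "rating": 4.5},
    {"requested": date(2007, 4, 9), "delivered": date(2007, 5, 4),
     "value": 5, "cost": 8, "rating": 3.0},
]

# Story Cycle Time: average days from customer request to customer implementation.
cycle_time = sum((s["delivered"] - s["requested"]).days for s in stories) / len(stories)

# Iteration Efficiency: share of time spent on value vs. waste (assumed hour totals).
value_hours, waste_hours = 120, 40
efficiency = value_hours / (value_hours + waste_hours)

# Iteration ROI: total value delivered relative to what it cost to implement it.
roi = sum(s["value"] for s in stories) / sum(s["cost"] for s in stories)

# Story Acceptance: average customer rating after implementation.
acceptance = sum(s["rating"] for s in stories) / len(stories)

print(f"Cycle time: {cycle_time:.1f} days, Efficiency: {efficiency:.0%}, "
      f"ROI: {roi:.2f}, Acceptance: {acceptance:.1f}/5")
```

Even a rough spreadsheet version of this would tell you where to dig deeper without drowning the team in dozens of low-level measurements.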

What do you think? Did I miss any measures at the organizational level?

Wednesday, May 30, 2007

Team Power

In both Agile and Lean circles, there is constant emphasis on the team rather than individual efforts in resolving issues, making decisions and suggestions for improvement. "Two heads are better than one" rings true! Over at LeadingAnswers blog, the latest post called "Team Solving: The Under Utilized Power" demonstrates this fact. Here is a synopsis of the findings:

Power #1: By asking the team for a solution we inherit consensus for the proposal. The fact that the solution came from the team and not me, meant I did not have to sell it, it already had support. It is easier to steward a sub-optimal solution with good support to a successful outcome than build support for an optimal solution and ensure its successful execution. If challenges arise and people subconsciously ask “whose bright idea was this?” the answer comes “Oh yeah, it was ours” and they are more likely to continue.

Power #2: Accessing a broader knowledge of the facts. Team members are closer to the details and bring additional insights other than your own. I did not know that the business folks bowled, this was collective knowledge (Chris knew three users bowled and Julia knew two others did also) together we found a new piece of useful information. Asking the group taps the collective knowledge of the team.

Power #3: Solutions are practical. Anyone who has worked hard to craft a solution only to be told “That will not work here because...” will know how frustrating and disheartening these words are. Team-sourced solutions have been vetted for practicality and, because they were created internally, solutions have already been found for implementation issues.

Power #4: When consulted people work hard to generate good ideas. The simple act of asking for suggestions engages team members beyond the role of “Coder” or “Tester”. People appreciate having their inputs valued and generally work hard to create innovative and effective solutions. Treating workers as interchangeable resources is a poor model inherited from the command-and-control methods of the industrial revolution. Leading companies such as Toyota and 3M recognize that their best ideas come from inside their companies and we need to make use of this intellect. It is partly due to these methods that these companies innovate better, have higher quality products, and better labor relations.

Power #5: Asking for help shows confidence not weakness. Asking for ideas and solutions to problems is not a sign of incompetence or an inability to manage. Just because we ask for input it does not follow we are dumb. Instead it demonstrates valuing the opinion of others, and being thoughtful. In essence it demonstrates how all problems should be tackled, which is the next power.

Power #6: Seeking ideas models desired behavior. Project managers have a leadership role of “modeling the desired behavior” i.e. behave as we wish others to behave. If we stay silent, make decisions with incomplete awareness of the facts, and do not ask for help when we need it, what message is this sending to the team? Well, it is an obvious message that we expect team members to behave the same way and work in isolation. Lots of time and money is wasted on team building activities that are then eroded by management-in-a-vacuum.


Here are also some cautions where the team can actually lose power:

Caution #1: Real problems - We should use this on real problems only, not on which brand of printer toner to buy. Remember we are modeling desired behavior and if people take it too far and consult their team mates when deciding what color dialog box to create; no work will get done. It is a tool for when you are stuck and the problem is important.

Caution #2: Poor team cohesion – If the team is fragmented and has opposing groups then resentment for “fixing their problems” will undermine the process. We need to get the team aligned for team solving to be most effective.

Caution #3: Team and Project Changes – Over a long period if a significant portion of the team changes, we need to re-canvas the team to ensure they are still on-board with the approach. Exercising the bright ideas of others is nearly as bad as not being consulted; we need to ensure people still agree this is a good policy. Likewise if the project changes significantly, we need to checkpoint in light of these new facts and get the team to review the approach.

Caution #4: Follow-through – once you ask for solutions make sure you follow through on them with execution. It is pretty demoralizing to be asked to work on a solution and then see it wither. It is fine to go back with implementation problems that need to be solved, but don’t waste people’s time by asking for input and then ignoring it.

Friday, May 25, 2007

Exploring what makes Scrum successful

Alan Shalloway from the Net Objectives Thoughts blog is digging a little deeper into the "soul" of Scrum to truly understand why Scrum works in his post "Challenging why (not if) Scrum works". Please read his post in more detail, but here are some of the reasons he came up with:

I have repeatedly heard that “Scrum succeeds largely because the people doing the work define how to do the work” (see From The Top by Ken Schwaber for his full text). However, I do not think that is even a third of why Scrum works - and be clear, I am certainly not questioning that Scrum works - I just want to get at the why it works.

OK, so why does Scrum work? Remember, I am not challenging that it does, just asking why. I would say Scrum works for the following reasons:

* the iterative nature of Scrum development (including the re-assessment of priorities between iterations)
* the workcell nature of the Scrum team (everyone who needs to work together is together)
* the fact that many Scrum teams are given the chance to co-locate
* that most Scrum teams work on one project at a time
* that many old, cumbersome checkpoints/forms/status reports that used to be required are abandoned because the team is doing something new
* a focus on defining acceptance tests concurrent with requirements
* the availability of a product champion (product owner to Scrum enthusiasts)

I think this is a really good list; I would also add a couple of items to it:

* frequent communication of status and removing of impediments through the Daily Standups
* continued emphasis on feedback and improvement through sprint reviews
* the team nature of the Scrum team (everybody on the team should understand and commit to the sprint through sprint planning)

For those who are using Scrum, you are already familiar with these benefits. For those who have not moved to Agile, or are having problems with their practices, consider looking at Scrum to gain these benefits. Scrum is by no means a "silver bullet", but it has already proven to be very successful where it has been implemented.

Thursday, May 24, 2007

Balancing Perfection against Innovation

Jeffrey Phillips from Thinking Faster is struggling with finding the balance between the desire for perfection in products and the desire (sometimes by the same people) to provide innovative products. His post, "Trading off perfection and innovation", compares the two. Here's just part of his thinking on this (read more of his post to get everything):

On one hand, we have the constant demand for perfection. Six Sigma, Lean, eliminating mistakes and errors to create an incredible, consistent, useful product or service. On the other hand, the need to innovate, to create something new and take new risks to dramatically change a product or market. Both approaches are correct, and relatively mutually exclusive. What does a manager do?

When choosing perfection, we make a series of assumptions about the product or service, its competitors and our ability to keep up with the market. Once a product or service is perfected - if it ever really is - even a small change to the process or product can cause major disruptions for a customer, so we prolong the life of any product or process and minimize changes to achieve perfection. If you can dominate your market, or if perfection is important to your customer, then this is a great strategy. However, it assumes that there will be little or no disruption in the market, and that the customer demands perfection over innovation and new capabilities.

When you choose innovation, on the other hand, you will not be able to provide a perfect product or service. If your innovations are truly new, you can't determine in advance exactly what "perfection" is - you can only try to provide what you think customers will value and iterate from that point once you discover and uncover real wants and needs. This approach assumes that customers don't demand perfection, but do demand interesting, innovative products and services that solve unspoken or unmet needs. After all, if you can solve a problem that I have even fairly well that has never been solved before, I won't ask for perfection yet.

The real crux of the matter is that if your position is perfection, then you can't afford to make mistakes, and if you do, you need to quickly and completely correct them. On the other hand (aren't you glad we only have two hands) if your position is innovative, you can't afford not to make mistakes, and you want your customers to participate in and help you learn from your mistakes - you can't learn if you don't make mistakes.


In the end, you have to give up some level of perfection if you want something innovative, because the very definition of innovative is doing something that hasn't been done before (or at least not that often). You also have to be able to take on risks and make mistakes as part of the experimentation that comes with innovation. I have to agree with him. Too many times, I have seen people take the "tried and true" approach to a solution just to ensure they don't cause inefficiencies in hitting dates. Even though they would agree there is possibly a better solution, they aren't willing to pay the price to get there. I like Jeffrey's thinking here: your customers need to understand the tradeoffs, and the only way for them to do that is to be more a part of the process. The companies that can figure out how to do this will be the ones who provide innovative products that their customers believe are more than "good enough".

Wednesday, May 23, 2007

Commitment, Delivery and Customer Expectations

Since we have been on the road of adopting Agile and Lean in our organization, I have had increasing pressure from stakeholders to give them an idea of when we will deliver functionality beyond the current release. Our releases are around 4-6 months at this point, so they are asking for things that will be delivered towards the end of the year and into next year. One stakeholder wanted to go out even further than that (up to 24 months in advance)!

At first, I thought that they just wanted to get an idea internally by developing a high-level product road-map to help us with our priorities. However, my thought was that this road-map wasn't going to focus on dates. Why? Because that information would not have any sense of reality. This information is high-level at best, with not enough knowledge behind it to make a reliable prediction of what could be delivered. A better prediction won't happen until we do release planning for the release in which we are considering the future functionality. Even with that said, we really won't be able to commit to dates until we get into the details of the work during iteration planning.

So, there are really three levels of precision: "fine grain" in the current iteration (90% or greater chance of stories in the iteration getting done), "coarse grain" in the current release (60% or greater chance of stories in the release getting done - this number could go down if our releases were even shorter), then "swags" in future releases (40% or less of a chance of stories getting done in the expected time period - this number going down the further out you go). These aren't exact numbers but you get the idea.

What really worried me is that they wanted these long-term dates so they could make commitments to key customers on when future functionality would be delivered. When I discussed my concerns with the stakeholders and talked about the various levels of precision, they believed we could actually get better at our precision if we just planned a little more. However, that would become waste in Lean terms because we would have a lot of work (inventory) "in-progress", plus what we know now about something that will be started six or more months in the future WILL change once we get there. Even with all of that said, my greater concern was making commitments to customers before we can make commitments with the development teams that they can deliver. It should be the other way around in my opinion. Let's figure out what we can commit to and deliver, then start setting expectations to customers.

One of our vendors lives by this philosophy. If we ask for an enhancement and it isn't something they are committing to for the next release, they will give the answer "we'll put this in our backlog of priorities and consider this for a future release". Then, they will focus my attention on things of value that are coming in the next release and get my feedback on them. Even if my particular request isn't there, I find great value in what is about to be delivered. Sure, there may be some features I don't plan on using or care about, but there are usually features that I care very much about. As a customer, this vendor is setting my expectation based on what they know they can deliver rather than on what they hope they can deliver. Hopes don't cut it! If they had promised me a future date and then missed that date, I would have been much more frustrated as a customer. I would rather not know until they were really confident they could hit that date. I appreciate that this vendor has delivered what they promised and has built a reputation with me that they are making constant improvements to their product. Even if those improvements aren't everything that I have asked for!

This is just one of the bad habits we need to break by moving from Traditional development over to Agile. It wasn't that we were better at providing estimates and hitting dates using Traditional development, because we weren't (especially for features out several releases). Sure, we could try to hit those arbitrary dates by cutting scope, freezing scope changes, adding people, working overtime, cutting corners on quality, and other "tricks of the trade". However, would we really provide what the customer really wants? Isn't the goal of Agile to provide what the customer finds valuable, even if the cost of that value is sometimes difficult to determine? If the quality isn't there, they don't want it early. If the functionality doesn't really do what they expected, they don't want it early. Why this importance of hitting dates because of commitments to customers, when in the end the customers will be disappointed because what they got in return missed their expectations? Let's commit to dates first as a development team and "internal" customers, then set expectations with our "external" customers as we deliver what is important to them.

I believe we got into this dilemma because we developed a reputation of over-promising and under-delivering (because of the problems with Traditional development). Plus, what we would deliver (usually after 8-12 months) didn't really cut it with the customer and left them disappointed and frustrated with how long things take. Therefore, their lack of trust in us hitting dates causes them to push even harder for us to commit to completing functionality that is long overdue. We need to build trust again with our customers by delivering high-value and high-quality features more frequently in order to gain back a reputation of under-promising and over-delivering. This will take time, but in the interim we need to change the way we set future expectations with our customers. This may cause some short-term frustration, but that will quickly change as customers shift their focus to what is being delivered rather than what is promised to be delivered. After all, isn't that what really counts?

Monday, May 21, 2007

The Case for Story Point Estimates

I have been a fan of story points ever since I attended a seminar several years ago where Mike Cohn presented the concept. I never really trusted other estimating practices such as function points and time-based estimates. Why? Software development projects are rarely similar from project to project, yet these practices focus entirely on past experience. Therefore, to get a "reliable" estimate of time for every new project you need to gain a lot of experience. In other words, you have to figure out up-front how you will do the work. Not only does this take a lot of investment up front, it also does not account for the fact that the work you do later could change based on the work you do now. The estimates assume that nothing will change in the effort of doing the work, which is definitely not true in the Agile world. What I like about story points is that the focus is on the relative size of "things" rather than how they will be accomplished. As Mike would say, "Estimate size now, derive duration later".

Even though I am sold on story points, I have had much difficulty selling the concept and its value to others. I'm having an easier time with the development team, but really struggling with stakeholders/product owners. These groups always want to know as soon as possible when things will get done. Story points in themselves don't give you the answer; however, they help determine the velocity of the team, which eventually will help you better predict what could be part of a release. For various reasons, it is too much of a paradigm shift for some to accept an estimate that isn't time-based, yet they are quite comfortable when the team throws out some arbitrary number early in a release without much knowledge of how events are going to unfold.
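To show how "size now, duration later" plays out for stakeholders, here is a minimal sketch with made-up numbers: observed velocity converts the remaining story points into a duration range rather than a single promised date.

```python
import math

# Made-up numbers for illustration only.
remaining_points = 120           # story points left in the release backlog
velocity_history = [18, 22, 20]  # points completed in recent iterations
iteration_weeks = 2

avg_velocity = sum(velocity_history) / len(velocity_history)
fastest, slowest = max(velocity_history), min(velocity_history)

# Derive duration from size: a range, not a commitment to a single date.
best_case = math.ceil(remaining_points / fastest) * iteration_weeks
likely = math.ceil(remaining_points / avg_velocity) * iteration_weeks
worst_case = math.ceil(remaining_points / slowest) * iteration_weeks

print(f"Roughly {best_case}-{worst_case} weeks of work remain "
      f"(most likely around {likely} weeks).")
```

The range narrows as more iterations of real velocity data come in, which is exactly the conversation I want to have with stakeholders instead of defending an arbitrary early number.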

It looks like I'm not the only one dealing with this struggle. Tobias Meyer, on his blog Agile Thoughts, addresses some of the arguments others have had about using story points as estimates. Following are the arguments; read more in his post "Estimation: Time or Size" on how he addresses each one. Good stuff!

Argument 1: Estimating in hours allows a developer to measure his estimate against his actual, and use that data to improve his estimates in future.

Argument 2: Our customers/product owners don’t understand story points; they need to know how many hours developers are working so they know how much the work will cost.

Argument 3: Story Points will map to time anyway, very soon we’ll see that (e.g.) one story point is worth 2.5 hours, so it is better to skip the intermediate step and just measure in hours.

Argument 4: Story Points don’t allow you to improve your estimation techniques.

Thursday, May 17, 2007

Agile or agile, Lean or lean?

Recently, I have seen a trend where other bloggers and authors are having difficulty with, or are trying to define, the difference between using Agile (capital-A) vs. agile and Lean vs. lean. Some would say that by putting a capital in front of it you mean you are focusing on a particular methodology or defined Agile process. So if you are referring to something like XP or Scrum, you should use Agile. Otherwise, if you are talking about principles and particular practices, you should use agile. Others don't want to put a label on "Lean" or "Agile" because, I guess, they are afraid of watering down or maybe commercializing those terms. Other people believe that you should use capitals in some situations and not in others.

Frankly, I just don't get it. All this brings is more confusion to the community. To me, putting a capital on Lean and Agile lets my readers know that I am talking about the values, principles, practices, tools, measures, etc. of a way to develop software differently. Without the capitals, I would risk confusing people with the generic terms lean and agile, which are:

lean

1. To hang outwards.
2. To press against.
3. (of a person) slim not fleshy or having little fat
4. (of meat) having little fat
5. Having little extra or little to spare; as a lean budget
6. (of a fuel-air mixture) having more air than is necessary to burn all of the fuel; more air- or oxygen- rich than necessary for a stoichiometric reaction

agile

1. Having the faculty of quick motion in the limbs; apt or ready to move; nimble; active; as, an agile boy; an agile tongue.

Agile and Lean are states of mind and a way to develop software; they share common values and principles, but have many practices and tools to implement them. agile and lean? They don't tell me much. I'm going to use capitals!

Tuesday, May 15, 2007

Addressing Technical Debt

What is Technical Debt and how do you manage it? These are two questions addressed over at the Lean Software Engineering blog.

In the first post, called "7 sources of technical debt", Bernie creates his list of where technical debt comes from:

1. Undiscovered bugs, failing tests, and open bugs in your database
2. Missing test automation (unit, feature, scenario, or system)
3. Missing build and deployment automation
4. Scenarios with incomplete user experiences
5. Code that’s too difficult to understand
6. Code that’s too difficult to extend
7. Code that’s isolated in branches


In the follow-up post, called "7 strategies for measuring and tackling technical debt", Bernie provides his list of potential remedies to reduce or eliminate this debt:

1. Bugs. Measure and shrink your cycle time from when code is first written until it is tested (TDD is perfect for this, of course). Treat open bugs as work in progress (WIP) — and too much work in progress is evil. Constantly measure and manage WIP down with strategies like bug caps (no new features until existing bugs get below a defined bar).

2. Missing tests. Measure code coverage. Have a clear definition of “done” at each scale of your product: unit, feature, scenario, system. Treat units of code as incomplete until code, tests, and documentation are delivered as a set.

3. Missing automation. Make build and deployment first-class features of your project. Deliver them early, and assume (like other features) that they will constantly evolve with sub-features over the course of the project. Measure time-to-build, time-to-deploy, and time-to-test — including human time expended each cycle — and then drive those numbers down by adding automation work to your backlog.

4. Incomplete scenarios. Assign scenario ownership roles among the team. Have a clear definition of “done” for each scenario. Pre-release working scenarios to select customers as soon as possible, making special note of these bugs. And of course, minimize the number of scenarios each team focuses on at once (one at a time, if possible).

5. Not understandable. Like most open source projects, generate docs directly from the code (literate programming), including why the code does what it does. Systematically conduct code reviews. If the team can’t afford to review every line, always review a sample set of functions from each developer and component. Ask reviewers to rate code for understandability, track this data in code comments or a spreadsheet, and add workitems to the backlog to clean up those most needing improvement.

6. Not extensible. Encourage design pattern thinking. Buy your team a library of design patterns books and posters to help form that common vocabulary. Look at how open source projects tend to build big things out of many small, independent pieces — treat every component as a library with its own API and a clear (and minimized) set of dependencies.

7. Not integrated. Minimize the number of branched codelines, whether they’re formally in source code control, done on the side for customers, or sitting on developer’s disk drives. Unintegrated code is debt — it has work and surprises waiting to happen. Strive for continuous integration. This isn’t to say don’t use branching — even the smallest project should think about their mainline model — but always keep everything in source control, then track and minimize the number of active branches.


This is by no means an exhaustive list, as I'm sure there are other contributors to debt and other possible solutions. However, I agree with Bernie that these are the most common things to be on the lookout for. Just think how much better the quality of the software could be, and how much more a team could deliver in new features, if teams would see things in terms of debt!
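As one example of turning these strategies into something actionable, here is a minimal sketch of the bug-cap idea from strategy #1 (treating open bugs as work in progress and holding off on new features until the count drops below an agreed bar). The cap and the bug count are my own assumptions for illustration, not anything prescribed in Bernie's posts.

```python
# Hypothetical bug cap: open bugs are work in progress, and too much WIP is evil.
BUG_CAP = 10  # assumed team agreement: no new features above this many open bugs

def can_start_new_feature(open_bug_count: int, cap: int = BUG_CAP) -> bool:
    """Return True only when the open-bug WIP is below the agreed cap."""
    return open_bug_count < cap

open_bugs = 14
if can_start_new_feature(open_bugs):
    print("Open bugs are under the cap: pull the next feature from the backlog.")
else:
    print(f"{open_bugs} open bugs exceed the cap of {BUG_CAP}: pay down debt first.")
```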

Read more in both of these posts.

Monday, May 14, 2007

Participate in the making of a new book on Agile

James Shore, a local consultant here in Portland, along with Shane Warden, is writing a book called The Art of Agile Development to be published later this year. If you want to get a glimpse of what the book will be about, they have a working draft on Jim's website.

Here's a portion of the preface to give you an idea of what the book is about:

We want to help you master the art of agile development.

Agile development, like any approach to team-based software development, is a fundamentally human art, one subject to the vagaries of individuals and their interactions. To master agile development, you must learn to evaluate myriad possibilities, moment to moment, and intuitively pick the best course of action.

How can you possibly learn such a difficult skill? Practice!

Most of this book is an étude. An étude is a piece of music that's also a teaching tool. Études help the artist learn difficult technical skills. The best études are also musically beautiful.

Our agile étude serves two purposes. First and foremost, it's a detailed description of one way to practice agile development. It's a practical guide that, if followed mindfully, will allow you to successfully bring agile development to your team—or help you decide that it isn't a good choice in your situation.

Our second purpose is to help you master the art of agile development. Team software development is too nuanced for any book to tell you how to master it. Mastery comes from within: from experience and an intuitive understanding of ripples caused by the pebble of a choice.

We can't teach you how your choices will ripple throughout your organization. We don't try. You must provide the nuance and understanding. This is the only way to master the art. Follow the practices. Watch what happens. Think about why they worked... or didn't work. Then do them again. Watch what happens. What was the same? What was different? Why? Then do it again. And again. Watch what happens each time.

At first, you may struggle to understand how to do each practice. They will look easy on paper, but putting each practice into action will sometimes be difficult. Keep practicing until it's easy.

As it gets easier, you will discover that some of our rules don't work for you. In the beginning, you won't be able to tell if the problem is in our rules or in the way you're following them. Keep practicing until you're certain. Then break the rules.

Parts I and II of this book contain our agile étude. Part I helps you get started; Part II provides detailed guidance for each practice. Parts I and II should keep you occupied for many months.

When you're ready to break the rules, turn to Part III. A word of warning: there is nothing in Part III that will help you practice agile development. Instead, it's full of ideas that will help you break rules.

One day you'll discover that rules no longer hold any interest for you. After all, agile development isn't about following rules. "It's about simplicity, and feedback... communication and trust," you'll think. "It's about delivering value—and having the courage to do the right thing at the right time." You'll intuitively evaluate myriad possibilities, moment to moment, and pick the best course of action.

When you do, pass this book on to someone else, dogeared and ragged though it may be, so that they too may master the art of agile development.


But don't just take a look at this draft of the book, help the authors out! You can participate in the review process by joining the art-of-agile mailing list! I'm sure they will appreciate the help, and you can feel good about contributing back to the Agile community!

Thursday, May 10, 2007

Video: Agile Testing

Here is another great video. This time, the topic is Agile Testing. I met the speaker, Elisabeth Hendrickson, a few months ago at Agile Open Northwest here in Portland. She led a couple of sessions, and they were some of the best attended and most interesting ones of the bunch. This lady has ENERGY and a PASSION (BIG CAPS HERE) about testing. Before meeting her, I didn't know somebody could care so much about testing software. This video is from December 2006, but is still very relevant. If you are transitioning to Agile and have been used to having a separate testing group, this video is for you. If you are still struggling with how testing should be done on an Agile project, you will get something of value. Elisabeth does a great job of helping you understand the shift that needs to take place to do testing differently within the context of Agile. Here is the video:

Wednesday, May 9, 2007

Changing the way you do long term planning

We are in the process of implementing a pull scheduling system for each of our product development teams. No longer do we think in terms of projects, but in terms of releases and iterations. No longer do we figure out the entire scope in detail of what will be contained in the product, but we determine the details when we are ready in a particular iteration in a release. No longer do we have rigid control of scope for the entire project, but we try to contain what we do in a 2-week iteration once it has been planned and committed by the team. No longer do we try to figure out exactly what we will do in the next 1-2 years, but we "pull" enough work to determine the next release when we are ready to think about it. No longer do we wait for customer feedback only at the end of the project through alpha and beta cycles, but we get internal and external feedback through demos at the end of each iteration. No longer do we conduct vague and long weekly project status meetings, but we conduct quick and concise daily standup meetings. Our world has definitely changed, but our executive stakeholders have struggled a little with it. In particular, if you are only planning within short iterations and the current release cycle, how do you do long-term product planning for customers who want some idea of a delivery date for something "down the road"?

I have no problem in planning long-term if the planning is focused more on priorities than dates. Though the team is pulling just enough work for the next release, the more work that is prioritized and thought out (at least to a high level) will make the release and iteration planning go much smoother. And, actually I don't even have a problem committing to a particular date for the next release or any iteration in the release because those are fixed. However, committing to particular scope outside of the current iteration is something that I find challenging. Why?

Because we now only look at the details of the release within the current iteration, we haven't given enough thought to the remaining stories in the release. Sure, we have done some high-level estimates, but we have learned that what you think you are going to do after thinking about it for a few minutes is much different from what you plan after thinking about it for a few hours in iteration planning. Until you really start looking at the particular tasks and tests that the team will do in fulfilling each story, you don't really have an accurate estimate. And even with that said, you may find surprises within the iteration after planning.

The other challenge is that you are dealing with change in a whole different way. When you find a defect, you don't automatically "log" it in a repository because you don't have the time to fix it during the project. Instead, you deal with defects as they happen. When you show functionality in a demo, you don't automatically log future enhancements to be considered in future projects, but you make appropriate changes if they are valuable and while they are fresh in the team's minds. When new issues come up, you create new stories to deal with them if it is deemed the right time. All of these things change scope, which changes what is done in a particular release. This changes the expected delivery dates many times, which makes it very difficult to predict the scope of a release and almost impossible to predict when new functionality will be available in 12-18 months.

Now, this may sound bad initially to your stakeholders, because they want to make promises to customers as soon as possible. So, this may sound like Agile and Lean are the wrong way to go, right? While I can't predict what will be done in 18 months, I can show you progress every two weeks. This progress will have higher quality because it has been tested and could be released to customers if desired. What we work on has been determined to be the most important to our customers, so this is a good thing. The best thing of all? The fact that we make sure every deliverable of minimal useful functionality really is what the customer wants, because the customer has been involved more frequently. So does it really matter if we try to fix scope, resources, and schedule with a strict change control process if in the end we don't deliver what is really wanted?

So, what I have agreed on with the stakeholders is that we will go out beyond a release, but strictly from a planning and goals perspective. At the point where it is determined that the goals are not attainable in the process, I will make sure that the stakeholders are notified and can adjust the scope or define new goals based on this information. What's interesting is that I will never have to have that kind of conversation, because the stakeholders will be involved in every part of the process. They will see progress being made, they will set priorities of work, they will attend demos and make changes, and they will see the impact of those changes. In the end, they will learn that they can draw the "line in the sand" when they feel it is most appropriate. Based on the confidence they are gaining with the teams, they will change the long-term discussion with the customer to what is being developed rather than what is being planned. When that isn't enough, they can give the customer an idea of the priority of the work being considered. If the customer feels that the priority isn't meeting their needs, they can voice their opinion. Then the stakeholders can determine what to do with that feedback and make priority changes as necessary. As long as customers see steady progress with frequent releases that are meeting needs, they should be satisfied with this approach.

Tuesday, May 8, 2007

Video: Agile Estimating and Planning

I guess this week is the week of video postings, as I have discovered some more great ones to share with you. Today's is from Mike Cohn. I went to a seminar a couple of years ago with the same topic and presenter as this video. There is great content here, especially if your team is struggling with estimating and doesn't know about the story points and planning poker concepts. Mike's presentation style is also engaging and entertaining. This presentation is in two parts; click the videos below to watch each part.

Part One



Part Two

Monday, May 7, 2007

Video: Scrum at Google

Jeff Sutherland provides an excellent video discussing how the Google AdWords team adopted and adapted Scrum to fit their team and the Google environment. As my organization has been going through its adoption of Agile, I have seen similar "lessons learned" to those described in the video. I think there is a lot of valuable knowledge to be gained by listening to Jeff. Enjoy!

Friday, May 4, 2007

Another View of Lean Leadership

Joe Little from Agile and Business has an excellent post today called "Leaders, Managers, Bosses and Administrators". Actually, he focuses on what leadership means in the Lean and Agile world. He refers to the Poppendieck book, "Implementing Lean Software Development: From Concept to Cash" (If I had to pick one book to understand Lean as applied to software, this would be the one!). Here's his take on leadership after reading the book:

They [Poppendiecks] raise several excellent points.

1. A team needs leadership. Which is to say, vision. Someone to inspire and someone to help them put their hearts in the game. And keep it there.

2. The project needs decisiveness. If the team has too many leaders, and the leaders squabble about decisions, then the team wastes time. The team needs to know how it will make decisions. There is a trade-off between making the right decisions, and making decisions quickly.

3. Generally, the team needs to learn to make decisions only at the last responsible moment. So, much of the decision-making is about when to make a decision. At what point have you learned enough to make a good/better decision?

4. The project needs business decisions and technical decisions. This is very true. So the team needs business people and needs technical people who are ready and able to make those decisions. And, preferably, business people who understand technology and technology people who understand business (and the customers).

5. And there are many other types of decisions to be made too. People decisions. Decisions about whose insights to go with on specific areas. Process decisions. Decisions about who is working effectively and who is not. Decisions about how to get the team to communicate better and learn faster.

6. In Scrum, we have ScrumMasters and Product Owners. These roles are endowed with certain leadership aspects. This is different than the leadership of the Chief Engineer, which is a role Toyota uses.

7. Project managers have also provided leadership. (And we have the whole PMP, PMI thing, too.) PMs have also provided managership, bossiness, administration and other things.

8. We know that no bosses are wanted. We want all the best from every person, and a boss will only inhibit that. A boss wants to command others, and thinks he knows all. These are not helpful traits in a learning situation. (So, semantically, we are using "boss" here to represent all the bad things that a boss can be. Of course, few managers or leaders are as bad as a boss, but we all can be that way sometimes.)


Read more in Joe's post.

Wednesday, May 2, 2007

The Agile Team's Dashboard

Last summer, I shopped for a new car. With rising gas prices, I decided to purchase a hybrid. My goal in buying this car was to improve my miles per gallon (MPG) so I didn't have to spend as much, as often, at the pump. With this car, I have two gauges that I didn't have before. The first is next to my speedometer and shows my current MPG (or electric use if no gas is being used). The other gauge shows me the tank average for MPG.

At first, I wasn't used to these measures and didn't pay much attention to them, so my driving habits were similar to before: I drove the car as I had driven the previous one. As I got familiar with the car, I started to explore both of these gauges and experiment with how to increase my MPG. I discovered that if I accelerated a little slower after being stopped, it made a difference. I discovered that I could coast more often when going downhill instead of hitting the gas and still maintain the same speed (and improve my MPG). As a result, I started to see the MPG increase.

Then, I decided not to pay attention to either gauge for a while and just drive the car using what I had learned, to see if I got the same results. Though I did better than when I first started using the car, I still got lower MPG by not paying attention to these gauges. These gauges were a good visual tool to help me accomplish my goal of higher MPG, and a good reminder when my bad habits would come back and cause my MPG to drop.

So, why all of this talk about cars? Well, think about Agile teams. Their goal is to develop as much working (deliverable, quality, meets customer needs) software as possible in set increments of time. How do they measure this? There are two gauges that most Agile teams use to ensure they focus on the goal of delivering working software.

First, the team calculates their Velocity from Iteration to Iteration. Velocity measures the total amount of work that was accomplished to deliver working, completed stories by the end of the iteration. If a story is partially completed, the team does not get credit until the iteration in which the story has been completed. This encourages teams to focus on completing stories rather than completing tasks while leaving stories unfinished. If the team is focusing on the completion of stories, and is a dedicated team of individuals, you should be able to sustain a certain range of Velocity from iteration to iteration. Where Velocity gets lower, you should look at why less is getting done. Where Velocity is higher, you need to see why the team was able to accomplish more. If the team pays attention to Velocity, they will begin to adopt better development practices (much like my driving habits improved after paying attention to MPG).
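A minimal sketch of the no-partial-credit rule (the story names, point values, and statuses below are made up) might look like this:

```python
# Hypothetical iteration data: only stories that are fully "done" count toward velocity.
iteration_stories = [
    {"name": "export report", "points": 5, "done": True},
    {"name": "login cleanup", "points": 3, "done": True},
    {"name": "batch import", "points": 8, "done": False},  # partially complete: no credit yet
]

velocity = sum(s["points"] for s in iteration_stories if s["done"])
print(f"Iteration velocity: {velocity} points")  # 8, not 16
```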

Second, the team uses a chart to track progress at both a Release and Iteration level. The burndown chart shows how work is being completed on a "trip" by "trip" basis. At a Release level, the chart shows how each Iteration is progressing, and the team can get a good idea of what amount of scope is possible based on this information. At an Iteration level, the chart tracks the daily remaining hours for the team and is used in the Daily Standup to see progress. If the chart isn't going down as quickly as expected, this is an indication that the team has impediments that must be addressed. These impediments could be thrashing (the team isn't getting to a resolution), dependencies on other stories/tasks (the team needs to remove dependencies where they can), or additional scope (additional stories and/or tasks have been determined necessary to complete the iteration). In any of these cases, the team can quickly see the impediment's impact on the iteration and make adjustments.
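As a rough illustration (the hours below are invented), an iteration burndown is simply the remaining hours reported each day compared against an ideal straight line down to zero; days that sit above the line are the cue to go looking for impediments.

```python
# Hypothetical iteration burndown: remaining hours reported at each Daily Standup.
iteration_days = 10
total_hours = 200
remaining = [200, 185, 180, 170, 168, 150, 130, 120, 90, 60, 30]  # day 0 through day 10

ideal_per_day = total_hours / iteration_days
for day, hours in enumerate(remaining):
    ideal = total_hours - day * ideal_per_day
    note = "  <-- above the ideal line: look for impediments" if hours > ideal else ""
    print(f"Day {day:2}: {hours:3}h remaining (ideal {ideal:5.1f}h){note}")
```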

Both Velocity and Burndown charts are good gauges to ensure that you are doing what you can to deliver working software. These measures should expose some problems (waste, in Lean thinking) that need to be resolved. Most of the time this requires a change in how you do things in order to accomplish the goal better. If Agile teams aren't using these gauges, then much like my driving, they can get back into bad habits and find themselves not having great results after each Iteration.