INVEST in Good Stories, and SMART Tasks

XP teams have to manage stories and tasks. The INVEST and SMART acronyms can remind teams of the good characteristics of each.

In XP, we think of requirements as coming in the form of user stories. It would be easy to mistake the story card for the "whole story," but Ron Jeffries points out that stories in XP have three components: Cards (their physical medium), Conversation (the discussion surrounding them), and Confirmation (tests that verify them).

A pidgin language is a simplified language, usually used for trade, that allows people who can't communicate in their native language to nonetheless work together. User stories act like this. We don't expect customers or users to view the system the same way that programmers do; stories act as a pidgin language where both sides can agree enough to work together effectively.

But what are characteristics of a good story? The acronym "INVEST" can remind you that good stories are:

  • I - Independent
  • N - Negotiable
  • V - Valuable
  • E - Estimable
  • S - Small
  • T - Testable

Independent

Stories are easiest to work with if they are independent. That is, we'd like them to not overlap in concept, and we'd like to be able to schedule and implement them in any order.

We can't always achieve this; once in a while we may say things like "3 points for the first report, then 1 point for each of the others."

Negotiable... and Negotiated

A good story is negotiable. It is not an explicit contract for features; rather, details will be co-created by the customer and programmer during development. A good story captures the essence, not the details. Over time, the card may acquire notes, test ideas, and so on, but we don't need these to prioritize or schedule stories.

Valuable

A story needs to be valuable. We don't care about value to just anybody; it needs to be valuable to the customer. Developers may have (legitimate) concerns, but these need to be framed in a way that makes the customer perceive them as important.

This is especially an issue when splitting stories. Think of a whole story as a multi-layer cake, e.g., a network layer, a persistence layer, a logic layer, and a presentation layer. When we split a story, we're serving up only part of that cake. We want to give the customer the essence of the whole cake, and the best way is to slice vertically through the layers. Developers often have an inclination to work on only one layer at a time (and get it "right"); but a full database layer (for example) has little value to the customer if there's no presentation layer.

Making each slice valuable to the customer supports XP's pay-as-you-go attitude toward infrastructure.

Estimable

A good story can be estimated. We don't need an exact estimate, but just enough to help the customer rank and schedule the story's implementation. Being estimable is partly a function of being negotiated, as it's hard to estimate a story we don't understand. It is also a function of size: bigger stories are harder to estimate. Finally, it's a function of the team: what's easy to estimate will vary depending on the team's experience. (Sometimes a team may have to split a story into a (time-boxed) "spike" that will give the team enough information to make a decent estimate, and the rest of the story that will actually implement the desired feature.)

Small

Good stories tend to be small. Stories typically represent at most a few person-weeks' worth of work. (Some teams restrict them to a few person-days of work.) Above this size, it seems to be too hard to know what's in the story's scope. Saying, "it would take me more than a month" often implicitly adds, "as I don't understand what-all it would entail." Smaller stories tend to get more accurate estimates.

Story descriptions can be small too (and putting them on an index card helps make that happen). Alistair Cockburn described the cards as tokens promising a future conversation. Remember, the details can be elaborated through conversations with the customer.

Testable

A good story is testable. Writing a story card carries an implicit promise: "I understand what I want well enough that I could write a test for it." Several teams have reported that by requiring customer tests before implementing a story, the team is more productive. "Testability" has always been a characteristic of good requirements; actually writing the tests early helps us know whether this goal is met.

If a customer doesn't know how to test something, this may indicate that the story isn't clear enough, or that it doesn't reflect something valuable to them, or that the customer just needs help in testing.

A team can treat non-functional requirements (such as performance and usability) as things that need to be tested. Figuring out how to operationalize these tests will help the team learn the true needs.
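
For example, here is a minimal sketch (using JUnit; the ReportGenerator stub is invented for illustration, not taken from any real project) of operationalizing a performance requirement such as "the monthly report appears within two seconds" as a test:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Invented sketch of operationalizing a non-functional requirement as a test.
public class ReportPerformanceTest {

    // Stub so the sketch is self-contained; a real team would call production code here.
    static class ReportGenerator {
        String monthlyAttendanceReport() {
            return "report";
        }
    }

    @Test
    public void monthlyReportIsGeneratedWithinTwoSeconds() {
        long start = System.currentTimeMillis();
        new ReportGenerator().monthlyAttendanceReport();
        long elapsed = System.currentTimeMillis() - start;
        assertTrue("report took " + elapsed + " ms", elapsed < 2000);
    }
}

Writing even a rough check like this forces the conversation about what "fast enough" actually means.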

 

For all these attributes, the feedback cycle of proposing, estimating, and implementing stories will help teach the team what it needs to know.

SMART Tasks

There is an acronym for creating effective goals: "SMART" -

  • S - Specific
  • M - Measurable
  • A - Achievable
  • R - Relevant
  • T - Time-boxed

(There are a lot of variations in what the letters stand for.) These are good characteristics for tasks as well.

Specific

A task needs to be specific enough that everyone can understand what's involved in it. This helps keep other tasks from overlapping, and helps people understand whether the tasks add up to the full story.

Measurable

The key measure is, "can we mark it as done?" The team needs to agree on what that means, but it should include "does what it is intended to," "tests are included," and "the code has been refactored."

Achievable

The task owner should expect to be able to achieve a task. XP teams have a rule that anybody can ask for help whenever they need it; this certainly includes ensuring that task owners are up to the job.

Relevant

Every task should be relevant, contributing to the story at hand. Stories are broken into tasks for the benefit of developers, but a customer should still be able to expect that every task can be explained and justified.

Time-Boxed

A task should be time-boxed: limited to a specific duration. This doesn't need to be a formal estimate in hours or days, but there should be an expectation so people know when they should seek help. If a task is harder than expected, the team needs to know it must split the task, change players, or do something to help the task (and story) get done.

Conclusion

As you discuss stories, write cards, and split stories, the INVEST acronym can help remind you of characteristics of good stories. When creating a task plan, applying the SMART acronym can improve your tasks.

Pairing Pattern: Ping Pong Pairing

One of the struggles people can have when they first start pairing is understanding when it is time to drive and when it is time to watch. Developing a good tempo to the act of pairing – and understanding when the changeover should occur – can make it seem like a much more fluid activity. When it is working well, outsiders will see the keyboard moving backwards and forwards between the pair (albeit perhaps slightly slower than a game of table tennis!).

If one pair member hogs the keyboard too much, the other member can feel that they are not properly involved with development. Depending on your development tools and build times, you may need to identify different points at which to pass control. The important thing is to ensure that both members of the pair get to feel equally involved in the development. Set yourselves a target for the maximum duration for each member to have control of the keyboard – ten minutes seems a good target to aim for, but a shorter duration may work better for you.

Example – Test, Implement, Refactor, Switch

When using Test Driven Development, a good way to develop this tempo is to use the acts of writing a test and making it pass to define when to change over. I've seen success in having person A write the test, then having person B get the test to pass, refactor, and write the next test before passing the keyboard back to person A.
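
As a minimal invented sketch (JUnit-style; the BoundedStack class is made up purely for illustration), one such round might look like this:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BoundedStackTest {

    // Person A writes this failing test, then passes the keyboard.
    @Test
    public void newStackIsEmpty() {
        assertEquals(0, new BoundedStack(5).size());
    }

    // Person B makes the first test pass, refactors, then writes this next
    // failing test before handing the keyboard back to person A.
    @Test
    public void pushIncreasesSize() {
        BoundedStack stack = new BoundedStack(5);
        stack.push("first");
        assertEquals(1, stack.size());
    }
}

// The minimal production code grown between the tests above.
class BoundedStack {
    private final Object[] items;
    private int count;

    BoundedStack(int capacity) {
        this.items = new Object[capacity];
    }

    void push(Object item) {
        items[count++] = item;
    }

    int size() {
        return count;
    }
}

The handover points (after each new failing test) give the pair a natural rhythm without anyone having to watch the clock.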

Extreme Example – Chess Clocks

This example was related to me by a colleague. The team in question had chess clocks at each pairing station. The idea was that each member of the pair got to drive for four hours of the eight-hour day. To keep track, at each switchover they'd click the chess clocks to start the other person's timer. If at the end of the day you'd used up all your time, you had to watch. Very quickly each pair worked out a dynamic in which the time became equally distributed – I'd certainly have liked some video footage though!

Fluent Interface

I just read a good article by Martin Fowler on "Fluent Interface". Here is the abstract...

A few months ago I attended a workshop with Eric Evans, and he talked about a certain style of interface which we decided to name a fluent interface. It's not a common style, but one we think should be better known. Probably the best way to describe it is by example.

The simplest example is probably from Eric's timeAndMoney library. To make a time interval in the usual way we might see something like this:

TimePoint fiveOClock, sixOClock;
...
TimeInterval meetingTime = new TimeInterval(fiveOClock, sixOClock);

The timeAndMoney library user would do it this way:

TimeInterval meetingTime = fiveOClock.until(sixOClock);

I'll continue with the common example of making out an order for a customer. The order has line-items, with quantities and products. A line item can be skippable, meaning I'd prefer to deliver without this line item rather than delay the whole order. I can give the whole order a rush status.
The most common way I see this kind of thing built up is like this:

private void makeNormal(Customer customer)
{
    Order o1 = new Order();
    customer.addOrder(o1);

    OrderLine line1 = new OrderLine(6, Product.find("TAL"));
    o1.addLine(line1);

    OrderLine line2 = new OrderLine(5, Product.find("HPK"));
    o1.addLine(line2);

    OrderLine line3 = new OrderLine(3, Product.find("LGV"));
    o1.addLine(line3);

    line2.setSkippable(true);
    o1.setRush(true);
}


In essence we create the various objects and wire them up together. If we can't set up everything in the constructor, then we need to make temporary variables to help us complete the wiring - this is particularly the case where you're adding items into collections.
Here's the same assembly done in a fluent style:

private void makeFluent(Customer customer)
{

customer.newOrder()
.with(6, "TAL")
.with(5, "HPK").skippable()
.with(3, "LGV")
.priorityRush();

}


Probably the most important thing to notice about this style is that the intent is to do something along the lines of an internal Domain Specific Language. Indeed this is why we chose the term 'fluent' to describe it, in many ways the two terms are synonyms. The API is primarily designed to be readable and to flow. The price of this fluency is more effort, both in thinking and in the API construction itself. The simple API of constructor, setter, and addition methods is much easier to write. Coming up with a nice fluent API requires a good bit of thought.

Indeed one of the problems of this little example is that I just knocked it up in a Calgary coffee shop over breakfast. Good fluent APIs take a while to build. If you want a much more thought out example of a fluent API take a look at JMock. JMock, like any mocking library, needs to create complex specifications of behavior. There have been many mocking libraries built over the last few years; JMock contains a very nice fluent API which flows very nicely. Here's an example expectation:

mock.expects(once()).method("m").with(
or(stringContains("hello"),
stringContains("howdy")) );

I saw Steve Freeman and Nat Pryce give an excellent talk at JAOO 2005 on the evolution of the JMock API; they have since written it up in an OOPSLA paper.


So far we've mostly seen fluent APIs to create configurations of objects, often involving value objects. I'm not sure if this is a defining characteristic, although I suspect there is something about them appearing in a declarative context. The key test of fluency, for us, is the Domain Specific Language quality. The more the use of the API has that language like flow, the more fluent it is.

Building a fluent API like this leads to some unusual API habits. One of the most obvious is setters that return a value. (In the order example, with adds an order line to the order and returns the order.) The common convention in the curly brace world is that modifier methods are void, which I like because it follows the principle of CommandQuerySeparation. This convention does get in the way of a fluent interface, so I'm inclined to suspend the convention for this case.

You should choose your return type based on what you need to continue fluent action. JMock makes a big point of moving its types depending on what's likely to be needed next. One of the nice benefits of this style is that method completion (intellisense) helps tell you what to type next - rather like a wizard in the IDE. In general I find dynamic languages work better for DSLs since they tend to have a less cluttered syntax. Using method completion, however, is a plus for static languages.
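
To make the mechanics concrete, here is a small sketch of how such a chaining API might be implemented. This is my own illustration (the FluentOrder class and its Line helper are invented), not the article's actual code:

import java.util.ArrayList;
import java.util.List;

// Invented sketch: each chained method returns the order itself, the
// "setter that returns a value" habit discussed above.
class FluentOrder {

    static class Line {
        final int quantity;
        final String productCode;
        boolean skippable;

        Line(int quantity, String productCode) {
            this.quantity = quantity;
            this.productCode = productCode;
        }
    }

    private final List<Line> lines = new ArrayList<>();
    private boolean rush;

    FluentOrder with(int quantity, String productCode) {
        lines.add(new Line(quantity, productCode));
        return this;                                   // keep the sentence flowing
    }

    FluentOrder skippable() {
        lines.get(lines.size() - 1).skippable = true;  // applies to the line just added
        return this;
    }

    void priorityRush() {
        rush = true;                                   // end of the sentence, nothing left to chain
    }
}

A caller would then write something like new FluentOrder().with(6, "TAL").with(5, "HPK").skippable().with(3, "LGV").priorityRush();.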

One of the problems of methods in a fluent interface is that they don't make much sense on their own. Looking at with in a method browser or in method-by-method documentation doesn't make much sense. Indeed, sitting there on its own I'd argue that it's a badly named method that doesn't communicate its intent at all well. It's only in the context of the fluent action that it shows its strengths. One way around this may be to use builder objects that are only used in this context.

One thing that Eric mentioned was that so far he's used, and seen, fluent interfaces mostly around configurations of value objects. Value objects don't have domain-meaningful identity so you can make them and throw them away easily. So the fluency rides on making new values out of old values.

I haven't seen a lot of fluent interfaces out there yet, so I conclude that we don't know much about their strengths and weaknesses. So any exhortations to use them can only be preliminary - however I do think they are ripe for more experimentation.

Implementing Agile in an Agile Manner

I was attending the Scrum meeting with my team; normally it is followed by a 5-minute TidBit session given by someone on a round-robin basis. One of my team members raised a very good point: he gave an example of how we have successfully implemented Agile in our team, and he pointed out that we brought in incremental changes. That means we implemented Agile in the Agile way. We didn't rush into things; we took our time and moved slowly, bringing in changes with each iteration.

Here in this article I will explain the manner in which we adopted things, and it worked :). We did it in small steps; the first one was adopting the basics. We started with:
1. Scrum meetings: We didn't use the whiteboard at this point, but we concentrated on sharing work details with each other (what we did in the last 24 hours, what our plans are for the next 24 hours, and roadblocks in case we had not achieved the promised objectives).
2. Pair Programming: We started with the practice of Pair Programming and sharing sessions.

Once these resulted in better trust and self-accountability, we moved to the next step and introduced some XP practices like:
3. TDD: Test Driven Development
4. Testing Automation: We introduced tools like Watij (Web Application Testing In Java)
5. Continuous Builds: This resulted in tight integration. We used a tool called CruiseControl for this purpose.

As and when we started getting results out of these, we moved one more step. Here we introduced:
6. Simple Sprint & Product Backlogs: To track all the user stories we were working on.
7. TidBit sessions: A 5-minute session immediately after Scrum to encourage team sharing.

This was enormous, and by this time the team had already started feeling Agile. We took one more step:
8. Improved our Product & Sprint Backlogs: Improved the format to calculate velocity, and put up some Definitions of Done :0).
9. Iteration Planning Meeting: To ensure that we were carefully planning our activities.
10. Sprint Demo: Where developers demonstrate the work they have done during the sprint in an already agreed-upon manner.
11. Retrospective Meeting: A meeting held right after our Sprint Demo to find out the good things we did in this sprint that should be continued, and the things that should be removed from the sprint immediately.

That's it. We do make small changes to these processes as we follow them, but overall we implemented Agile in an Agile manner. :)

Scrum Unplugged

What is Scrum?

A variation on Sashimi, an "all-at-once" approach to software engineering. Both Scrum and Sashimi are best suited to new product development rather than extended development. Sashimi originated with the Japanese and their experiences with the Waterfall model. They had the same problems with the Waterfall model as everybody else, so they adapted it to suit their own style. Realizing that speed and flexibility are as important as high quality and low cost, they reduced the number of phases to four -- requirements, design, prototype, and acceptance -- without removing any activities, which resulted in overlap of the Waterfall phases; then they made the four phases themselves overlap. (Sashimi is a way of presenting sliced raw fish where each slice rests partially on the slice before it.) Other companies took Sashimi one step further, reducing the phases to one and calling it Scrum. (A scrum is a team pack in Rugby; everybody in the pack acts together with everyone else to move the ball down the field.)

Applying Scrum

For each Waterfall phase there is a pool of experienced people available; form a team by selecting one person from each pool. Call a team meeting and tell them that they have been selected to do an important project. Describe the project, including how long it's estimated to take, how much it is estimated to cost, how it is expected to perform, etc. Now tell them that their job is to do it in half the time, at half the cost, with twice the performance, etc. Tell them that how it's done is up to them, and explain that your job is to support them with resources. Now leave.
Stand by, give advice if it's requested, and wait. Don't be surprised if a team member thinks the whole thing is insane and leaves. You'll get regular reports, but mostly you'll just wait. At somewhere around the expected time, the team will produce the system with the expected performance and cost.

How does Scrum work?

The first thing that happens is the initial leader will become primarily a reporter. The leadership role will bounce around within the team based on the task at hand. Soon QA developers will be learning how requirements are done and will be actively contributing, and requirements people will be seeing things from a QA point of view. As work is done in each of the phases, all the team learns and contributes, no work is done alone, the team is behind everything. From the initial meeting, the finished product is being developed. Someone can be writing code, working on functional specifications, and designing during the same day, i.e. "all-at-once". Don't be surprised if the team cleans the slate numerous times, many new ways will be picked up and many old ways discarded. The team will become autonomous, and will tend to transcend the initial goals, striving for excellence. The people on the team will become committed to accomplish the goal and some members may experience emotional pain when the project is completed.

Why does Scrum Work?

The basic premise is that if you are committed to the team and the project, and if your boss really trusts you, then you can spend time being productive instead of justifying your work. This reduces the need for meetings, reporting and authorization. There is control, but it is subtle and mostly indirect. It is exercised by selecting the right people, creating an open work environment, encouraging feedback, establishing an evaluation and reward program based on group performance, managing the tendency to go off in different directions early on, and tolerating mistakes. Every person on the team starts with an understanding of the problem, associates it with a range of solutions experienced and studied, then using skill, intelligence, and experience, will narrow the range to one or a few options.

Keep in mind that it can be difficult to give up the control that it takes to support the Scrum methodology. The approach is risky; there is no guarantee that the team will not run up against real limits, which could kill the project. The disappointment of the failure could adversely affect the team members because of the high levels of personal commitment involved. Each person on the team is required to understand all of the problem and all of the steps in developing a system to solve it; this may limit the size of the system that can be developed using the methodology.

Planning Poker (Time Estimations)

Time estimating using planning poker


Estimating is a team activity - every team member is usually involved in estimating every story. Why?
  • In Agile, at the time of planning we normally don’t know exactly who will be implementing which parts of which stories. Stories normally involve several people and different types of expertise (user interface design, coding, testing, etc).
  • In order to provide an estimate, a team member needs some kind of understanding of what the story is about. By asking everybody to estimate each item, we make sure that each team member understands what each item is about. This increases the likelihood that team members will help each other out during the sprint. This also increases the likelihood that important questions about the story come up early.
  • When asking everybody to estimate a story we often discover discrepancies where two different team members have wildly different estimates for the same story. That kind of stuff is better to discover and discuss earlier than later.

If you ask the team to provide an estimate, normally the person who understands the story best will be the first one to blurt one out. Unfortunately, this will strongly affect everybody else’s estimates.

There is an excellent technique to avoid this – it is called planning poker (coined by Mike Cohn I think).


Each team member gets a deck of 13 cards (typically 0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100, a question mark, and a coffee-cup "break" card). Whenever a story is to be estimated, each team member selects a card that represents his time estimate (in story points) and places it face-down on the table. When all team members are done, the cards on the table are revealed simultaneously. That way each team member is forced to think for himself rather than lean on somebody else’s estimate.

If there is a large discrepancy between two estimates, the team discusses the differences and tries to build a common picture of what work is involved in the story. They might do some kind of task breakdown. Afterwards, the team estimates again. This loop is repeated until the time estimates converge, i.e. all estimates are approximately the same for that story.

It is important to remind team members that they are to estimate the total amount of work involved in the story. Not just “their” part of the work. The tester should not just estimate the amount of testing work.

Note that the number sequence is non-linear. For example there is nothing between 40 and 100. Why?

This is to avoid a false sense of accuracy for large time estimates. If a story is estimated at approximately 20 story points, it is not relevant to discuss whether it should be 20 or 18 or 21. All we know is that it is a large story and that it is hard to estimate. So 20 is our ballpark guess. Want more detailed estimates? Split the story into smaller stories and estimate the smaller stories instead!

And NO, you can’t cheat by combining a 5 and a 2 to make a 7. You have to choose either 5 or 8, there is no 7.

Some special cards to note:



0 = “this story is already done” or “this story is pretty much nothing, just a few minutes of work”.

? = “I have absolutely no idea at all. None.”

Coffee cup = “I’m too tired to think. Let’s take a short break.”

Extreme Programming Core Practices

The 12 “XP Xtudes” (Xtude in XP means ‘attitude’) of Extreme Programming (XP), grouped into four categories

1. Fine Scale feedback

XP thrives on providing feedback at smaller intervals with higher frequency. This allows controlling deviation at the right time, since in software, or any other industry for that matter, once deviation starts happening it is difficult to control at later stages.

  • Test Driven Development via Programmer Tests (Unit Tests) and Customer Tests (Acceptance Tests/Automation Tests)
  • Planning Game (defining iteration objectives, the playing field, etc.)
  • Whole Team (Onsite Customer + Programmer + Quality Team + Customer Team + Scrum Master + Product Owner)
  • Pair Programming (two engineers participate in one development effort at one workstation)

2. Continuous Process rather than Batch

  • Continuous Integration
  • Refactoring (Design Improvement)
  • Small Releases

3. Shared understanding

  • Coding Standards
  • Collective Code Ownership
  • Simple Design
  • System Metaphor

4. Programmer welfare

  • Sustainable Pace

YAGNI

YAGNI perfectly sums up XP (Xtreme Programming). Here is why and how.

"You Arent Gonna Need It" (often abbreviated YAGNI) is an Extreme Programming (XP)practice which states:

"Always implement things when you actually need them, never when you just foresee that you need them."
Even if you're totally, totally, totally sure that you'll need a feature later on, don't implement it now. Usually, it'll turn out either
  1. You don't need it after all, or
  2. What you actually need is quite different from what you foresaw needing earlier.

This doesn't mean you should avoid building flexibility into your code. It means you shouldn't overengineer something based on what you think you might need later on. There are two main reasons to practice YAGNI:

  • You save time, because you avoid writing code that you turn out not to need.
  • Your code is better, because you avoid polluting it with 'guesses' that turn out to be more or less wrong but stick around anyway.
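
To make that concrete, here is a small invented sketch. If today's story only asks for an invoice total, the YAGNI-friendly class does exactly that and nothing more:

import java.util.ArrayList;
import java.util.List;

// Invented illustration. Today's story: "show the invoice total".
class Invoice {
    private final List<Double> amounts = new ArrayList<>();

    void addAmount(double amount) {
        amounts.add(amount);
    }

    double total() {
        double sum = 0.0;
        for (double amount : amounts) {
            sum += amount;
        }
        return sum;
    }
}

// The speculative version would also take a currency, a rounding strategy and a
// pluggable tax calculator "because we're gonna need them someday" -- none of
// which today's story asks for, and each of which is a guess that may well turn
// out wrong once a real requirement arrives.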

A scenario that explains the practice:

You're working on some class. You have just added some functionality that you need. You realize that you are going to need some other bit of functionality. If you don't need it now, don't add it now. Why not?

"OK, Mohan, why do you want to add it now?"
"Well, Rahul, it will save time later."
But unless your universe is very different from mine, you can't 'save' time by doing the work now, unless it will take more time to do it later than it will to do now. So you are saying:

"We will be able to do less work overall, at the cost of doing more work now."
But unless your project is very different from mine, you already have too much to do right now. Doing more now is a very bad thing when you already have too much to do.

And unless your mind is very different from mine, there is a high chance that you won't need it after all, or that you'll need to rewrite or fix it once you do need it. If either of these happens, not only will you waste time overall, you will prevent yourself from adding things that you do need right now.

"But Rahul, I know how to do it right now, and later I might not."
"So, Mohan, you're telling me that this class you're writing is so complex that even you won't be able to maintain it?"
Keep it simple. If you need it, you can put it in later. If you don't need it, you won't have to do the work at all. Take that day off.

YAGNI in the context of the other Extreme Programming practices

You have a Release Plan: each User Story has been assigned to an Iteration where it will be done. Under the current Iteration Plan, you are working on an Engineering Task that you signed up for, in support of one of the Iteration's User Stories. As always, you have signed up for as much Ideal Programming Time as your Load Factor indicates you can accomplish.

You are evolving the system to have the new functionality required by the User Story, defined in the Engineering Task. You add capability to any class you need to, directly growing it from the requirement. If you find yourself writing duplicate code, you refactor to eliminate it, even (perhaps) adding an abstract class, or making a subclass, etc. You and your co-programmers always keep the code clean.

You're building a class, and suddenly you get an idea for a feature you could add to it. You don't need it right now, but "Someday we're gonna need ...", you say to yourself.

Keep in mind that you are employing other Extreme Programming practices that allow you to deal with the future when it happens. Collective Code Ownership allows you to change anybody else's code to give it the functionality you want. Refactor Mercilessly and Once And Only Once make it easier to understand the best way to add your functionality. Unit Tests help ensure that your added functionality won't break any past functionality. So if you do need to implement this feature in the future, it probably won't be much harder than it would be to implement now.

At this moment, you have a choice: continue working on what you signed up to do, or begin working on something you didn't sign up to do, and that isn't needed in this Iteration.

Therefore, tell yourself YAGNI. Set aside your thoughts and fears about tomorrow and get back to work on today. Without a clear use for the feature, you don't know enough about what is really needed. Spending time on it is speculative at best.

One Responsibility Rule

From Bertrand Meyer's “Object-Oriented Software Construction”, there is the statement:

A class has a single responsibility: it does it all, does it well, and does it only.
Classes, interfaces, functions, etc. all become large and bloated when they're trying to do too many things.

When a function has too many responsibilities, it tends to end up buried in "Special Formatting", which is a "Code Smell".
To avoid bloat and confusion, and to ensure that code is truly simple (not just quick to hack out), we have to practice "Code Normalization", which seems to be a variation on “Once And Only Once” and also “Do The Simplest Thing That Could Possibly Work”.
This is part of “Responsibility Driven Design”.
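
As a small invented illustration: a class that both formats a report and mails it has two responsibilities, and splitting it leaves each class doing one thing only.

import java.util.List;

// Invented example. Before: one class formats the report AND sends the mail.
class ReportMailer {
    String format(List<String> lines) {
        return String.join(System.lineSeparator(), lines);
    }

    void send(String recipient, String body) {
        System.out.println("mailing " + recipient + ": " + body);  // stand-in for real mail code
    }
}

// After: each class has a single responsibility -- it does it all, does it
// well, and does it only.
class ReportFormatter {
    String format(List<String> lines) {
        return String.join(System.lineSeparator(), lines);
    }
}

class Mailer {
    void send(String recipient, String body) {
        System.out.println("mailing " + recipient + ": " + body);  // stand-in for real mail code
    }
}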

Special Formatting :- Sometimes in a method you wind up wanting special formatting. We take it as a sign, instead, that you should rewrite the method. It could refer to the situations where you would want to indent the statements to focus / highlight some important logic.
Code Smell :- A code smell is a hint that something has gone wrong somewhere in your code.
Code Normalization :- The attributes and operations of a class should depend on the responsibility, the whole responsibility, and nothing but the responsibility. If this can't be made to work, the class has lower cohesion, and might be a candidate for refactoring.
Once and Only Once :- One of the main goals (if not the main goal) when refactoring code is that each and every declaration of behaviour should appear “Once And Only Once”.
Do The Simplest Thing That Could Possibly Work :- "All that is complex is not useful. All that is useful is simple." – Mikhail Kalashnikov
Responsibility Driven Design :- Design based on the principle that some class has to be responsible for each task that the system will carry out. The responsible class may collaborate with other classes to carry out its task. (It's probably a good idea if each responsibility occurs Once And Only Once.)

Lazy Programmer

This idiom is related to Albert Einstein's principle: “A scientific theory should be as simple as possible, but no simpler”.

It can be worded as "Do as little work as possible to get the task completed, but no less."
So, complete the task by trying to write the minimum amount of code that satisfies the requirements and passes all the tests. That means not falling prey to the “Not Invented Here”* syndrome and instead embracing the work of others to lessen the amount of work you have to do. It also means embracing standards so that there is more potential to leverage others' work.


In a different vein, this can mean a programmer that practices “Copy And Paste Programming” and “Someone Else’s Example” programming. The programmer does not take the time to understand the code, but simply forces pieces together with minimum effort.

Anything more than cursory testing can be ruled out altogether.

* Any code Not Invented Here is not as good as code that I (or we) wrote

How to best use product backlogs

The product backlog is normally the starting point; it is where Agile's heart lies. It is the first step in starting Agile-based software development. It is nothing but a list of requirements, feature requests, user stories, or whatever term you use for high-level tasks, sorted by the importance of the items in it. The best part is that the items are described in the customer's language, the one he understands, basically the one that reflects the business benefit for him. Let's call these items user stories or backlog items.

Fields associated with each story

SID – Story Identification Number, a unique identifier; you could use an auto-incremented number. This is to keep track of stories even if we change their names.
Name – A short descriptive name for the story, roughly 4-12 words long, e.g. “Generate the employee attendance report”. It should be small, crisp, distinguishable and communicative, so that stakeholders know what we want the system to do. Stakeholders include developers, testers, and the product owner.
Importance – The importance rating given by the product owner. There are many theories circulating about what to use as the rating scale: some suggest P1 to P5, others 1 to 10. I have read an interesting article which says we can use large numbers like 21 or 135, where the higher the number, the higher the importance. The motive behind this is to have a unique importance number for each story. A second supporting argument: say you rated one story as P1 or 1; what would you do if a new story came up during the iteration which is higher in importance than that one? Would you rate it -1 or 0.5? Also, when using high numbers we should leave some gap between ratings; for example, story 1 may have an importance of 10 and story 2 may have 9. This way, if a new story comes up that is lower in importance than story 1 but higher than story 2, we can accommodate it in between without making the sheet look ugly with fractions like 2.5. Nice thought :).
Estimate – The first assessment of how much work is needed to complete this story. The unit is your choice; normally we use a term called story points, which usually correspond roughly to “ideal man-days”.

  • You can ask the team, with an optimum number of people (usually 2), in how many days they can finish a complete, demonstrable and releasable version of this story. If the answer comes back like "with 2 guys we will take approximately 3 working days", then the initial estimate is 3 x 2 = 6 story points.
  • The concentration is not on getting everything absolutely right in the first place, but on getting the relative estimates right, meaning that a 4-point story takes about double the effort of a 2-point story.
How to demo – This is basically a high-level description of how this story will be demonstrated at the Sprint Demo. It is nothing but a simple test spec: “If you do this, and then do that, then you can expect this to happen”.

(Note: If you practice TDD (test-driven development), this description can be used as pseudo-code for your acceptance test code.)

Notes – any other info, clarifications, references to other sources of info, etc. Normally very brief.

PRODUCT BACKLOG (example)



SID  | Name                              | Imp. | Story Points | How To Demo                                                                                                                                   | Notes
1001 | Allocate Employee to Project      | 40   | 7            | Log In, Open Employee Detail Page, Change Employee Department from Bench to "Project X", Check the Employee Status in the Employee Detail Page | Need to create the class hierarchy
1002 | Check Employee Allocation History | 25   | 3            | Log In, Go to "My Detail" page, click on Project History                                                                                       | Need to modify the css for this page.

As the author of this sample Excel sheet said, and I agree, these are the basic 6 fields we need at the end of the day. You are free to add whatever fields you want, but an easy-going product can survive on these fields. I myself am a fan of using Excel as the product sheet; it gives you a lot more freedom and great control. You can keep the product sheet under the control of the "Product Owner", or you can share it on a machine with all team members, allowing them to add new stories as and when they arrive, but always remember that only the "Product Owner" has the right to set importance or write estimates.

HOW WE DO PRODUCT "BACKLOGS"



The last thing you want to do is place the product sheet in the version control repository; a better idea would be to place it on a shared drive. This way you can allow multiple people to access it without causing locks, merge conflicts, etc.

Additional story fields



Sometimes you may want to use a few additional fields in the product backlog based on your project's priorities; I will try to list a few which are pretty common.

Category – A rough categorization of the story, for example “UI” or “performance”. This way the product owner can easily filter out all “UI” items and change their priority, etc.
Components – The best way is to realize these as “check boxes” in the Excel document, for example “client, middleware, server”. The team / Product Owner has to identify which technical components will be involved in completing the story. This is typically useful when you have more than one Scrum team.
Requester – As part of reporting, the product owner may like to keep track of who requested the feature, to keep them updated on the progress.

Bug tracking ID – It is useful to keep track of any direct relation between a story and reported bug(s), especially if you have a separate bug tracking system, as we have with Bugzilla.

How we retain the product backlog focus at a "business level"



It is not ideal to have a Product Owner coming from a technical background, but in our case we couldn't help it; we couldn't find anyone else suitable. The nightmare it caused initially was stories like this one: "Add composite primary key to the crystal_soak tables". Why did he want this? The real underlying goal was definitely something like "disallow users from adding the same soak twice for a particular crystal". It turned out that there was no primary key since a crystal can be associated with more than one soak, and thus users were sometimes associating the same soak more than once by mistake. The story was really about something completely different. The team is normally better suited to figure out how to solve something, so the product owner should focus on business goals.
When you see technically oriented stories like this, you should ask the product owner a series of “but why” questions until you find the underlying goal. Then rephrase the story in terms of the underlying goal (“disallow users from associating a duplicate soak with the same crystal”). The original technical description ends up in the notes column (“Creating a composite primary key might solve this problem”).

Once Upon a Time....

A Chicken and a Pig lived on a farm. The farmer was very good to them and they both wanted to do something good for him.
One day the chicken approached the pig and said, "I have a great idea for something we can do for the farmer! Would you like to help?"
The pig, quite intrigued by this, said, "of course! What is it that you propose?"
The chicken knew how much the farmer enjoyed a good healthy breakfast. He also knew how little time the farmer had to make a good breakfast. "I think the farmer would be very happy if we made him breakfast."

The pig thought about this. While not as close to the farmer, he too knew of the farmer's love for a good breakfast. "I'd be happy to help you make breakfast for the farmer! What do you suggest we make?"
The chicken, understanding that he had little else to offer suggested, "I could provide some eggs."
The pig knew the farmer might want more, "That's a fine start. What else should we make?"
The chicken looked around...scratched his head...then said, "ham? The farmer loves ham and eggs!"
The pig, very mindful of what this implied, said, "that's fine, but while you're making a contribution I'm making a real commitment!"
Chickens and Pigs

On Agile projects the term Pig has come to describe all the developers, designers and testers who commit to the actual work. The term Chicken is applied to everyone else who makes intellectual contributions but does not commit to any work.
