Unveiling the Significance of Measure, Improve, Repeat: Empowering Agility in Today’s World

In today’s fast-paced and ever-evolving business landscape, organizations continually seek ways to optimize their processes and efficiently deliver customer value. One mindset that has gained widespread recognition for the adaptability and iterative nature of its methods and practices is Agile. As an Agile practitioner myself, I firmly believe in the power of the Measure, Improve, Repeat cycle.


In this article, I will delve into the importance of this cycle and highlight a few potential downsides, ultimately emphasizing the significant advantages it brings to the Agile world.

Measure: Setting the Foundation for Success

At the heart of any successful Agile project lies the crucial step of measuring. Agile methodologies rely on gathering relevant data and metrics to gain insights into the team’s performance, identify bottlenecks, and gauge progress accurately. Through careful measurement, we understand what works well and requires improvement, enabling us to make informed decisions.

By establishing a robust measurement framework, Agile teams can track key performance indicators (KPIs) and metrics such as cycle time, velocity, and customer satisfaction. These metrics provide valuable insights into the team’s efficiency, the quality of deliverables, and the overall effectiveness of Agile practices within the organization. The ability to measure progress and adapt accordingly is paramount for continuous improvement.
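To make this concrete, here is a minimal sketch, in Python, of how velocity and average cycle time could be derived from a team’s work-item records (the records, dates, and point values below are invented for illustration):

```python
from datetime import date

# Hypothetical work-item records: (story points, date started, date accepted).
items = [
    (3, date(2015, 1, 5), date(2015, 1, 9)),
    (5, date(2015, 1, 6), date(2015, 1, 14)),
    (2, date(2015, 1, 12), date(2015, 1, 15)),
]

# Velocity: total story points accepted in the iteration.
velocity = sum(points for points, _, _ in items)

# Cycle time: days from start to acceptance, averaged over the items.
cycle_times = [(accepted - started).days for _, started, accepted in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

print(velocity, avg_cycle_time)  # 10 5.0
```

In practice these records would come from the team’s tracking tool, but the arithmetic stays the same.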

Improve: Embracing Iterative Enhancements

Agile is synonymous with continuous improvement; the “improve” phase is pivotal in this iterative methodology. Armed with the insights gained from measurement, Agile teams can proactively identify areas of improvement and take actionable steps to address them. This collaborative and adaptive approach allows teams to optimize their processes, enhance productivity, and deliver better results with each iteration.

Continuous improvement in Agile is not limited to the development process alone; it extends to all aspects of the project, including communication, collaboration, and feedback mechanisms. By fostering a continuous learning and improvement culture, Agile teams can leverage their collective intelligence to find innovative solutions and adapt to changing requirements swiftly.

Repeat: Ensuring Long-Term Success

The final phase of the Measure, Improve, Repeat cycle, “repeat,” encapsulates the essence of Agile’s iterative nature. Agile embraces repetition rather than relying on a one-time process to achieve sustainable success. By continuously measuring and improving, Agile teams can iterate through cycles of development, feedback, and adaptation, ultimately enhancing the overall project outcomes.

Agile’s emphasis on repetition encourages teams to reflect on their successes and failures, refine their processes, and adopt a growth mindset. This iterative approach leads to a virtuous cycle of continuous learning, innovation, and high-quality deliverables.

Potential Downsides: Navigating the Challenges

While the Measure, Improve, Repeat cycle offers immense benefits, it is essential to acknowledge and address a few potential downsides. One challenge is the risk of analysis paralysis, where teams become overly focused on data collection and analysis, losing sight of the larger objectives. Additionally, continuously iterating and making changes can sometimes disrupt the project’s momentum, impacting deadlines and stakeholder expectations. It is crucial to balance measurement and action to avoid these potential pitfalls.

In conclusion, the Measure, Improve, Repeat cycle has emerged as a cornerstone of Agile practices in today’s rapidly evolving business landscape. By measuring key metrics, embracing continuous improvement, and repeating the cycle, Agile teams can foster a culture of excellence, adapt to changing requirements, and consistently deliver customer value. While challenges such as analysis paralysis and maintaining momentum exist, a mindful approach can help mitigate these downsides. Ultimately, the Measure, Improve, Repeat cycle serves as a guiding principle for achieving success in the Agile world, enabling organizations to thrive in a dynamic and competitive environment.

Hyper-Productive Metrics with Jeff Sutherland and Scott Downey

I’m happy to share that I’ve just registered for the hour-long course on Hyper-Productive Metrics with Jeff Sutherland and Scott Downey.

This is happening on August 26th, 2015, and I’m looking forward to learning more about this topic from their experience.

To give you some context, please find the course description below.

Hyper-Productive Metrics Course


“In Scrum, beyond velocity, which metrics matter? Which metrics apply across teams? What do you measure at scale?

In their groundbreaking paper Scrum Metrics for Hyper-Productive Teams: How they Fly Like Fighter Aircraft, Scrum Inc. CEO Jeff Sutherland and legendary Agile coach Scott Downey of Rapid Scrum created best practices for accelerating Scrum teams and the metrics used to fine-tune them.

Join Jeff and Scott August 26th at 11:00 EDT for an hour-long course with a live Q&A follow up. See how they’ve iterated on the original metrics and what they’ve learned as they have further applied them.” [1]

References:

[1] Hyper-Productive Metrics | scruminc.

AGILE ADRIA CONFERENCE 2015 :: TALKS “How to go beyond Scrum” and “Agile Performance Metrics”

It was with great pleasure that I found this out yesterday.

“It is a great pleasure for me to tell you that your talk proposal for Agile Adria 2015 was accepted. We’re looking forward to meet you in April.”

I’m really happy to be giving the two talks described below:

Also, I’ll get to share and learn with everyone!

Nevertheless, now it’s time to start working and to find sponsors for the trip 😉

Should we establish Business Value for each Work Item?

One of my previous posts was related to Agile Performance Indicators and their use in understanding team and business behaviour.

Since then, I haven’t stopped thinking about how we could improve one of these performance indicators: the “Delivered Work Items”.

As everyone knows, when we deliver a work item we are delivering, or should be delivering, value. Nevertheless, I agree this is a complex and very subjective metric to come up with.


After surfing the web a bit, I found an interesting article on this topic: why not establish Business Value Points for each work item?

Roman Pichler, in his book Agile Product Management with Scrum, says:

“Value is a common prioritisation factor. We certainly want to deliver the most valuable items first.” [1]

Well… I do believe that, with this approach, Product Owners have a better way to understand each work item and its priority for implementation by the teams.

Nevertheless, after discussing this subject and the article, I received the following question:

“What happens when a team starts to consistently get work items that have low points? Do they get demotivated?”

My thoughts and my answer to this question were:

A team shouldn’t just receive user stories with low Business Value Points. What can we do to change that?

Also, should we implement a feature that has low business value, or should we pick up features with more business value?

What are your thoughts about this subject and approach?

References:

[1] Roman Pichler | Agile Product Management with Scrum: Creating Products That Customers Love 

Agile Performance Metrics… Are they useful or not? Which should we use?

Today’s post is related to a hot and controversial topic: Agile Performance Metrics.

Why? Well, as you might be aware, different stakeholders have different opinions when it comes to Agile Performance Metrics.

  • Why should we use them?
  • What value do they bring?
  • How should they be applied?

In all the discussions I’ve had on this subject over time, I’ve encountered people with differing opinions, as described below:

  • We shouldn’t use performance metrics because they don’t bring any value.
  • We shouldn’t use performance metrics because stakeholders and management will start to compare teams.
  • We shouldn’t use performance metrics because we don’t have indicators that show the real work of the teams.
  • Etc.

Nevertheless, I also encountered positive opinions, as described below:

  • They are great, since we can understand what is going on and tackle issues as soon as possible.
  • We can understand business and team behaviour.
  • The information is already there, and it gives us a clear view of what is going on.
  • We can be assertive when we need to take decisions, since we have detailed information.
  • Etc.

The textbook definition tells us that “A performance metric is that which determines an organization’s behavior and performance. Performance metrics measure an organization’s activities and performance. It should support a range of stakeholder needs from customers, stakeholders to employees.” [1]

That said, my opinion is that the primary objective is to provide teams and the business with metrics that enable them to track their performance over time against their own previous performance.

Why should we use them?

  • We should use them because they help us measure the impact of our initiatives.
  • For example, when focusing on quality we expect to see a drop in defects; if we don’t, our strategy isn’t working and we need to try something else.
  • Again, emphasising that this is not about comparing teams; it’s about looking at our wider initiatives and checking whether they are having the desired effect.
  • This allows us to achieve our common business objectives by taking action at the right time and increasing the performance of all our teams.

Many people believe that, from an Agile perspective, we should only use velocity as a metric, meaning that with this metric alone they can understand business and team behaviours.

Well… I agree that velocity is a good metric, but I also believe that, in order to keep improving, we should implement new indicators to obtain more accurate information about those behaviours.

That said, in my opinion we should use the metrics described below:

  • Delivered Work Items

As you are aware, one of the Agile Principles is the continuous delivery of valuable software. [2]

For that we have the acronym INVEST, which originated in a 2003 article by Bill Wake. It helps us remember a widely accepted set of criteria, or checklist, to assess the quality of a user story. If a story fails to meet one of these criteria, the team may want to reword it, or even consider a rewrite (which often translates into physically tearing up the old story card and writing a new one). [3]

A good user story should be:

  • “I” ndependent (of all others)
  • “N” egotiable (not a specific contract for features)
  • “V” aluable (or vertical)
  • “E” stimable (to a good approximation)
  • “S” mall (so as to fit within an iteration)
  • “T” estable (in principle, even if there isn’t a test for it yet)

That said, in my opinion Delivered Work Items measures how much work a team delivers over time, based on the number of work items accepted per team member.

More work items delivered means more business value has been delivered or more technical debt has been reduced.

Never forget that points are a relative measure for each team, and that each item is accepted according to the Definition of Done (DoD).

What does this imply?

We want to give value to smaller work items (INVEST – Independent, Negotiable, Valuable, Estimable, Small, Testable).


Throughput is the number of items completed over a period of time per full-time person on the team. In this variant it’s the number of work items moving into the schedule state “Accepted” minus the number of work items moving out of “Accepted” within the timebox, divided by the number of full-time equivalents.
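As a sketch, that definition translates into a few lines of Python (the figures here are hypothetical):

```python
def throughput(moved_in, moved_out, full_time_equivalents):
    """Net work items reaching "Accepted" in the timebox, per full-time person."""
    return (moved_in - moved_out) / full_time_equivalents

# E.g. 14 items moved into "Accepted", 2 moved back out, 5 full-time members:
print(throughput(14, 2, 5))  # 2.4
```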

Why is this important?

Throughput shows how well work is moving across the board to accepted. This can be adversely affected if work items are frequently blocked or inconsistently sized.

  • Consistency of Delivered Work Items

Consistency of Delivered Work Items measures how consistent a team is at producing work over time. High consistency means stakeholders can confidently plan when work will be delivered.

This is also an indication that teams are better at estimating their work and flowing consistently-sized increments of value.

The Consistency score is a percentile ranking derived from a three-month window of a team’s Variability of Throughput. Predictability can also be configured to include the variability of velocity or the standard deviation of time in progress.

What does this imply?

Variability of throughput is computed by finding the average (mean) and standard deviation of throughput for 3 month time periods. The coefficient of variance is the standard deviation divided by the mean.


For the Throughput StdDev and Throughput Mean, we could use the last three months.
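Using Python’s statistics module, the calculation above could be sketched as follows (the monthly throughput figures are made up):

```python
from statistics import mean, stdev

# Hypothetical throughput per FTE for each of the last three months.
monthly_throughput = [2.4, 2.0, 2.8]

# Coefficient of variance: standard deviation divided by the mean.
# A lower value means a more consistent team.
cv = stdev(monthly_throughput) / mean(monthly_throughput)
print(round(cv, 3))  # 0.167
```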

Why is this Important?

Consistency is critical to accurately planning delivery dates and budgeting.

The variability of Delivered Work Items shows if your team delivers the same amount of work each month. This can be adversely affected if Work Items are frequently blocked or inconsistently sized.

  • Work Items Lead Time

Work Items Lead Time measures the ability of a team to deliver functionality soon after it’s started. 

High Lead Time can dramatically impact the ability of businesses to react to changing markets and customer needs.

What does this imply?

The Lead Time score is a ranking calculated by the amount of time required for a team to take a single story or defect from start to finish.


P50 is the median (sort the values and pick the middle one).

Note: we use the median rather than the average because occasional irregularities would skew the average (for example, a single user story that took two weeks to complete).
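A quick Python sketch shows why the median (P50) is preferable here (the lead times, in business days, are invented; note how the two-week outlier pulls the average up while barely moving the median):

```python
from statistics import median

# Hypothetical lead times in business days, including one two-week outlier.
lead_times = [3, 4, 2, 5, 3, 10]

p50 = median(lead_times)                     # 3.5
average = sum(lead_times) / len(lead_times)  # 4.5
print(p50, average)
```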

Why is this Important?

Low Lead Time is central to Agile. Teams that are more responsive have faster time to market, and react more quickly to customer feedback.

Time in Process is the number of business days a typical user story spends in the “In-progress” or “Completed” column. It is similar to lead time, cycle time, and time to market.

Accurate planning and budgeting require work items that are estimated to be small, testable, and easily deliverable. Time in Process shows how long the median Work Item stays in process.

  • Quality

Quality measures how disciplined a team is in preventing defects.

What does this imply?


All defects should be assigned to each team according to their Product/Component ownership and User Story. In the worst case, when we are unable to do that, I believe it’s fairer to count all defects for the Product/Component and then split them across all the teams working on it (this means that every team inside the same Product has the same Quality score).
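As a sketch of that fallback, assuming only a product-level defect count is available, the even split could look like this in Python (the function name, teams, and figures are hypothetical):

```python
def quality_per_team(product_defects, teams):
    """Worst-case fallback: split a Product/Component's defect count
    evenly across all the teams working on it."""
    shared = product_defects / len(teams)
    return {team: shared for team in teams}

# E.g. 12 defects for a product built by three teams:
print(quality_per_team(12, ["Team A", "Team B", "Team C"]))
# {'Team A': 4.0, 'Team B': 4.0, 'Team C': 4.0}
```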

How can we combine these four metrics in order to understand the global behaviour?

Please find below two examples of how we could combine the four metrics to get a simple, transparent, and quick view that can be read by all stakeholders, including Management.

  • Performance Chart (Higher is better)

Performance Chart

This chart provides a way for the business and the teams to track their performance over time against their own previous performance.

  • Overall Chart (Higher is better)

In the Overall Chart, each of the four metrics described above weighs 25% of the performance, totalling 100%. Once more, this provides a way for the business and the teams to track their performance over time against their own previous performance.

Overall Performance Chart
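As a sketch, the 25% weighting behind the Overall Chart could be computed like this (the normalised scores below are invented for illustration):

```python
# Hypothetical normalised scores (0-100) for the four metrics.
scores = {
    "delivered_work_items": 80,
    "consistency": 70,
    "lead_time": 90,
    "quality": 60,
}

# Each metric weighs 25% of the overall performance, totalling 100%.
overall = sum(value * 0.25 for value in scores.values())
print(overall)  # 75.0
```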

It’s important to keep in mind that each metric should be calculated monthly. This way we avoid any issues caused by teams having different Sprint start and end dates.

Why is this important?

Sustainable delivery of value requires a consistently high level of Quality for all work delivered to production.

References:

[1] Wikipedia | Performance Metric

[2] Agile Manifesto | Principles behind the Agile Manifesto

[3] Agile Alliance | Invest