A presentation given at MTUG 2016. The recording is also available on the Code Genesys website: http://www.codegenesys.com/start-with-quality/
Some inspiration came from Henrik Kniberg's presentation. As it happens, we didn't know of his work until after we had implemented our approach, and it turned out to be remarkably similar. We credit his material in the slide deck as well.
2. Agile mindset
Before:
Product changes were based on internal SMEs deciding what needed to change, which was usually driven by noisy customers, gut feel and stabs in the dark.
The only metrics measured were budgeted sales vs actual sales, and churn. The team was held accountable for missing targets.
Missed release deadlines were blamed for missed sales targets, and poor developer productivity was blamed for missed delivery deadlines.
Development cycle of about 15 weeks of development, 3-5 weeks of testing and then a release.
The team took no responsibility because they were held accountable to deadlines they didn't agree to, and for solutions & features they weren't consulted on.
What we often saw was short-term solutions ending up in long-term problems.
5. Agile mindset
Now:
Autonomy and accountability. The manager's role is to embrace collective responsibility and to help their team excel and develop into leaders.
Measure our economic progress. Understand what we value: customer growth, revenue growth, user engagement, higher employee retention.
Do a value proposition canvas and a lean canvas business model for all new products and large features. Is this the right action at the right time?
Have a hypothesis for each new feature and compare progress metrics with actual outcomes. Have metrics indicating business health (LTV, CAC, LTV:CAC, MRR, Churn, Customer Happiness Index).
Experiment with changes to the way we work, then measure to test our hypothesis: constraining the feature pipeline, ready criteria, done criteria, agile testing methodology, moving towards continuous testing and deployment.
Limit work in progress: agile testing, deployment to test servers every week, deployment to production every sprint (2 weeks).
Distributed leadership and responsibility, where we help each other excel.
Digging deeper to find systemic causes, allowing us to come up with more permanent solutions.
6. [Timeline diagram: Week 1 to Week 8]
Before: test at the end. Weeks of development followed by a long "Test / Fix %&@#!" scramble, then the release.
Next: test after each sprint and at release time. Shorter test/fix cycles during development, with time saved before the release.
What we do now: test all the time & release after every sprint.
8. How do you ensure that your product works?
1. Understand the problem
   Who are the stakeholders?
   What need do they have that we want to solve?
   How will we know when we've solved it?
   How will we know if we're moving in the right direction?
2. Iterate until you've solved it
   Minimise the distance to MVP
   Deliver, measure, adjust continuously
– Henrik Kniberg
9. The Value Proposition Canvas
[Pipeline stages: Document, Plan, Constrain Pipeline, Prioritise, Build MVP, Iterate to PMF, Optimise]
10. The Business Model Canvas
11. Constrain the Feature Pipeline
12. Make sure the backlog items are testable & valuable
As a Consultant
I want to assign soil sample collection jobs to a user
so that collectors know what jobs are assigned to them.
Prototype: How to demo/test done:
1) Select one or more jobs from the unassigned jobs list.
2) Click "assign jobs".
3) Start typing a user's name, and/or
4) Select the user from the list of users.
5) Click "Assign to selected user".
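The demo/test script above can be turned into an automated acceptance-style test. The sketch below is purely illustrative: the `Job`, `JobBoard` and `assign_jobs` names are hypothetical, not from the actual Agworld codebase, but they show how the story's "so that" clause becomes an assertion.

```python
# Hypothetical sketch of an automated test for the "assign soil sample
# collection jobs to a user" story. All names are illustrative.

class Job:
    def __init__(self, job_id):
        self.job_id = job_id
        self.assignee = None

class JobBoard:
    def __init__(self, jobs):
        self.jobs = {j.job_id: j for j in jobs}

    def unassigned(self):
        return [j for j in self.jobs.values() if j.assignee is None]

    def assign_jobs(self, job_ids, user):
        # Steps 1-5 of the demo script: select jobs, pick a user, assign.
        for job_id in job_ids:
            self.jobs[job_id].assignee = user

def test_collector_sees_assigned_jobs():
    board = JobBoard([Job(1), Job(2), Job(3)])
    board.assign_jobs([1, 2], "alice")
    # The "so that" clause: collectors know what jobs are assigned to them.
    mine = [j.job_id for j in board.jobs.values() if j.assignee == "alice"]
    assert mine == [1, 2]
    # The remaining job is still in the unassigned list.
    assert [j.job_id for j in board.unassigned()] == [3]

test_collector_sees_assigned_jobs()
print("ok")  # prints "ok"
```

Writing the test from the demo steps, rather than from the implementation, keeps the backlog item testable in exactly the sense the slide describes.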
17. Set working agreements for test automation & explicitly plan testing during sprint planning
– Henrik Kniberg
18. Test backlog
Step 1: Decide what needs to be tested
● Add new sample job
● Delete sampling job
● Assign sample job to a user
● Unassign sample job from a user
● Mark a sample job as complete
● Filter sample jobs
● Sort sample jobs
● Which jobs do I have to do?
● Viewing a sample collection job
● Editing a grid on a sample collection job
19. Step 2: Classify each test

Test Case                       | Risk | Manual Test Cost | Automation Cost
Add new sample job              | High | 0.5 hrs          | low
Delete sampling job             | High | 0.5 hrs          | low
Assign sample job to a user     | Med  | 2 hrs            | low
Unassign sample job from a user | Med  | 0.5 hrs          | low
Mark a sample job as complete   | High | 0.5 hrs          | low
Filter sample jobs              | Med  | 0.5 hrs          | high
Sort sample jobs                | Low  | 0.5 hrs          | low
Which jobs do I have to do?     | Med  | 2 hrs            | high
Viewing a sample collection job | High | 2 hrs            | low

Manual test cost is paid every time; automation cost is paid once.
20. Step 3: Sort the list

Test Case                       | Risk | Manual Test Cost | Automation Cost
Viewing a sample collection job | High | 2 hrs            | low
Assign sample job to a user     | Med  | 2 hrs            | low
Mark a sample job as complete   | High | 1 hr             | medium
Add new sample job              | High | 0.5 hrs          | low
Delete sampling job             | High | 0.5 hrs          | low
Unassign sample job from a user | Med  | 0.5 hrs          | low
Sort sample jobs                | Low  | 0.5 hrs          | low
Filter sample jobs              | Med  | 0.5 hrs          | high
Which jobs do I have to do?     | Med  | 0.5 hrs          | high

Automate first!
Automate later!
Don't bother automating
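The step-3 sort can be expressed as a simple scoring function: high-risk, expensive-to-run-manually tests that are cheap to automate rise to the top ("automate first!"), while low-risk or expensive-to-automate tests sink. This is a minimal sketch; the scoring weights are illustrative, not taken from the talk.

```python
# Hypothetical sketch of sorting a test backlog by automation priority.
# Weights are made up for illustration.

RISK = {"High": 3, "Med": 2, "Low": 1}
AUTOMATION_COST = {"low": 1, "medium": 2, "high": 3}

# (test case, risk, manual cost in hours per run, automation cost)
tests = [
    ("Add new sample job", "High", 0.5, "low"),
    ("Filter sample jobs", "Med", 0.5, "high"),
    ("Sort sample jobs", "Low", 0.5, "low"),
    ("Viewing a sample collection job", "High", 2.0, "low"),
]

def priority(case):
    name, risk, manual_hrs, auto_cost = case
    # Higher risk and higher recurring manual cost push a test up;
    # a higher one-off automation cost pushes it down.
    return (RISK[risk] + manual_hrs) / AUTOMATION_COST[auto_cost]

for name, *_ in sorted(tests, key=priority, reverse=True):
    print(name)
```

Run on the sample above, "Viewing a sample collection job" comes out first and "Filter sample jobs" last, matching the intuition in the sorted table: pay the one-off automation cost where it eliminates the biggest recurring manual cost.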
21. Bug fixing process
Before: Bug found! Log it, then fix bugs during release testing.
Now: Bug found! Don't log it. Fix it NOW!
24. Metrics Before and After
• Defects
• Customer Satisfaction
• Employee Satisfaction
• Customer Retention
• Sales
• New Customers
• Less work, better results
26. Agile is a direction, not a place
The important thing isn't how you work.
The important thing is how you improve the way you work!
28. Training events, including Agile Testing, are listed on our website:
http://www.codegenesys.com/training-events
Editor's Notes
When I started at Agworld I was told that the company followed an agile approach. When I started work what I discovered was:
Because the development team had sprints, standups, sprint retrospectives and so on, they said they were an agile development environment. In fact there were no inspect-and-adapt feedback loops, because a few individuals in the company decided what features needed to be done (based on speculation) and by when (based on when the company would suffer commercial consequences).
Much of this can be clearly seen in our metrics and stats that we have tracked over the last couple for years.
Defects created is red and defects fixed is green
There are many more things that we have seen change as a result of the changes in the way we work:
Tester discovering requirements before coding begins rather than when code is finished
Finding problems earlier
40% fewer defects than in the previous 12 months, so we are more successful at preventing defects, which allows us to spend more time on other things.
Developers say they are better able to just get on with developing because defects are found earlier, while the code is still fresh in their minds. This makes the turnaround of fixes much quicker.
Although we still choose to release with known minor defects, we have had releases with zero known defects.
Customers regularly phone with unsolicited feedback on how happy they are with the product and have noticed improved quality
We are always looking for ways to improve the way we are working and will always find things we can do better
LTV – Lifetime Customer Value
CAC – Customer Acquisition Cost
MRR – Monthly Recurring Revenue
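The relationship between these metrics can be shown with a small worked example. All figures below are made up for illustration, and the LTV formula used is one common simplification (monthly revenue per customer divided by monthly churn rate), not necessarily the one the team used.

```python
# Illustrative arithmetic for the business-health metrics in the talk.
# All numbers are invented for the example.

mrr = 50_000.0                    # Monthly Recurring Revenue
monthly_churn = 0.02              # 2% of customers lost per month
avg_revenue_per_customer = 100.0  # monthly revenue per customer
cac = 1_500.0                     # Customer Acquisition Cost

# Simple LTV approximation: average customer lifetime is 1 / churn
# months, so lifetime value = monthly revenue / monthly churn rate.
ltv = avg_revenue_per_customer / monthly_churn
ltv_to_cac = ltv / cac

print(ltv)                   # 5000.0
print(round(ltv_to_cac, 2))  # 3.33
```

An LTV:CAC ratio above roughly 3 is often cited as a sign of a healthy SaaS business model, which is why the ratio appears alongside the raw figures.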
We needed to close the distance between coding and testing. We'd often code for 14 weeks before any testing. We closed this gap by testing the previous sprint's work at the beginning of the next sprint, but we only logged the bugs in our defect tracking system and still left the bug fixing to final release testing time.
We didn’t really connect with the fact that the work wasn’t complete when there were known defects and we still saw finding bugs as QA’s responsibility. We had to move our mindset to defect prevention, bugs take priority over new code and quality is everyone's concern.
We had many of the team attend an agile testing course run by Nick Zdunic from aboutagile and that helped us start taking the shift to bug prevention.
Where we are now: continuous testing as developers finish stories, fixing bugs takes priority over new code, and we release at the end of every sprint. There are no more bug-fix releases, because all the changes we have made have led to fewer defects in the product, and we prevent new defects rather than finding and fixing them. This means the release every sprint includes bug fixes and is frequent enough. Now every sprint includes 20% bug fixing from the outset, plus a percentage for technical debt. I hope to share some of our journey.
What we ended up doing was testing the development build every week and logging the bugs in our defect tracking system, but still not fixing them until all the coding was complete and we were ready for release testing.
These are the general steps in building a product, and there were some things we needed to do better: specifically, understanding the problem, and, once we had shipped the MVP, remembering that we need to iterate to product-market fit. Basically, we couldn't test whether the product did what was intended because we started building it before fully understanding what we were solving for our customers. That meant we didn't really have a definition of done, and therefore testers didn't know how to test whether the requirements were met.
A couple of things we felt we were not doing as well as we could: building the right features, or at least building them in the right order; and always trying to do more than our capacity would allow, and therefore always overwhelming the team. What we wanted was a way to think better about:
whether we really understand the problem and had done the critical thinking required
whether working on a specific feature was the right thing to do at that point in time
whether what had been learnt while doing the previous two items could be easily and effectively communicated to the rest of the company, so that when the product was being built there was a clear understanding of these things and therefore a clearer understanding of how to test.
This also led to other side effects.
As a by-product of using the MVP thinking process, we discovered that many features were not actually used. We removed one feature, 800,000 lines worth of code.
We realised we had been building things no one wants, creating a bloated code base which we still had to maintain; for example, fixing failing tests when upgrading Ruby.
Saved time – take that to the bank. This talk is about testing, and removing that feature meant we were no longer regression testing a feature that nobody used, leaving more time for exploratory testing during development.
How we did it
We started looking at some of the tools people have been using to help with these things and adopted the ones we felt would work for us. We started using the Value Proposition Canvas and the Lean Canvas to help us do the learning and critical thinking required to decide if a certain action was the right action at that time. These tools are also great ways to simply communicate the process of discovery that has happened and a summary of what was discovered.
This process naturally helped constrain the feature pipeline because rather than trying to build everything we could think of, we first decided whether it was the right action at the right time.
Running Lean.
Do the steps, in a structured way. Previously we arrogantly thought we understood the problem.
Our attitude was ‘Yeah, we get it.’
Do we have a value proposition?
This is a structured way to help design, test and build the value proposition during the process of designing a business model. It is supposed to offer an easy-to-understand overview of what a customer needs and why; the pains and aspirations of the customers; and whether the designed solution truly solves the customers' problems and fulfils the promise of doing something in a better way.
For problem-solution fit, the value proposition should neatly map to customer needs. First, the product should accomplish the majority of the customer's jobs. Second, the product should eliminate the customer's pains. Last, the product should enable the customer to do more than they could before.
The Value Proposition Canvas should be completed in a way that shows how the designed products and services map to customers' jobs and pains, so that it serves as a clear communication tool to stakeholders. This document serves as the highest-level specification for the solution.
Assuming we have decided that it is a problem worth solving, what metrics will we use to track our progress?
This is a structured way to help design, test and build a business model for the value proposition. If we have determined through the Value Proposition Canvas that we have a customer value proposition, we then need to determine whether there is a business model to sell this value proposition to customers and make money. The objective of the Business Model Canvas is to make the business model as actionable as possible – a ground up tactical plan to guide us as we navigate our way from idea to building a successful product. How we help to make this more actionable is to try to capture that which is most uncertain (and also most risky).
Anecdote: we don't always get it right. There is a need, but how much of a need? Metrics are now giving us a better idea.
Survey Anecdote: Farmers given 10 options. Each option scored 10%. Not always easy. We decided to do nothing.
Kanban is a method for managing knowledge work with an emphasis on just-in-time delivery while not overloading the team members.
The throughput of the pipeline as a whole is limited. Kanban in its simplest form limits work-in-progress, at each step in the process. This prevents over production at any point in the process and reveals bottlenecks dynamically so that you can address them before they get out of hand.
At our current size Agworld can only work on 3 initiatives simultaneously and we need to constrain the number of initiatives entering the operational and implementation phases. Existing features constantly require maintenance and improvement and one of the three initiatives in progress will always be improving existing features. Therefore it is possible for only two new initiatives to be in progress simultaneously with improving existing features work.
Our development board had several columns, and we discovered work building up in Code Review. We put in a WIP limit to ensure flow and changed to a mindset of focusing on completion. We started reaching sprint goals.
We can make a difference by small changes. WIP limit is small but a significant difference.
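The WIP-limit mechanic described above is simple enough to sketch in code. This is a minimal illustration of the pull model, assuming a card can only enter a column while the column is under its limit; the `Column` class and card names are hypothetical.

```python
# Minimal sketch of a WIP-limited Kanban column. Names are illustrative.

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def can_pull(self):
        # A card may only be pulled in while we are under the WIP limit.
        return len(self.cards) < self.wip_limit

    def pull(self, card):
        if not self.can_pull():
            raise RuntimeError(
                f"{self.name} is at its WIP limit; finish something first"
            )
        self.cards.append(card)

code_review = Column("Code Review", wip_limit=2)
code_review.pull("story-101")
code_review.pull("story-102")
print(code_review.can_pull())  # False: the bottleneck is now visible
```

The point of the limit is exactly what the refusal shows: instead of work silently piling up in Code Review, the blocked pull forces the team to finish something before starting something new.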
Customers can look at features whilst in development – decreasing the feedback loop.
First demo is the User Story Map.
Build a Minimum Viable Product and Verify
We should have already determined that there is a problem worth solving and designed a high level solution, however, we don’t know whether that solution will resonate with the majority of customers. Therefore we want to do the least possible amount of work to reach a point where we can verify that the solution resonates with customers. We want to do a number of validated learning loops:
Formulate a testable hypothesis
Build (maximise for speed, learning and focus)
Measure and Learn (validate qualitatively, verify quantitatively; create accessible dashboards and communicate early and often)
Everyone needs to understand from the beginning; not understanding at the beginning leads to more work, and understanding can't start when QA starts. Make sure that things are written in a way that the relevant people know how to test. On the left we have the who, what and why; on the right we have the steps to get there in the software.
Not too bad as we are all physically near each other, but we had no permanent teams. We would create teams based on what work was in the current roadmap. It takes time to develop the bonds of loyalty and trust, so teams that work together longer can develop these bonds you need for real candour and openness. Not having a QA person part of a development team can affect the quality mindset of the team.
We now have permanent cross-functional teams that usually co-locate. Sometimes people move to a different location for a short period of time, and sometimes the testers all sit around the same desk when doing regression testing before a release.
Environment is important!
Remote developers and testers: they participate in ceremonies, pair programming with remote workers, hackathons.
The process needs to support fast delivery to test, customers etc
Set working agreements for test automation. Explicitly plan your test automation. Do exploratory tests. Efficiency in the processes means more time for exploratory testing.
Need all the bits to be able to test during development as soon as possible, as well as get customer feedback.
All improved in big ways
From - Great Boss Dead Boss
Throughout the entire organisation
Being able to achieve these kinds of changes at Agworld was only possible because I have a great boss who is dedicated to me and my team succeeding, and it was important to get his backing and support before making significant changes to how we work. Many of the changes needed to enable agile testing came before development started (understanding the problem better, etc.) and therefore needed the support of my boss to be achieved.
I recently came across a presentation by Henrik Kniberg showing the path he followed, and felt that we had followed a similar path without knowing it. This seems to provide evidence of the benefits of agile testing, since we both followed similar paths and got the same outcomes.