New ways of working that help you grow and succeed

The No-Nonsense Agile Podcast

Quality in the Agile world

Summary

Join Murray Robinson and Shane Gibson in a no-nonsense Agile discussion. In this podcast, Murray and Shane discuss quality in the Agile world. We define quality and explain why it is everyone's responsibility: how testing and development are integrated in one Agile team, shifting testing left, defining Ready and Done, why waterfall projects have huge numbers of defects, whether there is any role for a specialist tester in an Agile team, test automation, tech debt and aiming for zero defects, and test guilds and communities of practice.

Shane Gibson 

Welcome to the No-Nonsense Agile Podcast. I'm Shane Gibson.

Murray Robinson 

And I'm Murray Robinson.

Shane Gibson 

Today, we're going to talk about quality.

Murray Robinson 

Yes, we are. 

Shane Gibson 

For me quality is such a big area, I don't know whether we're going to talk about quality assurance and how that role fits within an Agile team, or quality overall. Should we aim for quality up front, or should we wait for our customers to find the things they don't like about the product, as a quick way of testing the market? You start us off. Where do you want to start in the world of quality within Agile?

Murray Robinson 

I have mixed feelings about what you said. An Agile team should aim for zero defects or very few defects in production. But if you don't know for sure what your customers want, then it is a good idea to get something out to show them as soon as possible and get their feedback. But anything you put out to people should work or appear to work.

Shane Gibson 

Within the software world, there's the idea that if you have a pulldown widget on your screen, then a good test would be that a little filter should pull down with a list of values, and when you click on one of those values the system should do the thing that you expect it to do. For me those tests are easier to write than tests for data. To write a test for data, we have to know, say, how many customers there were on the first of December at 1 pm. But when we are testing data we often don't know what the actual answer should be. Let's rip into it. How do we ensure quality?
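One common way around "we don't know the true answer" is to test for consistency rather than an absolute value: check that the loaded data reconciles with its source as at a point in time. A minimal sketch of that idea, with invented in-memory data standing in for a real source system and warehouse table:

```python
from datetime import datetime

# Hypothetical stand-ins for a source system and the loaded warehouse table.
source_rows = [
    {"customer_id": 1, "signed_up": datetime(2021, 11, 30)},
    {"customer_id": 2, "signed_up": datetime(2021, 12, 1)},
    {"customer_id": 3, "signed_up": datetime(2021, 12, 1)},
]
warehouse_rows = list(source_rows)  # what the pipeline actually loaded

def count_customers_as_at(rows, as_at):
    """Count customers who had signed up on or before the given timestamp."""
    return sum(1 for r in rows if r["signed_up"] <= as_at)

def counts_reconcile(as_at):
    """We rarely know the 'true' count, so test source-vs-target consistency instead."""
    return count_customers_as_at(source_rows, as_at) == count_customers_as_at(warehouse_rows, as_at)

assert counts_reconcile(datetime(2021, 12, 1, 13, 0))  # 1 pm on the first of December
```

The point is that a data test can pin down a relationship (source equals target, totals balance, no duplicates) even when nobody can state the exact expected number in advance.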

Murray Robinson 

Let's define what quality is, that might be a good start. For me, quality is something that is acceptable to your customer, user, internal client or product owner. They define quality because you're doing things for them and the things you produce have to meet their needs. You need to find out what they need, and have some agreement on what it is that you're going to do. I would call that the definition of done. Then you should make sure that you do what you agreed to do.

Shane Gibson 

I have a slightly different approach. When I'm working with a team, I get them to define the definition of Ready, the definition of Done and the definition of Done, Done. Funnily enough, I'm now working on a definition of Done, Done, Done. Our definition of Ready says that the work we're about to do is ready to be worked on: the acceptance criteria have been written, backlog grooming has been done, and there's a bunch of things that we want to check to make sure we're ready to go. The definition of Done is the team's standards for getting things ready for acceptance by a product owner. It includes peer review, performance testing and other things the team expects to happen before they give it to somebody else to review. The definition of Done, Done is typically something around the acceptance criteria that the product owner or the stakeholders have given the team. It means making sure that it meets the objective we were given. For me, those are the things that have to be put in place to make sure we have a quality product coming out the other end.

Murray Robinson 

And I guess Done, Done, Done means it's working in production for people to use.

Shane Gibson 

It's one step further. It means that we delivered the value that was expected from that piece of work. If we're changing a sign-up screen to get 5% more sign-ups, then we are Done, Done, Done when that value is achieved.

Murray Robinson 

That's a fourth done, done, done, done.

Shane Gibson 

For me, pushing to production is a mechanic that should be done by default. It shouldn't be something we celebrate. It should just happen.

Murray Robinson 

I agree that there's a difference between having something working in production as it should and having something that meets the business objective. It can take quite a while to find that out, though. You don't usually know that straightaway. You've got to run A/B tests to see whether you improved your sign-up percentage.
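The "did we achieve the 5% uplift" check Murray and Shane are circling can be sketched as a simple conversion-rate comparison. The visitor and sign-up counts below are invented for illustration, and a real A/B readout would also need a statistical significance test, which this sketch omits:

```python
def conversion_rate(signups, visitors):
    """Fraction of visitors who signed up."""
    return signups / visitors

def lift(old_rate, new_rate):
    """Relative improvement of the new variant over the old one."""
    return (new_rate - old_rate) / old_rate

# Hypothetical A/B test counts.
old = conversion_rate(signups=200, visitors=10_000)  # control: 2.00%
new = conversion_rate(signups=212, visitors=10_000)  # variant: 2.12%

goal_met = lift(old, new) >= 0.05  # the 5% uplift target from the conversation
```

This is the sense in which Done, Done, Done takes time: the feature can be live and defect-free long before enough traffic has accumulated to compute these rates with any confidence.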

Shane Gibson 

It's hard as a product owner to define good acceptance criteria that we can use to make sure we meet the goals. That's hard and then defining a business metric that we would expect to move as a result of this investment is hard as well. I'm not saying it's easy.

Murray Robinson 

I would say that a 5% increase in sign-ups isn't a measure of quality. It's a measure of a business outcome. I wouldn't say that the team isn't done until they've increased sign-ups by 5%. It could be a sprint goal or a project goal, but it's not the measure of the quality of a feature. I'd say this feature was done but it didn't achieve the goal as much as we wanted, so now we're going to do another iteration of the feature.

Shane Gibson 

For me the definition of Done, Done, Done doesn't measure the delivery team, it measures the product owners within the organisation. We often see teams spend a large amount of time getting data ready, creating metrics and putting them on dashboards, but when we monitor that dashboard, nobody uses it. There needs to be some rigour around the quality of how we make decisions and what we invest in that goes beyond the delivery team. They're doing the thing that's next on the priority list; it's the rest of the organisation that's defining it. For me, that's why Done, Done, Done is important. It's about measuring the outcome we achieved versus what the organisation expected.

Murray Robinson 

Okay, I suppose that's a measure of the quality of the requirements you're getting from the product owner to make sure that we achieve the business goal.

Shane Gibson 

What's the point in spending that money without achieving the business goal?

Murray Robinson 

You can have defects in the requirements, design, architecture, code, integration and implementation. There are defects everywhere. Normally, when people talk about defects, they mean defects in the code and downstream from there. Generally, people ignore defects in requirements, design and architecture. They usually say that those are change requests.

Shane Gibson 

Quality assurance is hard. I've seen developers do peer reviews where they say, "Use tabs, not spaces, to format your code." Then they say, "I ran the code and it executed and did not fail. Peer review done."

Murray Robinson 

That seems low value for a peer review.

Shane Gibson 

The quality of the code is questionable. We didn't look at how fast it ran. We didn't look at whether it was producing a result that met the acceptance criteria. We didn't look at how maintainable the code is. We didn't look at how well documented it was. There's a whole lot of QA tasks that we should do over and above "it's nicely formatted and it runs without an error".

Murray Robinson 

My experience with waterfall projects is that you get large numbers of defects at the end. I was running a large waterfall project for a telco about 12 years ago, with a large, well-known Indian service provider doing the coding for us. When we got into acceptance testing we found thousands of defects. And we were getting defects on defects. When we tested defect fixes we found they'd only fixed half of the problem and the other half was still there. We triaged the defects into severity 1, 2, 3 or 4 buckets. Severity four is a trivial user experience issue. Sev three is a minor functional thing you can work around. Sev two is a major functional problem that you can work around. Sev one is a major functional problem that you can't work around: basically, the application does not work. We said that in order to deploy we had to fix all the sev ones and twos and some of the sev threes. Our stakeholders decided that it was more important to hit the date than to fix all the problems, so we went into production with 200+ known defects that Ops had to fix. That's typical for waterfall projects and it's common on fake Agile projects. But on real Agile projects with a good team, we've gone into production with very, very few defects. The only defects we had were ones where the production environment was set up differently from our integration test environment. Agile teams achieve very high quality by fixing defects as soon as they find them.
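The triage scheme and release gate Murray describes can be expressed in a few lines. The severity definitions and the "all sev ones and twos must be fixed" rule come from the conversation; the defect records and field names are invented for illustration:

```python
# Severity buckets as described in the episode.
SEVERITY = {
    1: "major functional problem, no workaround (application does not work)",
    2: "major functional problem with a workaround",
    3: "minor functional issue with a workaround",
    4: "trivial user experience issue",
}

def ready_to_deploy(defects):
    """Gate rule from the episode: no unfixed sev 1 or sev 2 defects may remain."""
    return not any(d["severity"] <= 2 for d in defects if not d["fixed"])

# Hypothetical defect log at the end of acceptance testing.
defect_log = [
    {"id": "DEF-101", "severity": 1, "fixed": True},
    {"id": "DEF-102", "severity": 2, "fixed": False},
    {"id": "DEF-103", "severity": 4, "fixed": False},
]
assert not ready_to_deploy(defect_log)  # the unfixed sev 2 blocks the release
```

The sketch also makes Murray's closing point visible: the stakeholders who shipped with 200+ known defects were effectively overriding this gate, not passing it.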

Shane Gibson 

I'd go one step further. The way you fix your quality is to make QA a team problem on day one. I assume that the outsourced provider on your waterfall project had a separate testing team? 

Murray Robinson 

They did. 

Shane Gibson 

And the developers got the requirements from the BA, then they wrote some code and threw it over the QA wall. The QA people were paid to find bugs, because that's what they focus on. They can find two types of bugs: a bug where the code didn't run, and a bug where the code didn't meet the requirements. But to do that, they have to re-interpret what the requirements were and what the developer's code is doing, then compare the two to see whether it passes a QA test or not. Now we've got three teams writing three different things, with each of those teams interpreting those artefacts in their own way to meet the acceptance criteria. Handoffs and conversations via documentation are the first thing we have to fix when we talk about QA in an Agile team.

Murray Robinson 

Well, the handoff behaviour was bad. This service provider had a development team that was handing off work to the test team. Their managers put a lot of pressure on the test team to pass things because they were running late. So they passed all sorts of rubbish on the basis that we'd find it in UAT and then they'd have more time to fix it. And as a result, when we were testing in UAT, we found thousands of defects.

Shane Gibson 

Let's say the QA team did two weeks of testing and found a couple of really bad sev one bugs that we need to fix before we go live. When the developers fix the bugs we ask the testers to retest them, and nobody wonders what that change has done to the other bits of the code. We might do some regression testing, but we tend not to test all the code.

Murray Robinson 

I've found that test teams in waterfall always do regression testing if they are allowed time to do it.

Shane Gibson 

Yes, and if the tests are automated then they push a button and the regression suite runs.

Murray Robinson 

They wouldn't be automated in a waterfall team. That's going to make it hard for them.

Shane Gibson 

There's no reason why a waterfall testing team couldn't automate the tests. It's not just Agile teams that should automate tests.

Murray Robinson 

I tried to get our Indian service provider to automate their tests. They put on an extra automation test team to do it. There was the dev team, the test team and the automation test team that automated things after the testers passed tests. But the developers and testers didn't run the automated tests, because it wasn't for them, it was for me. It was supposed to be for them but they didn't care. That didn't work. But I'll tell you, what does work well is having a definition of ready to develop. We have a feature or a story. It's got objectives, a description of the function and testable acceptance criteria. I get the QA to work with the BA, the Tech lead, and the Product Owner to make sure that the story is ready. Normally you write the requirements and then somebody else comes along and writes acceptance criteria later, but the best way to do it is to write testable requirements.
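Murray's point about writing testable requirements up front can be sketched as acceptance criteria that double as automated tests. The episode mentions Cucumber in a Java environment; this minimal Python sketch only illustrates the shape of the idea, and the sign-up story, function name and validation rules are invented:

```python
# Story: "As a visitor, I can sign up with a valid email address."
# The testable acceptance criteria, written before development, become the tests.

def validate_signup(email):
    """Hypothetical implementation under test."""
    return "@" in email and "." in email.split("@")[-1] and " " not in email

# AC1: Given a well-formed address, When the visitor signs up, Then it is accepted.
def test_accepts_valid_email():
    assert validate_signup("ada@example.com")

# AC2: Given a malformed address, When the visitor signs up, Then it is rejected.
def test_rejects_malformed_email():
    assert not validate_signup("not-an-email")

test_accepts_valid_email()
test_rejects_malformed_email()
```

Because the criteria exist before the code, the developer, the tester and the product owner are all reading the same Given/When/Then statements rather than three separately interpreted documents.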

Shane Gibson 

It's hard. Often you'll find that the testing skills within the core team are at the novice or practitioner level, because testing has been outsourced to another person or group. I'm a great fan of bringing in somebody with expertise and coaching-level skills in testing: skills in how to define and write good tests up front and automate those tests. What we're doing is increasing the maturity of the team members' testing and QA skills. That's what we should be doing. I agree with you that well-defined acceptance criteria in your definition of ready increase the quality of your product. Your developers are looking at those acceptance criteria as they're writing their code and testing the code against those criteria, because they know what they are. That's what most good developers will do if that information is available at the beginning.

Murray Robinson 

Of course, because it defines what the outcome is supposed to be. You were talking before about different people having different understandings. If you get a story ready together, then everybody has the same understanding of what it will look like when it's done. It produces a lot of clarity, which means you're much more likely to achieve what you said you were going to achieve within the sprint for that story.

Shane Gibson 

It's the same with the definition of done. If the team defines the things they expect to happen during development, then each developer is baking that in as they work. A lot of the architecture and technical failures we see will disappear, because the definition of done means the developers know what they're coding towards. We will see better quality because those things have been set out up front.

Murray Robinson 

Sure. I think you're done developing when the developer thinks they've achieved the acceptance criteria, you've got automated unit and acceptance tests, and a peer code review has been done for maintainability. Then I like to have the developer review the feature with the tester sitting in the team. Automated testing is essential for DevOps and CI/CD, but sometimes people can't automate all their tests. It's very useful to have a tester in the team who helps the team with quality assurance by defining ready and so on. But it's also helpful to get an independent test of what the developer has done. I find that developers get tunnel vision. They make assumptions, even if you've got well-defined acceptance criteria. They will test what they think of testing. But it needs independent checking by somebody else. That could be done by another developer, but other developers tend to be pretty bad at testing. They aren't thinking broadly enough; they're thinking technically. You need somebody else: it could be the product owner or it could be a BA. But I find that testers are good at thinking about what a feature is supposed to do and good at finding ways to break things. A different point of view is helpful.

Shane Gibson 

I'm going to agree with you and disagree with you in the same sentence. I agree that we often need expert or coaching-level testing skills in the team. If the team doesn't have those skills, then having a person sitting next to you to look at it and say, "What about this?" is a good behaviour. But I'm going to disagree with you that they should be a tester who never touches the keyboard. As developers we fall back into that behaviour of "I'm going to write some code and somebody else is going to test it", whereas the developers should be testing. But I agree that they need some help to improve their skills and to look broadly at what tests should be applied.

Murray Robinson 

A developer should be testing. I didn't say they shouldn't. It's just that developers usually have a narrow technical point of view about what they're doing. I'd like to have a QA as part of the development team to help. Scrum has three roles, the development team, the product owner and the Scrum master. But the development team does not mean software developers. It's any combination of skills required to achieve the outcome. Your development team could include a business analyst, quality assurance specialist, user experience designer, a front end developer, back end developer and somebody who's good with infrastructure. It doesn't matter. The combination of skills gives you the ability to do everything end to end well. I like to have that set up.

Shane Gibson 

It's semantics. I'm going to agree and disagree in the same sentence again. Each of those skills is important, but none of those skills is a role. I use a T-skills framework with teams to figure out where we have strong and weak skills, where we are doubled up and where we have skill gaps. We have people who have good business analysis and facilitation skills, people who have good coding skills and people who have good testing skills. There are people with good architecture skills, and people whose expertise is quite broad, who tend to show tech-lead type behaviour. But our goal should be that everybody has skills as strong as possible in each of those areas, so we get that breadth across the team. My view is that these skills are not roles. We don't have a BA role in the development team; we have one or two people on that team who are developers with strong BA skills.

Murray Robinson 

Developers are generally very specialised. It takes a lot of time and effort to learn how to be a good developer. And a good developer is likely to be a crappy BA. A developer could become a good BA but they are going to have to invest a lot in a skill set that they're weak in. Then they are not doing the thing that they're good at. That's a waste of time and talent. I agree with you that you want T-shaped people in an Agile team. But a team of people who are specialists with secondary skills is much more powerful than a team of generalists.

Shane Gibson 

I definitely agree. I'm not saying that you can get a team of unicorns where everybody's at coaching level for every skill. People have a strong speciality and some other skills they are developing. But what we're looking for is skills not roles. Once we say you're the BA, you're the developer and you're the tester then people start handoff behaviour.

Murray Robinson 

You seem to be very sensitive about this and I don't see it as being a problem.

Shane Gibson 

I've observed that when people have specific roles, we get handoff behaviour. If we have a tester on the team then everybody hands off testing to that person. That's the behaviour that we want to discourage in a team.

Murray Robinson 

There's always handoffs in a team and it's impossible to get rid of them, you want to reduce them.

Shane Gibson 

You want to minimise them, and you want to make them as consistent as possible.

Murray Robinson 

It's bad for a Scrum team to hand code over to a separate test team to do their testing in the following sprint and then fix the defects the sprint after. That's not very Agile. But I'm talking about a situation where you have a team with a few developers, a BA, a designer, a QA and a product owner: people who are called that because they've got 5 to 10 years' experience in that field. The BA is a fantastic analyst and a poor coder. They add a tremendous amount of value to developers by doing analysis well. I would have a problem if the developers took no responsibility whatsoever for the requirements because there's a BA there. If they know the requirements are crap and they build what they're told without question, then that would be bad behaviour. But I don't see that in Agile teams that have roles.

Shane Gibson 

Maybe it's a continuum. Handing off work to another member of your team as if they're an external team is the behaviour we want to discourage.

Murray Robinson 

I agree with that, but I don't see that behaviour. People have careers in these different specialties. People specialise in front end development, back end development, infrastructure, business analysis, design and testing. If you want good user experience design then you need to employ a good user experience designer. If you ask your developer to do your user experience design you're going to get something pretty bad.

Shane Gibson 

We're arguing semantics. For me, those semantics are important. You're hiring people with strong skills in those disciplines. You're not hiring them for that role. You're not hiring them to be the team UX designer, you're hiring them because they have strong UX skills that the team needs. For me, changing the way we talk about it ends up changing the team behaviour a lot.

Murray Robinson 

You're assuming that if you have a role in the team, then the team will engage in waterfall behaviour. I don't think that's true at all. Therefore, I don't see a problem in having roles in the team.

Shane Gibson 

I have seen that behaviour happen sometimes. Like you said in Scrum, the role is developer or a member of the development team. If we talk about people's skills, not their roles, then we're safe. We don't need to talk about roles because it's not a role. It's a set of skills.

Murray Robinson 

What do you think a role is? Product owner is a role.

Shane Gibson 

But that's different, isn't it? We've got a Scrum Master role, Product Owner role or member of the development team.

Murray Robinson 

It's only different because Scrum says it is but we don't have to do Scrum to be Agile.

Shane Gibson 

But we do hand over to a product owner on a regular basis.

Murray Robinson 

What do you mean by hand over?

Shane Gibson 

We ask them for the acceptance criteria and they give them to us. When we finish development we ask if the work meets their acceptance criteria. By defining those roles, we put in a natural air barrier and we get handover behaviour because of it. Same with a Scrum Master. The Scrum Master runs the Scrum ceremonies and provides Scrum guidance for us. By defining it as a role we naturally put an air gap in and we encourage handover behaviour. That's why I'm against talking about roles within a team and I prefer to talk about skills.

Murray Robinson 

If you're going to be logically consistent then you should object to the role of product owner and Scrum Master and only have developers.

Shane Gibson 

Well, I haven't got there yet. I haven't figured out how a development team can do the product ownership behaviour. But I can't see why they couldn't.

Murray Robinson 

These fine distinctions don't make sense to me. There's a need for specialised skills. There's a need for careers and there's no problem with it. What I object to is the hand over behaviour. This reminds me of our discussion about project managers. I don't have a problem with the project manager role. I have a problem with authoritarian project management behaviour. I thought we agreed on that last time. I don't have a problem with a QA specialist, I have a problem with a team that doesn't take responsibility for their quality. It's the handover behaviour that's the problem, not the name of the role.

Shane Gibson 

We agreed that the project manager's behaviour is important. You think it's ok to use the term project manager. I still don't agree with you on that. By using that term for that role, the behaviour comes with it. That's a thing that I see happening time and time again.

Murray Robinson 

This is nonsensical. You're putting a heavy load of assumptions on the idea of a role which I don't think is there. Let me explain the way I set up teams when I was running the delivery function for a digital agency. I had a whole lot of people in project teams and I had to move people around to make sure that each team had the skills they needed. I found that a team worked well when it had a product owner, a Scrum coach, four people who specialised in development, somebody who specialised in quality assurance, somebody who specialised in design and somebody who specialised in business analysis. I find that a very effective skill mix for an Agile software development team. If the quality person left, then somebody else in the team would step up and take on that function until we could get another QA person. I would expect the team to talk about it and decide amongst themselves that the BA can do it but the developers can't, because they are the bottleneck. If we get the BA to do testing for a while, we can still get a good flow of work through the team. But if the BA is the bottleneck, maybe the designer or one of the developers would volunteer to do it or share it. A developer might say, "We're not the bottleneck, so I will focus on automating the tests." I've had that happen. That makes a lot of sense. That's the way I expect it to happen.

Shane Gibson 

I definitely agree. What you're saying is that the team members who have the strongest skills in quality assurance will jump in and do testing tasks. Sometimes, a person who's not even the strongest in that skill set will pick it up because they're the freest person available. Those are all good behaviours, as far as I'm concerned. When I work with a team the next best person who's available does the work to unblock it. So we don't talk about roles, we talk about the skills on the team.

Murray Robinson 

If I was going to assemble a team like that I would advertise for a business analyst, a QA, a designer and four developers. I'd use those role names because I want people who have a lot of experience in those fields. I'd be looking for people who can be flexible and malleable but I would still be looking for a specialist. 

Shane Gibson 

Specialist skills. I agree.

Murray Robinson 

If you don't use a role name to describe these jobs, you're going to have a lot of trouble communicating the special skills you're looking for. It doesn't make sense for me to advertise for a Scrum developer who has 10 years' experience in quality assurance and testing but doesn't need technical development skills. That doesn't make any sense.

Shane Gibson 

But you wouldn't say that. You'd say, "I want a person who has 10 years of software development experience and strong testing skills."

Murray Robinson 

Well, then you're going to get a software developer who's got some experience in testing. People are not going to understand what you want and you'll get the wrong people. I would say that I want a QA to work in an Agile team. Someone who is flexible, T-shaped and able to help out with business analysis and automated testing. Someone who will help the whole team improve the quality of their work by defining the definitions of ready and done. Someone with skill in using Cucumber to automate integration tests in a Java environment. If I ask for that, then I'll be able to find somebody who will fit in well. I might ask them to show me their Cucumber BDD automation skills, but I won't ask them to do a Java code test because it's not necessary. If I define a QA role like that, then I'm going to get the right person, because people will know what we're looking for.

Shane Gibson 

Because you've described the skills you need perfectly but you haven't described a role.

Murray Robinson 

There's no difference in my mind. 

Shane Gibson 

To me there is, but let's agree to disagree on this one, because we're both saying the same thing. We are discussing my hatred of the word role, and how I prefer to use the word skills.

Murray Robinson 

Okay.

Shane Gibson 

But what we are saying is that QA skills are often light on a team and we need to make sure we beef them up and bring them into the process as early as possible. We need those skills to help the team define acceptance criteria so that they're clear and testable. If we do this early, we get a better quality product out the other end.

Murray Robinson 

We agree on that.

Shane Gibson 

Those skills should be in the team, they should not be in another team that we hand work over to.

Murray Robinson 

Absolutely, they should be sitting next to the person who is skilled in development. When the person who is skilled in development thinks that they have finished, they should ask the person who's skilled in testing, to double check their work and give them another point of view.

Shane Gibson 

Or they could have the person who is skilled in testing sit next to them as they're doing development and use it as a pair development process. So as they're writing code the testing person could say “What tests should we use to make sure that this KPI calculation is meeting the requirement?”

Murray Robinson 

That could help if the team thinks it's efficient. I often suggest developers do a walkthrough with the person skilled in testing before they finish development. A tester can often tell if something works or not without going through a formal testing process.

Shane Gibson 

And that comes back to whether you do pair programming or not. I've found that having two people with a mix of skills developing a piece of code gives a much better outcome than one person working on it and then engaging a second person for a walkthrough or peer review.

Murray Robinson 

It's a big topic of its own, we should put pair programming on the agenda for another time.

Shane Gibson 

Pair programming improves quality. It loops back to what patterns and practices you are using to increase the quality of every step. Don't think quality is testing at the end of the process.

Murray Robinson 

There's something seriously wrong with the idea of a hardening sprint at the end of a release cycle. SAFe has this. It's a sign of an acceptance of poor quality. You're assuming that quality is going to be so bad that you need a sprint or two at the end to fix all the things that you didn't fix as you went. I think you should fix a problem immediately, because it's much easier than trying to fix it later. Is that your experience?

Shane Gibson 

I'm not a big fan of hardening sprints. We often need a technical debt sprint to fix stuff that needs to be fixed. It should have been done in the initial piece of work, but it wasn't. Now we need to slice off some time from the development team to make changes. Those changes have nothing to do with a new part of the product or a new piece of value. We didn't do as good a job as we should have so we have to tidy things up. That technical debt is a cost. We lose development time to change things. We shouldn't have these problems but we do end up having them.

Murray Robinson 

But you shouldn't accept technical debt as you're going. I was a developer once. If a developer is in the code and they spot a defect, they can fix it in a few minutes. If somebody else finds a problem the next day, they've got to load that code back into their mind so they can understand the problem. That's not easy. It takes some time to get that code back into your head again, which adds an extra cost. If a developer is asked to fix a problem in code that somebody else wrote a few weeks earlier, then it takes 10 to 100 times longer to find it and fix it. It's much more efficient to fix all these problems straightaway. It's not worth delaying them. It's the same with technical debt. If you've got a security problem or a performance problem, you should ask yourself whether it could be a problem in production and whether it's important to the product owner. If we don't need the high performance yet, that's okay. But if it's a real technical debt thing, we need to fix it straightaway, because it's easier, more efficient and quicker to fix code while you're working on it than it is to try and fix it later.

Shane Gibson 

These are good indicators. If we see a team having hardening sprints and technical debt sprints, then we know we've got a problem with quality and we need to change something to make it better. I'm undecided whether having windows where the developers can sit uninterrupted increases quality or not. We know that task switching, when you've got to unload what you're thinking about, think about the next problem and then reload what you were working on, slows developers down. But we also know that constant collaboration gives us a better product. I haven't worked out whether it's better to have days or blocks with no meetings so that the development team can focus without interruptions.

Murray Robinson 

The team should be able to decide this for themselves. They know what's working and what isn't. They can discuss it in their retros and try things out. For instance, I always suggest that developers do face-to-face (or virtual) code reviews with another developer on the team. But the team should decide for themselves who does it. That seems to work out.

Shane Gibson 

A coach could suggest some things they've seen other teams do and ask if they'd like to try or if they've got something else in mind. The team can solve it themselves, or refer to a bunch of patterns and practices that other teams have used. Then they can experiment with them to see which one works for them.

Murray Robinson 

Two things I'm thinking of before we wrap this up. One is that the benefit of the quality practices we're talking about is much greater than their cost. It's a tremendous saving in time, money and effort to do things properly rather than produce crap and fix it later. This is one reason why waterfall is no good. Waterfall development processes find an enormous number of quality issues at the end, because nothing's tested until the testing phase, and the cost of defects is enormous at that stage. If a project is going to blow up, it's going to blow up during testing. That's when you discover it's going to take twice as long and cost twice as much as you thought it would. The benefit of building quality in from the beginning is high, so there's no reason not to do it. I take it you agree with that before I go on?

Shane Gibson 

I do indeed.

Murray Robinson 

The second thing is that there should be a role and a career path for quality assurance specialists. It's not about testing other people's stuff. It's about helping the whole team improve their quality. Helping them improve the way that they do things by having definitions of ready and automating testing. It's about finding systematic problems and helping the team improve. It's very helpful for organisations to have a structure where the people with specialised quality assurance skills get together regularly to share ideas and information with each other. It's good for them to have a mentoring and apprenticeship programme. It's good for them to have a community of practice to share knowledge of things they've learned on their team.
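To make "automating testing" concrete at the smallest scale, here is a hedged sketch in Python (the `discount_price` function and its rules are invented for illustration, not something from the conversation): the code ships together with its own automated check, so a defect is caught the moment the code is written rather than weeks later by a separate test team.

```python
def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount, rounded to cents.

    Rejects discounts outside 0-100% so a bad input fails loudly
    at development time instead of corrupting data in production.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_price() -> None:
    # Automated checks the team runs on every change, e.g. via pytest or CI.
    assert discount_price(100.0, 10) == 90.0   # ordinary discount
    assert discount_price(50.0, 0) == 50.0     # no discount is a no-op
    assert discount_price(80.0, 100) == 0.0    # full discount bottoms out at zero


test_discount_price()
```

The point is not the arithmetic but the workflow: the test lives alongside the code, is written by the same team in the same sprint, and runs automatically, which is what makes "done" mean "tested".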

Shane Gibson 

Or a guild. That's why I liked the term guild when it came out. It has that sense of a carpenters' guild or a plumbers' guild. If you need to increase your maturity in a specialist skill, you can find people with the same skills who can help you. I'm a great fan of that. Baking quality assurance into the team is the important part. I remember a waterfall team many years ago where a developer was always proud to say that 70% of his code worked and the testers always found the part that didn't. That's not the behaviour we want. With an Agile mindset we want everyone to be responsible for the quality of their own work, and we want to help them build the skills they lack so that others can support the effort when they need it.

Murray Robinson 

Quality is the whole team's responsibility. Even if somebody is a specialist in that area, it's still the whole team's responsibility. It's wrong for people in the team to say, “The tester is responsible for quality”. The whole team has to be responsible for quality.

Shane Gibson 

I'm sure we can find a TQM quote that says that quality is everybody's responsibility.

Murray Robinson 

"Quality is free." There's another one I agree with.

Shane Gibson 

So, other disciplines like manufacturing have learned that fixing a problem at the point where it occurs makes everything more efficient and produces a better quality product.

Murray Robinson 

The review and retrospective process in Scrum also builds quality into the process. That's a continuous improvement process which is very helpful and important.

Shane Gibson 

Yet, we still see people adopting an Agile way of working where the development team hands over the work to the testing team.

Murray Robinson 

I've seen that recently with a big four consulting company that was working on a huge project for a client. Somehow they thought it was Agile to do four months of design sprints, then four months of development sprints, and then months of testing sprints. A lot of people are still surprised by the idea that testing has to be done in the sprint alongside development, and that work isn't done until it's tested. People are astonished. People tell me that they've been doing Agile for years and they've never done that.

Shane Gibson 

Shall we just say it's hard?

Murray Robinson 

It's not hard. These things are easy. What's hard about it?

Shane Gibson 

Well it seems that everybody’s struggling with it.

Murray Robinson 

It's a structural issue. It's only hard because you've got an empire-building development manager and an empire-building test manager fighting over control of resources.

Shane Gibson 

And they like to hire testing roles.

Murray Robinson 

There is nothing wrong with testing roles.

Shane Gibson 

I'm going to close out on that one. I'll have the last word.

Murray Robinson 

That was the No-Nonsense Agile podcast from Murray Robinson and Shane Gibson. If you'd like help with Agile, contact Murray at evolve co. That's evolve with a zero. Thanks for listening.