Quality, the reactive measurement

Having worked in IT for quite some time now, I've seen teams dig more than once into quality and how much it would improve by doing this or that. The main issue was that, no matter what we did, it wouldn't "improve quality", because quality is the client's perception of our work, and that often has more to do with how we market the product than with the amount of good work we put into it, right?

Let's try to clarify this by going through what I've learned from experience.

What is quality?

Many times you may have found managers concerned about the quality of the product their teams are delivering, asking the teams to take ownership of it; but right away these self-organized teams find themselves asking: what is quality, really?
Wikipedia will tell you that quality has a pragmatic interpretation as the non-inferiority or superiority of something; it is also defined as being suitable for its intended purpose (fitness for purpose) while satisfying customer expectations. Quality is a perceptual, conditional, and somewhat subjective attribute and may be understood differently by different people.
So it might be difficult to create a standard measurement that covers all of its approaches, but we can analyze those approaches separately in order to take action.

In 1984, David Garvin, a specialist in the quality matter writing in the MIT Sloan Management Review, differentiated five approaches to quality:
  • Product approach
  • Production/Process approach
  • Transcendental vision
  • User approach
  • Value approach
Product quality: It is quantifiable in the ingredients or attributes the product has, like rugs measured in knots per inch: the more knots, the higher the quality. A software product could be measured by the number of features it has, or by its benchmark values. Under this approach you could compare two software products to define which one is better.

Quality in the production, focus on the process: The process reliably does what its creators envisioned it to do: controls, standards, and models that, well implemented, mitigate the probability of risks and ensure conformance to requirements. You can easily imagine organizations that produce a low-quality product with high procedural standards; take a world-famous fast food company, for instance: no one can say they lack quality in what they create. This approach most commonly applies to manufacturers, which expect the same result over and over again, although we'll come back to some of this later in this entry, since we can (and should) aim to be predictable with our process if we want efficiency in the time and budget invested in it.

Transcendental vision of quality: the quality of a product or service as an inherent characteristic that is both absolute and universally recognizable. Think of brands that you immediately recognize as makers of good-quality products; the fact that you make this association in your mind is due to this approach to quality. As we say in Argentina, "build yourself a reputation and you'll be able to sleep on your laurels". Spoiler alert: we won't be analyzing this approach.

The user approach: This approach is based on the premise that quality is "in the eye of the beholder," where the beholder is the user. According to this approach, quality is the degree to which a product or service satisfies the user's needs, wants, or preferences. The issue here is that you won't find a standard, and you'll need to walk this path alongside the user who will be measuring you. But that is something we aim to take care of in agile, up to a certain point (remember, don't give in to feature blackmail, as in "give me the cherry on top or I won't use the product").

Value approach: Features/Cost. The more the benefits outweigh the costs, the more a product or service increases in value. Products or services with higher value enjoy higher quality. As a result, the product or service that performs best may not provide the highest value, and so will not be of the highest quality.

Quick reality check: Why testing?

Let's imagine you are Henry Ford and have a car factory. If a client buys your car and the brakes don't work, they will likely crash, and if they survive they will not buy from you again (there goes the transcendental vision of your product). So validating that the product you are building works properly starts to become key.
Another proven fact is that the sooner you find a problem, the lower the cost of fixing it.
Testing the brakes right after you've placed them in the bodywork and fixing them right away might be cheaper than waiting for the whole car to be done, and that in turn is cheaper than leaving it for the customer to test.

When do you talk about quality?

You talk about quality when something needs to be compared in order to make a statement. In software development it is used to compliment or berate the work done. You could say the quality of this or that is good or poor with regard to what you were expecting, compared to what you have right now. But in any case you will always be analyzing something that is already done. This means you'll gather feedback reactively, on the end product.
Now, if you are already delivering to production at short intervals, the fact that this is reactive is not really an issue: you are adjusting and learning with each small increment. But you'll still need some quality control.

Just to be clear, Quality Control is the process you run right after you've created a functionality, validating that it works according to the specifications; it's testing, plainly speaking. You have several ways to do testing, both manual and automated: integration, regression, and so on.
Quality Assurance, on the other hand, is the whole set of working agreements and processes you place along the supply chain that improve the odds of delivering a working product according to specifications, all the way until you deliver the product to the market. Among them you will find quality control, as well as other things like code reviews, pair programming, code merging procedures, environment health agreements, test coverage and performance monitors, SOX regulations, etc. Most of them can be measured proactively, and they will help you mitigate incidents, which is always cheaper than letting them happen, as we noted before. The only con is that these are indirect controls over the actual measurement of quality: doing more of them won't automatically ensure a perfect product.

Are there any good reactive measurements?

When you have a project in progress, quality is usually directly related to both the process and customer satisfaction. Things that might change the perception of quality are (among others):
  • the number of defects captured by the client/user (in UAT or even in prod)
  • the amount of production support needed (in time and/or impact)
  • user satisfaction
Remember, these are things that might be useful to measure, but they are not metrics yet; you'll need to define them.
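As a sketch of what "defining a metric" could look like, here is one possible way to turn the first bullet into a number. The record fields and environment names are assumptions for illustration, not a standard; adapt them to whatever your defect tracker actually exports.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical defect record; "found_in" marks where the defect surfaced.
@dataclass
class Defect:
    found_in: str   # "dev", "uat", or "prod"
    reported: date

def escaped_defect_rate(defects: list[Defect]) -> float:
    """Share of defects that escaped to the client/user (UAT or prod)."""
    if not defects:
        return 0.0
    escaped = sum(1 for d in defects if d.found_in in ("uat", "prod"))
    return escaped / len(defects)

# Made-up sample data for one delivery cycle.
defects = [
    Defect("dev", date(2023, 1, 5)),
    Defect("uat", date(2023, 1, 9)),
    Defect("prod", date(2023, 1, 20)),
    Defect("dev", date(2023, 2, 2)),
]
print(escaped_defect_rate(defects))  # 0.5
```

The point is not this particular formula but that the team agrees on one explicit definition, so the same number means the same thing every time it is gathered.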

What is important about quality for the company, from the product-building side

Get early feedback, and evolve quickly

If, as we discussed, quality is a reactive measure, we might as well try to reduce its cycle time to get results earlier. Here are some tools you can use:
  1. In terms of product evolution: POC, prototyping, MVP.
  2. In terms of defect mitigation with focus groups: Beta testing (and Alpha testing, for that matter).
  • POC or Proof of Concept: often a sub-project that helps you define whether something can be done or not. It has no focus on usability.
  • Prototype: closer to the final product; used to focus on how it can be done, to orient and ease your development process, finding early defects and blockers.
  • MVP or Minimum Viable Product: the minimum product actually delivered to the market; with it you can get early feedback from all your customers to define how to keep evolving it.
  • Beta testing: getting a really small set of market users to start trying your product, so that you get earlier feedback from the actual people who will interact with it. (Alpha testing is releasing your product in a similar-to-production environment for your own IT teams to use and give you feedback.)

Have feedback from the user, but focus on your vision

Henry Ford would argue that if users' opinions and feedback were all he needed to care about, he would still be looking for faster horses.
Feedback from the users should be one of your main orientation tools, to keep the product useful to those who will work with it. But you need to keep in mind the vision you have for the product, and regularly reach a consensus with the client on what they can expect in each short delivery, leveraging the user approach.

Measure as a first step for continuous improvement, and compare it against itself

If you say you have 55% test coverage, is that good or bad? By itself it won't tell you anything; you need to analyze the trend. If last time you checked you had 50% coverage, you are improving your process. Get the teams to focus on that with the scientific method.
Make sure you don't set a particular target value for a metric; don't turn it into an objective.
If you do, you will get people aiming at the objective instead of the product goal, for example writing useless tests just for the sake of making the metric look good, fast.
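The "trend, not absolute value" idea above can be sketched in a few lines. The coverage history below is made-up sample data; the point is only that each reading is judged against the previous one, not against a fixed target.

```python
def trend(readings: list[float]) -> str:
    """Compare the latest reading against the previous one."""
    if len(readings) < 2:
        return "not enough data"
    latest, previous = readings[-1], readings[-2]
    if latest > previous:
        return "improving"
    if latest < previous:
        return "declining"
    return "stable"

# Hypothetical % test coverage per sprint.
coverage_history = [48.0, 50.0, 55.0]
print(trend(coverage_history))  # improving
```

Replace the naive last-two comparison with a moving average if your readings are noisy; either way, the comparison stays internal to the team's own history.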

How to start

Asking the team to improve their quality without specifying what you expect to see from them might get you a broad set of potential answers, some of which could be far from what you originally intended.
Here's a set of steps you can go by to make this happen:
  1. Define a vision of quality. Have the teams participate in, and challenge, your original idea of quality.
  2. Get the team to define how they want to measure it. Empowering the team to define the metrics is crucial to making this continuous-improvement process regular and stable.
  3. Set a stability-analysis period. The numbers you are gathering need to be stable as well; make sure they are stable enough to start experimenting on.
  4. Start PDCA (Plan-Do-Check-Act) cycles.
