Quality is notoriously hard to measure, with the result that many publishers don’t measure it at all. The only answer we have is ‘did the book sell?’ Here’s the tale of how it went one time I tried to embed quality in business planning.
Quality is always a divisive topic. Editors are (as I’ve said before) quality people. They make sure that what is published won’t fail for foreseeable reasons related to the content. Marketing and sales are also, of course, quality people. They’re there to make sure that what is published finds suitable customers and will enhance the company’s brand.
The problem is, of course, one of cost. Most publishers can't afford the time and money needed to guarantee perfection in product or positioning (if such a thing were even possible). Most teams in publishing are too busy to devote the time needed for every product to reach its full potential.
The nature of a publisher’s business, then, is to balance product quality against price so that its customers feel that they are getting value while the publisher makes enough revenue to survive.
In striking this balance, it’s important to understand what quality actually is. And an important starting point is to agree what it is not.
Quality is not an absolute – it’s not an objective reality that everyone can clearly agree on.
Indeed, it’s extremely difficult actually to measure quality. It’s often a matter of ‘I know it when I see it,’ which makes it hard for teams to be sure both that they’re heading in the right direction and that they’ve gone far enough.
This isn’t to say that quality is entirely subjective. There are some aspects that can be measured and controlled, and these include some of the most important aspects of a publication.
In one of my previous jobs, the editorial team I was leading had a problem. It was a fairly common one – management’s only definition of quality was that the product be ‘fit for market’. This was astonishingly vague, of course, because there was no guidance on what the market needed or how to determine whether a product was, in fact, fit.
So, having failed to get leadership from outside the team, we spent time coming up with our own framework for what quality might mean. We ended up with a mindmap covering a range of headline areas (like ‘physical product’, ‘language’, ‘cultural issues’ and ‘design’), each broken down to specific issues that could be measured and thus managed.
The key was being clear with ourselves about how we could actually measure these different aspects. Something like ‘clarity’ is both subjective and impossible to measure. But ‘language level’ does have reasonably objective schemes of measurement. The Council of Europe, for example, has produced the Common European Framework of Reference (CEFR), and the USA has the Interagency Language Roundtable scale (ILR). Even Microsoft Word has a built-in reading-age checker.
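As an illustration of how mechanical these measurements can be, here is a rough sketch of the Flesch-Kincaid grade-level formula – one of the standard readability measures behind tools like Word’s reading-age checker. The syllable counter is deliberately crude, and the function names are my own; this is an approximation, not a reference implementation:

```python
import re

def count_syllables(word: str) -> int:
    """Crude estimate: count vowel groups, dropping a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

The point is not the particular formula but that ‘language level’, unlike ‘clarity’, reduces to counts you can check automatically and track across a list.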
Similarly, issues under cultural sensitivity are often reasonably clear cut. Products aimed at a substantially Islamic market avoid pictures of pigs or unclothed human beings, for example. In some countries, showing the soles of your feet is offensive, so such photos would be avoided. The key here is knowing which markets your product will be sold into!
Having a plan for measuring quality was the first step but, as I said above, quality is not an absolute. ‘Fit for market’ is actually a reasonable guideline – it’s just not specific enough by itself to be useful. For the business to agree the quality needed for each product, and hence to manage it, we needed to do two things: agree which aspects of quality mattered for each product, and rank them in order of importance.
The aim was to build a chart that could be included in the approvals process for every project. When bringing a project for approval, the proposer would indicate the key areas where quality was important for this product (based on market knowledge) and rank them in order of importance.
With such a chart, the operations teams would be much more able to control projects as they went along. If the schedule was threatened by some unforeseen event, they would know which aspects of quality could be allowed to slide and which could not. And if a project had a particularly limited budget, they knew where to focus their efforts.
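The logic of such a chart can be sketched very simply. The area names and rankings below are hypothetical examples, not the ones we actually used, and `negotiable_aspects` is my own illustrative helper:

```python
# Hypothetical quality chart for one project:
# each headline area ranked by importance (1 = most important).
quality_chart = {
    "cultural sensitivity": 1,
    "language level": 2,
    "design": 3,
    "physical product": 4,
}

def negotiable_aspects(chart: dict[str, int], protect_top: int = 2) -> list[str]:
    """Under schedule or budget pressure, the top-ranked aspects are
    protected and the rest are the ones that could be allowed to slide."""
    ranked = sorted(chart, key=chart.get)
    return ranked[protect_top:]
```

With a ranking like this attached at approval time, the decision about what to sacrifice when a project runs into trouble becomes mechanical rather than a panicked judgment call.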
These conversations about priorities cannot happen within one team. They have to involve the whole business, which means securing company-wide commitment and agreement that the exercise actually matters.
And that was where the whole thing fell down. Because, although we made excellent progress in drawing up our quality metrics, and even though we did have conversations with other teams to fit quality metrics to customer needs, we never achieved enough management buy-in to actually get the plan implemented.
The editorial team had identified a problem that affected the whole business and proposed a solution to it. We had even included the Sales and Marketing teams to try to make sure that what we were proposing was possible and reasonable.
Our problem was one of management. As a team, we didn’t have enough clout with senior management to push through the (small) changes to our processes that would have been needed. And the reason was, I think, that those managers either didn’t realise or didn’t care that our publishing programme was largely unmanageable. To that point, the editors had dealt with it by working extremely hard to make sure that all products avoided all quality issues (as far as we could). But with the workload expected to increase significantly, that didn’t seem to be an option any more.
Which leaves this blog post closing with questions. Have you successfully managed to incorporate quality into your business planning? Have you got people to agree what they actually mean by ‘quality’? Do you even think it’s important?