Hardly a day passes without my being asked what I’m doing to “improve quality”.
Nine times out of ten the asker does not have a clear definition of quality. Such a definition could range from “improving patient experience” (something devilishly difficult to define, never mind measure) through to improving “outcomes” (equally difficult to pin down). Not infrequently, the notion of “cost” is left out of the construct of quality altogether (that’s a different department), or an assumption is made that “if quality improves, cost will increase”.
My preferred starting points for defining the term are the classic pieces from Donabedian and Maxwell – often imitated but rarely bettered. In practice, however, there are many different views of what “quality” is. These range from an organisational perspective on all care through to an individual clinical perspective, a patient satisfaction perspective, or an outcome perspective. All four constructs should be built into a definition of quality.
Then there is the question of incorporating cost into efforts to improve quality. This enters the territory of value. Critically, both the benefits of improved quality AND the implementation cost of the effort to improve that quality need to be factored into any judgement about whether something achieves more value.
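Put crudely – and purely as an illustrative sketch, where B and C are my own shorthand rather than terms from any formal framework:

```latex
\Delta\,\text{value} \;=\; \underbrace{B}_{\text{benefit of improved quality}} \;-\; \underbrace{C}_{\text{cost of the improvement effort}},
\qquad \text{worthwhile only if } B > C .
```

The point is simply that an intervention which improves quality can still destroy value if the effort required to implement it costs more than the benefit it delivers.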
The “how” to improve quality is largely the territory of Quality Improvement. There is a large and very varied body of literature on Quality Improvement, and the strategies that might be adopted to “improve things”.
These tend to focus on a traditional approach to health systems where the model is “if only we had more knowledge then we’d all do the ‘right’ thing”.
Obviously this is not always the case, as evidenced by our less-than-perfect outcomes.
The standard approach
The standard approach to “improving quality” is often the passive “measure our performance against a benchmark, develop a local pathway and send it out to our practices” model.
The “send out the pathway” approach achieves limited to no change – all the evidence points to a 2–4% relative change. With an active approach, some have achieved a 33% change. I’ve written about this comprehensively elsewhere.
The literature on quality improvement is vast and complex. Some key references (in my opinion) are appended.
It is also important that we look beyond the “normal” factors where we target most of our improvement efforts, and consider the influences on behaviour beyond the things that NICE (and other clinical guideline bodies) address – which is, by and large, knowledge alone.
Here are 15 thoughts on QI skills, methods and context beyond simple knowledge improvement.
1. Skills – do I have the right skills to implement the guideline? Do I know how to deal with a “difficult case that doesn’t quite need referring”?
2. Environment – do I have enough time in a standard consultation to cover off all the things I need to?
3. Social cues and social norms – what are my peers doing (or not doing)? What’s happening in the broader social environment? “All your peers are doing…”
4. Self-efficacy – do I believe I have the ability to improve? Is improvement something within my control, something I can influence?
5. Data –
- Is the system helping me? Is it supplying me with data to show how my practice compares to the best – and by how much I’m improving?
- Is this data public and benchmarked (with appropriate caveats)?
- Use data and tools to link to audit and revalidation – make the data serve multiple purposes.
- Use computerised searches to automate population management:
- Chunk populations into manageable chunks.
- Make it easy, and automated.
- It helps you find, and efficiently focus on, where to look.
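The “chunking” idea above can be sketched in a few lines. This is a minimal illustration only – the field names (`nhs_number`, `risk_score`) and batch size are made up for the example, not drawn from any real clinical system:

```python
# Sketch: split a patient register into manageable review batches,
# prioritising the highest-risk patients first.
# Field names and values are illustrative, not a real schema.

def chunk(population, size):
    """Split a list of patients into batches of at most `size`."""
    return [population[i:i + size] for i in range(0, len(population), size)]

register = [{"nhs_number": n, "risk_score": s}
            for n, s in [("001", 0.9), ("002", 0.2), ("003", 0.7)]]

# Review the riskiest patients first, in batches a clinician can work through
by_risk = sorted(register, key=lambda p: p["risk_score"], reverse=True)
batches = chunk(by_risk, 20)
```

The same pattern works whatever the search criterion – the point is that an automated search plus small batches turns an unmanageable list into a workable clinic task.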
6. Be mindful and wary of the limitations of routinely available administrative data. For example, QOF (the Quality and Outcomes Framework) is a great mechanism for paying doctors. It is less good for epidemiology, QI or giving a detailed understanding of a system.
7. Benchmark, audit and feedback – once you have good data that truly represents your target(s) of interest:
- Benchmark, test, retest.
- Show how people are improving relative to their peers.
- Use data to set up a facilitated quality competition.
- Use tools like the Achievable Benchmark of Care to set targets relative to the best.
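As a rough illustration of the pared-mean idea behind the Achievable Benchmark of Care: rank providers by performance, take the top performers until they collectively cover roughly 10% of all eligible patients, and pool their results as the benchmark. (The published method also adjusts performance fractions for small denominators, which this sketch omits.)

```python
# Simplified sketch of the Achievable Benchmark of Care "pared mean".
# practices: list of (numerator, denominator) pairs per practice,
# e.g. (patients anticoagulated, patients eligible).

def abc_benchmark(practices, coverage=0.10):
    """Pool the best performers covering at least `coverage` of patients."""
    total = sum(d for _, d in practices)
    ranked = sorted(practices, key=lambda p: p[0] / p[1], reverse=True)
    num = den = 0
    for n, d in ranked:
        num += n
        den += d
        if den >= coverage * total:
            break
    return num / den

# Three illustrative practices: the best performer alone covers >10% of
# eligible patients, so it sets the benchmark here.
print(abc_benchmark([(45, 50), (30, 40), (10, 30)]))  # prints 0.9
```

The resulting target is demonstrably achievable – real peers are already delivering it – which is precisely what makes it motivating rather than demoralising.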
Audit and feedback reports – some thoughts
- Format – show how you compare, remind readers of the key clinical messages, spell out the consequences for patients, offer top tips, include blank action plans.
- Ensure depth of reach in the practices. Don’t just send a report – follow it up, and keep at it. Some practices really engage with reports, and use them widely and deeply.
- Two things seem to help: 1) administrative support, and 2) small numbers of patients (e.g. high-risk patients) – which underscores the importance of chunking populations into manageable chunks.
- Correct the disconnect between practice managers and clinicians on what’s most important. Be mindful that divergent views within practices also have a bearing on “reach”.
- If there is a knowledge deficit, a report works to improve the knowledge.
- Benchmarking data works by itself – especially one’s own improvement over time and compared to others – but it needs to be achievable and motivating. In the absence of improvement, benchmarking reports over time can be demotivating.
- The data has to fit with the other population-management machinery in the practice – e.g. QOF.
8. Tools and system – is there a standard template or pathway that will suit the vast majority of my consultations? Is it available within a very small number of clicks? Can it be boiled down to a single side – “if I remember nothing else, I must remember this”?
9. Changing patient expectations
- Are patients demanding of me the care they should be getting? To what extent is someone modelling and influencing patient expectations so they know what they should demand?
- Do the PATIENTS in a poorly performing system KNOW they are in a poorly performing system?
10. Is “it” (the target) seen as a “safety” issue? People pay more attention to “safety” than they do to “quality”; it is seen as more “urgent” and “must be sorted”.
11. “Never events” and “always events” – we are familiar with “never events”, things that should never happen. Should we also explore the notion of “always events” – things that should always happen when there is an indication?
12. Flipping the indicator – for example, instead of considering who SHOULD be anticoagulated, consider which groups should NOT be anticoagulated, and make the default assumption to anticoagulate unless there is a good reason not to.
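The logic flip can be sketched as follows. The exclusion reasons here are purely illustrative placeholders, not clinical guidance:

```python
# Sketch of "flipping the indicator": the default is to treat, and only an
# explicit, recorded exclusion flips the answer to no.
# Exclusion reasons are illustrative, not clinical advice.

EXCLUSIONS = {"active bleeding", "patient declined", "palliative care"}

def should_anticoagulate(recorded_exclusions):
    """Default yes; only an explicit recorded reason overrides."""
    return not (set(recorded_exclusions) & EXCLUSIONS)

print(should_anticoagulate([]))                    # prints True
print(should_anticoagulate(["patient declined"]))  # prints False
```

The design point is that the burden of evidence moves: an unrecorded reason no longer silently leaves a patient untreated, it surfaces as a default-positive prompt.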
13. Harness the power of positive deviance.
- As opposed to focusing on the negative.
- Focus on what goes right rather than what goes wrong – a proactive approach.
- Allow solutions to evolve over time.
- Focus on the development of well-performing TEAMS without reason for “managerial intervention”.
- Promote “effectiveness” rather than focusing on the negatives of variability.
- Consider improvement over time, as opposed to a point estimate of “how bad things are”.
14. Actively support units and communities of practice.
Don’t just provide tools and data. Go and visit them, understand their world and pressures. Use visits as an opportunity for knowledge facilitation and sharing.
On educational outreach:
- Fit into what is already rolling in QOF and other mechanisms.
- Focus on who needs to be involved in what education.
- For anticoagulation the emphasis probably needs to be on GPs; for diabetes, mainly nursing.
- Riding the wave of new guidance is a good bet, and well supported by practices.
- Use outreach to reinforce the messages of audit and feedback.
- Identify and address barriers as a group within the practice.
15. Be specific with prompts in clinical decision support software.
- Nuance them to whether a practice nurse is running a diabetic clinic or a GP is in morning surgery.
- Or use differential prompts for high- and low-achieving practices.
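A minimal sketch of what context-specific prompting means in practice. The roles, clinic types and prompt texts here are all illustrative assumptions, not any real system’s configuration:

```python
# Sketch: tailor a clinical-decision-support prompt to who is seeing the
# patient and in what setting. All strings below are illustrative.

def choose_prompt(role, clinic, practice_tier):
    """Return a prompt suited to the consultation context."""
    if role == "nurse" and clinic == "diabetes":
        # A dedicated clinic has time for the full structured template
        return "Full diabetes review template (foot check, HbA1c recall)"
    if role == "gp":
        # A time-pressured surgery gets a one-line nudge...
        base = "One-line reminder: check anticoagulation indication"
        if practice_tier == "low":
            # ...with extra scaffolding for lower-achieving practices
            return base + " + link to local pathway"
        return base
    return "Generic prompt"

print(choose_prompt("gp", "morning surgery", "low"))
```

The branching is trivial; the design point is that one undifferentiated prompt serves neither context well, while a couple of context checks make the tool fit the consultation it interrupts.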
Appendix – taxonomy of QI strategies.
Powell et al. neatly summarised the different approaches that might be taken. Figure 2 of the Powell paper provides a taxonomy of implementation strategies (appended below).
Local needs assessment
Readiness for change
Identification of barriers
Visit other places
Local buy in and opinion leaders
Develop local relationships
Get formal commitment
Develop academic partnerships
Develop local materials
Develop a glossary of implementation
Conduct educational events
Make training dynamic
Conduct outreach visits
Use train the trainer strategies
Ongoing consultation – access to “local experts” to discuss “difficult or atypical cases”
Inform through local opinion leaders
Shadow other clinicians
Alter financial incentives to stimulate the result you want
Use capitation or block budgets
Revise professional roles
Create new clinical teams
Change service sites
Change physical structure and equipment
Facilitate relay of clinical data to providers
Quality management strategies
Develop quality monitoring system
Develop tools to enable quality monitoring
Audit and provide regular feedback
Use advisory boards and work groups
Obtain and use patient feedback
Centralise technical assistance
Provide clinical supervision
Intervene with patients directly to enhance uptake – do patients know they should be offered anticoagulation?
Purposefully review and re-examine implementation
Conduct small scale cyclical change – PDSA cycles
Capture and share local knowledge
Organise clinician implementation team meetings – how are we doing?
Policy context strategies
Change accreditation or membership requirements
Change liability laws
Create or change credentialing or licensing standards
Michie et al (ref below) provide an excellent alternative framework for behaviour change as can be applied to QI. They encourage us to focus both on internal and on external factors that modulate behaviour. Table 1 is especially useful in helping us consider how psychological constructs can be harnessed to improve practice.
Donabedian. Milbank Q. 2005;83(4):691–729. doi:10.1111/j.1468-0009.2005.00397.x
Maxwell. Br Med J (Clin Res Ed). 1984;288(6428). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1441041/
Powell et al. A Compilation of Strategies for Implementing Clinical Innovations in Health and Mental Health. Med Care Res Rev. 2012;69(2):123–157. doi:10.1177/1077558711430690. http://www.ncbi.nlm.nih.gov/pubmed/22203646
Michie et al. Qual Saf Health Care. 2005;14:26–33. doi:10.1136/qshc.2004.011155. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1743963/
Lawton et al. Positive deviance. BMJ Qual Saf. doi:10.1136/bmjqs-2014-003115. http://qualitysafety.bmj.com/content/early/2014/07/21/bmjqs-2014-003115.full